[
  {
    "path": ".github/FUNDING.yml",
    "content": "github: Fs02\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "version: 2\nupdates:\n  - package-ecosystem: \"gomod\"\n    directory: \"/\"\n    schedule:\n      interval: \"daily\"\n"
  },
  {
    "path": ".gitignore",
    "content": ".env\nbin\n!bin/README.md\ndata/\n"
  },
  {
    "path": ".tool-versions",
    "content": "golang 1.19\n"
  },
  {
    "path": ".travis.yml",
    "content": "language: go\ngo:\n  - \"1.17.x\"\n  - \"1.18.x\"\n  - \"1.19.x\"\nenv:\n  - COVER=-coverprofile=c.out\nscript:\n  - go test -race $COVER ./...\n  - curl -L https://codeclimate.com/downloads/test-reporter/test-reporter-latest-linux-amd64 > ./cc-test-reporter\n  - chmod +x ./cc-test-reporter\nafter_script:\n  - ./cc-test-reporter after-build --exit-code $TRAVIS_TEST_RESULT\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2020 Muhammad Surya\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "Makefile",
    "content": "export RELEASE_VERSION\t?= $(shell git show -q --format=%h)\nexport DOCKER_REGISTRY\t?= docker.pkg.github.com/fs02/go-todo-backend\nexport DEPLOY\t\t\t?= api\n\nall: build start\ndb-migrate:\n\trel migrate\ndb-rollback:\n\trel rollback\ngen:\n\tgo generate ./...\nbuild: gen\n\tgo build -mod=vendor -o bin/api ./cmd/api\ntest: gen\n\tgo test -mod=vendor -race ./...\nstart:\n\texport $$(cat .env | grep -v ^\\# | xargs) && ./bin/api\ndocker:\n\tdocker build -t $(DOCKER_REGISTRY)/$(DEPLOY):$(RELEASE_VERSION) -f ./deploy/$(DEPLOY)/Dockerfile .\npush:\n\tdocker push $(DOCKER_REGISTRY)/$(DEPLOY):$(RELEASE_VERSION)\n"
  },
  {
    "path": "Procfile",
    "content": "web: bin/api\n"
  },
  {
    "path": "README.md",
    "content": "# go-todo-backend\n\n[![GoDoc](https://godoc.org/github.com/Fs02/go-todo-backend?status.svg)](https://godoc.org/github.com/Fs02/go-todo-backend)\n[![Build Status](https://travis-ci.com/Fs02/go-todo-backend.svg?branch=master)](https://travis-ci.com/Fs02/go-todo-backend)\n[![Go Report Card](https://goreportcard.com/badge/github.com/Fs02/go-todo-backend)](https://goreportcard.com/report/github.com/Fs02/go-todo-backend)\n[![Maintainability](https://api.codeclimate.com/v1/badges/d506b5b2df687cbcd358/maintainability)](https://codeclimate.com/github/Fs02/go-todo-backend/maintainability)\n[![Test Coverage](https://api.codeclimate.com/v1/badges/d506b5b2df687cbcd358/test_coverage)](https://codeclimate.com/github/Fs02/go-todo-backend/test_coverage)\n\nGo Todo Backend example using a modular project layout for a product microservice. It's suitable as a starting point for a medium to larger project.\n\nThis example uses [Chi](https://github.com/go-chi/chi) as the HTTP router and [REL](https://github.com/go-rel/rel) for database access.\n\nFeatures:\n\n- Modular Project Structure.\n- Full example including tests.\n- Docker deployment.\n- Compatible with [todobackend](https://www.todobackend.com/specs/index.html).\n\n## Installation\n\n### Prerequisite\n\n1. Install [mockery](https://github.com/vektra/mockery#installation) for interface mock generation.\n2. Install [rel cli](https://go-rel.github.io/migration/#running-migration) for database migration.\n\n### Running\n\n1. Prepare `.env`.\n    ```\n    cp .env.sample .env\n    ```\n2. Start PostgreSQL and create the database.\n    ```\n    docker-compose up -d\n    ```\n3. Prepare the database schema.\n    ```\n    rel migrate\n    ```\n4. Build and run.\n    ```\n    make\n    ```\n\n## Project Structure\n\n```\n.\n├── api\n│   ├── handler\n│   │   ├── todos.go\n│   │   └── [other handler].go\n│   └── middleware\n│       └── [other middleware].go\n├── bin\n│   ├── api\n│   └── [other executable]\n├── cmd\n│   ├── api\n│   │   └── main.go\n│   └── [other cmd]\n│       └── main.go\n├── db\n│   ├── schema.sql\n│   └── migrations\n│       └── [migration file]\n├── todos\n│   ├── todo.go\n│   ├── create.go\n│   ├── update.go\n│   ├── delete.go\n│   ├── service.go\n│   └── todostest\n│       ├── todo.go\n│       └── service.go\n├── [other domain]\n│   ├── [entity a].go\n│   ├── [business logic].go\n│   ├── [other domain]test\n│   │   └── service.go\n│   └── service.go\n└── [other client]\n    ├── [entity b].go\n    ├── client.go\n    └── [other client]test\n        └── client.go\n```\n\nThis project is based on a modular project structure, with loosely coupled dependencies between domains. Think of it as making libraries under a single repo, each exporting only the functionality that is used by other services and HTTP handlers. One of the domains present in this example is todos.\n\nLoose coupling between domains is enforced by avoiding a shared entity package; any entity struct should be included inside its own respective domain. This prevents cyclic dependencies between entities. This shouldn't be a problem in most cases, because if you encounter a cyclic dependency, there's a huge chance the entities belong to the same domain.\n\nFor example, consider three structs: user, transaction, and transaction item. A transaction and its items might need a cyclic dependency, and an item doesn't work standalone (an item without a transaction should not exist), thus they should be in the same domain.\nOn the other hand, user and transaction shouldn't require a cyclic dependency: a transaction might have a user field in the struct, but a user shouldn't have a slice-of-transactions field, therefore they should be in separate domains.\n\n### Domain vs Client\n\nThe domain and client folders are very similar; the difference is that a client folder doesn't actually implement any business logic (service), but is instead a client that calls an internal/external API to work with the domain entity.\n"
  },
  {
    "path": "api/README.md",
    "content": "# api\n\nThis package contains the root router for the handlers. The root router should be `mountable` as a sub-router in other applications (modular).\n"
  },
  {
    "path": "api/api.go",
    "content": "package api\n\nimport (\n\t\"github.com/Fs02/go-todo-backend/api/handler\"\n\t\"github.com/Fs02/go-todo-backend/scores\"\n\t\"github.com/Fs02/go-todo-backend/todos\"\n\t\"github.com/go-chi/chi\"\n\tchimid \"github.com/go-chi/chi/middleware\"\n\t\"github.com/go-rel/rel\"\n\t\"github.com/goware/cors\"\n)\n\n// NewMux api.\nfunc NewMux(repository rel.Repository) *chi.Mux {\n\tvar (\n\t\tmux            = chi.NewMux()\n\t\tscores         = scores.New(repository)\n\t\ttodos          = todos.New(repository, scores)\n\t\thealthzHandler = handler.NewHealthz()\n\t\ttodosHandler   = handler.NewTodos(repository, todos)\n\t\tscoreHandler   = handler.NewScore(repository)\n\t)\n\n\thealthzHandler.Add(\"database\", repository)\n\n\tmux.Use(chimid.RequestID)\n\tmux.Use(chimid.RealIP)\n\tmux.Use(chimid.Recoverer)\n\tmux.Use(cors.AllowAll().Handler)\n\n\tmux.Mount(\"/healthz\", healthzHandler)\n\tmux.Mount(\"/todos\", todosHandler)\n\tmux.Mount(\"/score\", scoreHandler)\n\n\treturn mux\n}\n"
  },
  {
    "path": "api/handler/README.md",
    "content": "# handler\n\nThis package contains handlers that handle the HTTP request for each endpoint.\nEvery handler should be simple and light: it should only be responsible for decoding the request, calling the business service, and encoding the response.\n\nAvoid implementing any business logic directly in a handler, including writes to the database.\nEven if the logic seems simple at the beginning, implementing it directly in the handler invites tech debt when additional requirements come and another engineer adds the implementation directly in the handler instead of moving it to a dedicated service.\n"
  },
  {
    "path": "api/handler/handler.go",
    "content": "package handler\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"net/http\"\n\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\tlogger, _ = zap.NewProduction(zap.Fields(zap.String(\"type\", \"handler\")))\n\t// ErrBadRequest error.\n\tErrBadRequest = errors.New(\"Bad Request\")\n)\n\nfunc render(w http.ResponseWriter, body interface{}, status int) {\n\tw.Header().Set(\"Content-Type\", \"application/json\")\n\tw.WriteHeader(status)\n\n\tswitch v := body.(type) {\n\tcase string:\n\t\tjson.NewEncoder(w).Encode(struct {\n\t\t\tMessage string `json:\"message\"`\n\t\t}{\n\t\t\tMessage: v,\n\t\t})\n\tcase error:\n\t\tjson.NewEncoder(w).Encode(struct {\n\t\t\tError string `json:\"error\"`\n\t\t}{\n\t\t\tError: v.Error(),\n\t\t})\n\tcase nil:\n\t\t// do nothing\n\tdefault:\n\t\tjson.NewEncoder(w).Encode(body)\n\t}\n}\n"
  },
  {
    "path": "api/handler/handler_test.go",
    "content": "package handler\n\nimport (\n\t\"errors\"\n\t\"net/http/httptest\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestRender(t *testing.T) {\n\ttests := []struct {\n\t\tname     string\n\t\tdata     interface{}\n\t\tresponse string\n\t}{\n\t\t{\n\t\t\tname:     \"message\",\n\t\t\tdata:     \"lorem\",\n\t\t\tresponse: `{\"message\":\"lorem\"}`,\n\t\t},\n\t\t{\n\t\t\tname:     \"error\",\n\t\t\tdata:     errors.New(\"system error\"),\n\t\t\tresponse: `{\"error\":\"system error\"}`,\n\t\t},\n\t\t{\n\t\t\tname:     \"nil\",\n\t\t\tdata:     nil,\n\t\t\tresponse: ``,\n\t\t},\n\t\t{\n\t\t\tname: \"struct\",\n\t\t\tdata: struct {\n\t\t\t\tID int `json:\"id\"`\n\t\t\t}{ID: 1},\n\t\t\tresponse: `{\"id\":1}`,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\trr = httptest.NewRecorder()\n\t\t\t)\n\n\t\t\trender(rr, test.data, 200)\n\t\t\tif test.response != \"\" {\n\t\t\t\tassert.JSONEq(t, test.response, rr.Body.String())\n\t\t\t} else {\n\t\t\t\tassert.Equal(t, test.response, rr.Body.String())\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "api/handler/healthz.go",
    "content": "package handler\n\nimport (\n\t\"context\"\n\t\"net/http\"\n\t\"sync\"\n\n\t\"github.com/go-chi/chi\"\n\t\"go.uber.org/zap\"\n)\n\n// Pinger interface.\ntype Pinger interface {\n\tPing(ctx context.Context) error\n}\n\ntype ping struct {\n\tService string `json:\"service\"`\n\tStatus  string `json:\"status\"`\n}\n\n// Healthz handler.\ntype Healthz struct {\n\t*chi.Mux\n\tpingers map[string]Pinger\n}\n\n// Show handle GET /\nfunc (h Healthz) Show(w http.ResponseWriter, r *http.Request) {\n\tvar (\n\t\twg     sync.WaitGroup\n\t\tmu     sync.Mutex\n\t\tstatus = 200\n\t\tpings  = make([]ping, len(h.pingers))\n\t)\n\n\twg.Add(len(h.pingers))\n\n\ti := 0\n\tfor service, pinger := range h.pingers {\n\t\tgo func(i int, service string, pinger Pinger) {\n\t\t\tdefer wg.Done()\n\n\t\t\tpings[i].Service = service\n\t\t\tif err := pinger.Ping(r.Context()); err != nil {\n\t\t\t\tlogger.Error(\"ping error\", zap.Error(err))\n\n\t\t\t\t// status may be written by multiple failing pingers\n\t\t\t\t// concurrently; guard it to avoid a data race.\n\t\t\t\tmu.Lock()\n\t\t\t\tstatus = 503\n\t\t\t\tmu.Unlock()\n\t\t\t\tpings[i].Status = err.Error()\n\t\t\t} else {\n\t\t\t\tpings[i].Status = \"UP\"\n\t\t\t}\n\t\t}(i, service, pinger)\n\t\ti++\n\t}\n\twg.Wait()\n\n\trender(w, pings, status)\n}\n\n// Add a pinger.\nfunc (h *Healthz) Add(name string, ping Pinger) {\n\th.pingers[name] = ping\n}\n\n// NewHealthz handler.\nfunc NewHealthz() Healthz {\n\th := Healthz{\n\t\tMux:     chi.NewMux(),\n\t\tpingers: make(map[string]Pinger),\n\t}\n\n\th.Get(\"/\", h.Show)\n\n\treturn h\n}\n"
  },
  {
    "path": "api/handler/healthz_test.go",
    "content": "package handler_test\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"testing\"\n\n\t\"github.com/Fs02/go-todo-backend/api/handler\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\ntype pinger struct {\n\terr error\n}\n\nfunc (p pinger) Ping(ctx context.Context) error {\n\treturn p.err\n}\n\nfunc TestHealthz_Show(t *testing.T) {\n\ttests := []struct {\n\t\tname     string\n\t\tpinger   handler.Pinger\n\t\tstatus   int\n\t\tpath     string\n\t\tresponse string\n\t}{\n\t\t{\n\t\t\tname:     \"all dependencies are healthy\",\n\t\t\tpinger:   pinger{},\n\t\t\tstatus:   http.StatusOK,\n\t\t\tpath:     \"/\",\n\t\t\tresponse: `[{\"service\": \"test\", \"status\": \"UP\"}]`,\n\t\t},\n\t\t{\n\t\t\tname:     \"some dependencies are sick\",\n\t\t\tpinger:   pinger{err: errors.New(\"service is down\")},\n\t\t\tstatus:   http.StatusServiceUnavailable,\n\t\t\tpath:     \"/\",\n\t\t\tresponse: `[{\"service\": \"test\", \"status\": \"service is down\"}]`,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\treq, _  = http.NewRequest(\"GET\", test.path, nil)\n\t\t\t\trr      = httptest.NewRecorder()\n\t\t\t\thandler = handler.NewHealthz()\n\t\t\t)\n\n\t\t\thandler.Add(\"test\", test.pinger)\n\n\t\t\thandler.ServeHTTP(rr, req)\n\n\t\t\tassert.Equal(t, test.status, rr.Code)\n\t\t\tassert.JSONEq(t, test.response, rr.Body.String())\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "api/handler/score.go",
    "content": "package handler\n\nimport (\n\t\"net/http\"\n\n\t\"github.com/Fs02/go-todo-backend/scores\"\n\t\"github.com/go-chi/chi\"\n\t\"github.com/go-rel/rel\"\n)\n\n// Score for score endpoints.\ntype Score struct {\n\t*chi.Mux\n\trepository rel.Repository\n}\n\n// Index handle GET /\nfunc (s Score) Index(w http.ResponseWriter, r *http.Request) {\n\tvar (\n\t\tctx    = r.Context()\n\t\tresult scores.Score\n\t)\n\n\ts.repository.Find(ctx, &result)\n\trender(w, result, 200)\n}\n\n// Points handle Get /points\nfunc (s Score) Points(w http.ResponseWriter, r *http.Request) {\n\tvar (\n\t\tctx    = r.Context()\n\t\tresult []scores.Point\n\t)\n\n\ts.repository.FindAll(ctx, &result)\n\trender(w, result, 200)\n}\n\n// NewScore handler.\nfunc NewScore(repository rel.Repository) Score {\n\th := Score{\n\t\tMux:        chi.NewMux(),\n\t\trepository: repository,\n\t}\n\n\th.Get(\"/\", h.Index)\n\th.Get(\"/points\", h.Points)\n\n\treturn h\n}\n"
  },
  {
    "path": "api/handler/score_test.go",
    "content": "package handler_test\n\nimport (\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"testing\"\n\n\t\"github.com/Fs02/go-todo-backend/api/handler\"\n\t\"github.com/Fs02/go-todo-backend/scores\"\n\t\"github.com/go-rel/reltest\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestScore_Index(t *testing.T) {\n\ttests := []struct {\n\t\tname     string\n\t\tstatus   int\n\t\tpath     string\n\t\tresponse string\n\t\tmockRepo func(repo *reltest.Repository)\n\t}{\n\t\t{\n\t\t\tname:     \"ok\",\n\t\t\tstatus:   http.StatusOK,\n\t\t\tpath:     \"/\",\n\t\t\tresponse: `{\"id\":1, \"total_point\":10, \"created_at\":\"0001-01-01T00:00:00Z\", \"updated_at\":\"0001-01-01T00:00:00Z\"}`,\n\t\t\tmockRepo: func(repo *reltest.Repository) {\n\t\t\t\trepo.ExpectFind().Result(scores.Score{ID: 1, TotalPoint: 10})\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\treq, _     = http.NewRequest(\"GET\", test.path, nil)\n\t\t\t\trr         = httptest.NewRecorder()\n\t\t\t\trepository = reltest.New()\n\t\t\t\thandler    = handler.NewScore(repository)\n\t\t\t)\n\n\t\t\tif test.mockRepo != nil {\n\t\t\t\ttest.mockRepo(repository)\n\t\t\t}\n\n\t\t\thandler.ServeHTTP(rr, req)\n\t\t\tassert.Equal(t, test.status, rr.Code)\n\t\t\tassert.JSONEq(t, test.response, rr.Body.String())\n\n\t\t\trepository.AssertExpectations(t)\n\t\t})\n\t}\n}\n\nfunc TestScore_Points(t *testing.T) {\n\ttests := []struct {\n\t\tname     string\n\t\tstatus   int\n\t\tpath     string\n\t\tresponse string\n\t\tmockRepo func(repo *reltest.Repository)\n\t}{\n\t\t{\n\t\t\tname:     \"ok\",\n\t\t\tstatus:   http.StatusOK,\n\t\t\tpath:     \"/points\",\n\t\t\tresponse: `[{\"id\":1, \"name\": \"todo completed\", \"count\":1, \"score_id\": 0, \"created_at\":\"0001-01-01T00:00:00Z\", \"updated_at\":\"0001-01-01T00:00:00Z\"}]`,\n\t\t\tmockRepo: func(repo *reltest.Repository) {\n\t\t\t\trepo.ExpectFindAll().Result([]scores.Point{{ID: 1, Name: \"todo 
completed\", Count: 1}})\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\treq, _     = http.NewRequest(\"GET\", test.path, nil)\n\t\t\t\trr         = httptest.NewRecorder()\n\t\t\t\trepository = reltest.New()\n\t\t\t\thandler    = handler.NewScore(repository)\n\t\t\t)\n\n\t\t\tif test.mockRepo != nil {\n\t\t\t\ttest.mockRepo(repository)\n\t\t\t}\n\n\t\t\thandler.ServeHTTP(rr, req)\n\t\t\tassert.Equal(t, test.status, rr.Code)\n\t\t\tassert.JSONEq(t, test.response, rr.Body.String())\n\n\t\t\trepository.AssertExpectations(t)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "api/handler/todos.go",
    "content": "package handler\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"strconv\"\n\n\t\"github.com/Fs02/go-todo-backend/todos\"\n\t\"github.com/go-chi/chi\"\n\t\"github.com/go-rel/rel\"\n\t\"github.com/go-rel/rel/where\"\n\t\"go.uber.org/zap\"\n)\n\ntype ctx int\n\nconst (\n\tbodyKey ctx = 0\n\tloadKey ctx = 1\n)\n\n// Todos for todos endpoints.\ntype Todos struct {\n\t*chi.Mux\n\trepository rel.Repository\n\ttodos      todos.Service\n}\n\n// Index handle GET /.\nfunc (t Todos) Index(w http.ResponseWriter, r *http.Request) {\n\tvar (\n\t\tctx    = r.Context()\n\t\tquery  = r.URL.Query()\n\t\tresult []todos.Todo\n\t\tfilter = todos.Filter{\n\t\t\tKeyword: query.Get(\"keyword\"),\n\t\t}\n\t)\n\n\tif str := query.Get(\"completed\"); str != \"\" {\n\t\tcompleted := str == \"true\"\n\t\tfilter.Completed = &completed\n\t}\n\n\tt.todos.Search(ctx, &result, filter)\n\trender(w, result, 200)\n}\n\n// Create handle POST /\nfunc (t Todos) Create(w http.ResponseWriter, r *http.Request) {\n\tvar (\n\t\tctx  = r.Context()\n\t\ttodo todos.Todo\n\t)\n\n\tif err := json.NewDecoder(r.Body).Decode(&todo); err != nil {\n\t\tlogger.Warn(\"decode error\", zap.Error(err))\n\t\trender(w, ErrBadRequest, 400)\n\t\treturn\n\t}\n\n\tif err := t.todos.Create(ctx, &todo); err != nil {\n\t\trender(w, err, 422)\n\t\treturn\n\t}\n\n\tw.Header().Set(\"Location\", fmt.Sprint(r.RequestURI, \"/\", todo.ID))\n\trender(w, todo, 201)\n}\n\n// Show handle GET /{ID}\nfunc (t Todos) Show(w http.ResponseWriter, r *http.Request) {\n\tvar (\n\t\tctx  = r.Context()\n\t\ttodo = ctx.Value(loadKey).(todos.Todo)\n\t)\n\n\trender(w, todo, 200)\n}\n\n// Update handle PATCH /{ID}\nfunc (t Todos) Update(w http.ResponseWriter, r *http.Request) {\n\tvar (\n\t\tctx     = r.Context()\n\t\ttodo    = ctx.Value(loadKey).(todos.Todo)\n\t\tchanges = rel.NewChangeset(&todo)\n\t)\n\n\tif err := json.NewDecoder(r.Body).Decode(&todo); err != nil {\n\t\tlogger.Warn(\"decode 
error\", zap.Error(err))\n\t\trender(w, ErrBadRequest, 400)\n\t\treturn\n\t}\n\n\tif err := t.todos.Update(ctx, &todo, changes); err != nil {\n\t\trender(w, err, 422)\n\t\treturn\n\t}\n\n\trender(w, todo, 200)\n}\n\n// Destroy handle DELETE /{ID}\nfunc (t Todos) Destroy(w http.ResponseWriter, r *http.Request) {\n\tvar (\n\t\tctx  = r.Context()\n\t\ttodo = ctx.Value(loadKey).(todos.Todo)\n\t)\n\n\tt.todos.Delete(ctx, &todo)\n\trender(w, nil, 204)\n}\n\n// Clear handle DELETE /\nfunc (t Todos) Clear(w http.ResponseWriter, r *http.Request) {\n\tvar (\n\t\tctx = r.Context()\n\t)\n\n\tt.todos.Clear(ctx)\n\trender(w, nil, 204)\n}\n\n// Load is middleware that loads todos to context.\nfunc (t Todos) Load(next http.Handler) http.Handler {\n\treturn http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tvar (\n\t\t\tctx   = r.Context()\n\t\t\tid, _ = strconv.Atoi(chi.URLParam(r, \"ID\"))\n\t\t\ttodo  todos.Todo\n\t\t)\n\n\t\tif err := t.repository.Find(ctx, &todo, where.Eq(\"id\", id)); err != nil {\n\t\t\tif errors.Is(err, rel.ErrNotFound) {\n\t\t\t\trender(w, err, 404)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tpanic(err)\n\t\t}\n\n\t\tctx = context.WithValue(ctx, loadKey, todo)\n\t\tnext.ServeHTTP(w, r.WithContext(ctx))\n\t})\n}\n\n// NewTodos handler.\nfunc NewTodos(repository rel.Repository, todos todos.Service) Todos {\n\th := Todos{\n\t\tMux:        chi.NewMux(),\n\t\trepository: repository,\n\t\ttodos:      todos,\n\t}\n\n\th.Get(\"/\", h.Index)\n\th.Post(\"/\", h.Create)\n\th.With(h.Load).Get(\"/{ID}\", h.Show)\n\th.With(h.Load).Patch(\"/{ID}\", h.Update)\n\th.With(h.Load).Delete(\"/{ID}\", h.Destroy)\n\th.Delete(\"/\", h.Clear)\n\n\treturn h\n}\n"
  },
  {
    "path": "api/handler/todos_test.go",
    "content": "package handler_test\n\nimport (\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/Fs02/go-todo-backend/api/handler\"\n\t\"github.com/Fs02/go-todo-backend/todos\"\n\t\"github.com/Fs02/go-todo-backend/todos/todostest\"\n\t\"github.com/go-rel/rel/where\"\n\t\"github.com/go-rel/reltest\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestTodos_Index(t *testing.T) {\n\tvar (\n\t\ttrueb = true\n\t)\n\n\ttests := []struct {\n\t\tname            string\n\t\tstatus          int\n\t\tpath            string\n\t\tresponse        string\n\t\tmockTodosSearch func(todos *todostest.Service)\n\t}{\n\t\t{\n\t\t\tname:     \"ok\",\n\t\t\tstatus:   http.StatusOK,\n\t\t\tpath:     \"/\",\n\t\t\tresponse: `[{\"id\":1, \"title\":\"Sleep\", \"completed\":false, \"order\":0, \"url\":\"todos/1\", \"created_at\":\"0001-01-01T00:00:00Z\", \"updated_at\":\"0001-01-01T00:00:00Z\"}]`,\n\t\t\tmockTodosSearch: todostest.MockSearch(\n\t\t\t\t[]todos.Todo{{ID: 1, Title: \"Sleep\"}},\n\t\t\t\ttodos.Filter{},\n\t\t\t\tnil,\n\t\t\t),\n\t\t},\n\t\t{\n\t\t\tname:     \"with keyword and filter completed\",\n\t\t\tstatus:   http.StatusOK,\n\t\t\tpath:     \"/?keyword=Wake&completed=true\",\n\t\t\tresponse: `[{\"id\":2, \"title\":\"Wake\", \"completed\":true, \"order\":0, \"url\":\"todos/2\", \"created_at\":\"0001-01-01T00:00:00Z\", \"updated_at\":\"0001-01-01T00:00:00Z\"}]`,\n\t\t\tmockTodosSearch: todostest.MockSearch(\n\t\t\t\t[]todos.Todo{{ID: 2, Title: \"Wake\", Completed: true}},\n\t\t\t\ttodos.Filter{Keyword: \"Wake\", Completed: &trueb},\n\t\t\t\tnil,\n\t\t\t),\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\treq, _     = http.NewRequest(\"GET\", test.path, nil)\n\t\t\t\trr         = httptest.NewRecorder()\n\t\t\t\trepository = reltest.New()\n\t\t\t\ttodos      = &todostest.Service{}\n\t\t\t\thandler    = handler.NewTodos(repository, 
todos)\n\t\t\t)\n\n\t\t\ttodostest.Mock(todos, test.mockTodosSearch)\n\n\t\t\thandler.ServeHTTP(rr, req)\n\n\t\t\tassert.Equal(t, test.status, rr.Code)\n\t\t\tassert.JSONEq(t, test.response, rr.Body.String())\n\n\t\t\trepository.AssertExpectations(t)\n\t\t\ttodos.AssertExpectations(t)\n\t\t})\n\t}\n}\n\nfunc TestTodos_Create(t *testing.T) {\n\ttests := []struct {\n\t\tname            string\n\t\tstatus          int\n\t\tpath            string\n\t\tpayload         string\n\t\tresponse        string\n\t\tlocation        string\n\t\tmockTodosCreate func(todos *todostest.Service)\n\t}{\n\t\t{\n\t\t\tname:     \"created\",\n\t\t\tstatus:   http.StatusCreated,\n\t\t\tpath:     \"/\",\n\t\t\tpayload:  `{\"title\": \"Sleep\"}`,\n\t\t\tresponse: `{\"id\":1, \"title\":\"Sleep\", \"completed\":false, \"order\":0, \"url\":\"todos/1\", \"created_at\":\"0001-01-01T00:00:00Z\", \"updated_at\":\"0001-01-01T00:00:00Z\"}`,\n\t\t\tlocation: \"/1\",\n\t\t\tmockTodosCreate: todostest.MockCreate(\n\t\t\t\ttodos.Todo{ID: 1, Title: \"Sleep\"},\n\t\t\t\tnil,\n\t\t\t),\n\t\t},\n\t\t{\n\t\t\tname:     \"validation error\",\n\t\t\tstatus:   http.StatusUnprocessableEntity,\n\t\t\tpath:     \"/\",\n\t\t\tpayload:  `{\"title\": \"\"}`,\n\t\t\tresponse: `{\"error\":\"Title can't be blank\"}`,\n\t\t\tmockTodosCreate: todostest.MockCreate(\n\t\t\t\ttodos.Todo{Title: \"Sleep\"},\n\t\t\t\ttodos.ErrTodoTitleBlank,\n\t\t\t),\n\t\t},\n\t\t{\n\t\t\tname:     \"bad request\",\n\t\t\tstatus:   http.StatusBadRequest,\n\t\t\tpath:     \"/\",\n\t\t\tpayload:  ``,\n\t\t\tresponse: `{\"error\":\"Bad Request\"}`,\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\tbody       = strings.NewReader(test.payload)\n\t\t\t\treq, _     = http.NewRequest(\"POST\", test.path, body)\n\t\t\t\trr         = httptest.NewRecorder()\n\t\t\t\trepository = reltest.New()\n\t\t\t\ttodos      = &todostest.Service{}\n\t\t\t\thandler    = handler.NewTodos(repository, 
todos)\n\t\t\t)\n\n\t\t\ttodostest.Mock(todos, test.mockTodosCreate)\n\n\t\t\thandler.ServeHTTP(rr, req)\n\n\t\t\tassert.Equal(t, test.status, rr.Code)\n\t\t\tassert.Equal(t, test.location, rr.Header().Get(\"Location\"))\n\t\t\tassert.JSONEq(t, test.response, rr.Body.String())\n\n\t\t\trepository.AssertExpectations(t)\n\t\t\ttodos.AssertExpectations(t)\n\t\t})\n\t}\n}\n\nfunc TestTodos_Show(t *testing.T) {\n\ttests := []struct {\n\t\tname     string\n\t\tstatus   int\n\t\tpath     string\n\t\tresponse string\n\t\tisPanic  bool\n\t\tmockRepo func(repo *reltest.Repository)\n\t}{\n\t\t{\n\t\t\tname:     \"ok\",\n\t\t\tstatus:   http.StatusOK,\n\t\t\tpath:     \"/1\",\n\t\t\tresponse: `{\"id\":1, \"title\":\"Sleep\", \"completed\":false, \"order\":0, \"url\":\"todos/1\", \"created_at\":\"0001-01-01T00:00:00Z\", \"updated_at\":\"0001-01-01T00:00:00Z\"}`,\n\t\t\tmockRepo: func(repo *reltest.Repository) {\n\t\t\t\trepo.ExpectFind(where.Eq(\"id\", 1)).Result(todos.Todo{ID: 1, Title: \"Sleep\"})\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:     \"not found\",\n\t\t\tstatus:   http.StatusNotFound,\n\t\t\tpath:     \"/1\",\n\t\t\tresponse: `{\"error\":\"entity not found\"}`,\n\t\t\tmockRepo: func(repo *reltest.Repository) {\n\t\t\t\trepo.ExpectFind(where.Eq(\"id\", 1)).NotFound()\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:    \"panic\",\n\t\t\tpath:    \"/1\",\n\t\t\tisPanic: true,\n\t\t\tmockRepo: func(repo *reltest.Repository) {\n\t\t\t\trepo.ExpectFind(where.Eq(\"id\", 1)).ConnectionClosed()\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\treq, _     = http.NewRequest(\"GET\", test.path, nil)\n\t\t\t\trr         = httptest.NewRecorder()\n\t\t\t\trepository = reltest.New()\n\t\t\t\ttodos      = &todostest.Service{}\n\t\t\t\thandler    = handler.NewTodos(repository, todos)\n\t\t\t)\n\n\t\t\tif test.mockRepo != nil {\n\t\t\t\ttest.mockRepo(repository)\n\t\t\t}\n\n\t\t\tif test.isPanic {\n\t\t\t\tassert.Panics(t, 
func() {\n\t\t\t\t\thandler.ServeHTTP(rr, req)\n\t\t\t\t})\n\t\t\t} else {\n\t\t\t\thandler.ServeHTTP(rr, req)\n\t\t\t\tassert.Equal(t, test.status, rr.Code)\n\t\t\t\tassert.JSONEq(t, test.response, rr.Body.String())\n\t\t\t}\n\n\t\t\trepository.AssertExpectations(t)\n\t\t\ttodos.AssertExpectations(t)\n\t\t})\n\t}\n}\n\nfunc TestTodos_Update(t *testing.T) {\n\ttests := []struct {\n\t\tname            string\n\t\tstatus          int\n\t\tpath            string\n\t\tpayload         string\n\t\tresponse        string\n\t\tmockRepo        func(repo *reltest.Repository)\n\t\tmockTodosUpdate func(todos *todostest.Service)\n\t}{\n\t\t{\n\t\t\tname:     \"ok\",\n\t\t\tstatus:   http.StatusOK,\n\t\t\tpath:     \"/1\",\n\t\t\tpayload:  `{\"title\": \"Wake\"}`,\n\t\t\tresponse: `{\"id\":1, \"title\":\"Wake\", \"completed\":false, \"order\":0, \"url\":\"todos/1\", \"created_at\":\"0001-01-01T00:00:00Z\", \"updated_at\":\"0001-01-01T00:00:00Z\"}`,\n\t\t\tmockRepo: func(repo *reltest.Repository) {\n\t\t\t\trepo.ExpectFind(where.Eq(\"id\", 1)).Result(todos.Todo{ID: 1, Title: \"Sleep\"})\n\t\t\t},\n\t\t\tmockTodosUpdate: todostest.MockUpdate(\n\t\t\t\ttodos.Todo{ID: 1, Title: \"Wake\"},\n\t\t\t\tnil,\n\t\t\t),\n\t\t},\n\t\t{\n\t\t\tname:     \"validation error\",\n\t\t\tstatus:   http.StatusUnprocessableEntity,\n\t\t\tpath:     \"/1\",\n\t\t\tpayload:  `{\"title\": \"\"}`,\n\t\t\tresponse: `{\"error\":\"Title can't be blank\"}`,\n\t\t\tmockRepo: func(repo *reltest.Repository) {\n\t\t\t\trepo.ExpectFind(where.Eq(\"id\", 1)).Result(todos.Todo{ID: 1, Title: \"Sleep\"})\n\t\t\t},\n\t\t\tmockTodosUpdate: todostest.MockUpdate(\n\t\t\t\ttodos.Todo{ID: 1, Title: \"\"},\n\t\t\t\ttodos.ErrTodoTitleBlank,\n\t\t\t),\n\t\t},\n\t\t{\n\t\t\tname:     \"bad request\",\n\t\t\tstatus:   http.StatusBadRequest,\n\t\t\tpath:     \"/1\",\n\t\t\tpayload:  ``,\n\t\t\tresponse: `{\"error\":\"Bad Request\"}`,\n\t\t\tmockRepo: func(repo *reltest.Repository) {\n\t\t\t\trepo.ExpectFind(where.Eq(\"id\", 
1)).Result(todos.Todo{ID: 1, Title: \"Sleep\"})\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\tbody       = strings.NewReader(test.payload)\n\t\t\t\treq, _     = http.NewRequest(\"PATCH\", test.path, body)\n\t\t\t\trr         = httptest.NewRecorder()\n\t\t\t\trepository = reltest.New()\n\t\t\t\ttodos      = &todostest.Service{}\n\t\t\t\thandler    = handler.NewTodos(repository, todos)\n\t\t\t)\n\n\t\t\tif test.mockRepo != nil {\n\t\t\t\ttest.mockRepo(repository)\n\t\t\t}\n\n\t\t\ttodostest.Mock(todos, test.mockTodosUpdate)\n\n\t\t\thandler.ServeHTTP(rr, req)\n\n\t\t\tassert.Equal(t, test.status, rr.Code)\n\t\t\tassert.JSONEq(t, test.response, rr.Body.String())\n\n\t\t\trepository.AssertExpectations(t)\n\t\t\ttodos.AssertExpectations(t)\n\t\t})\n\t}\n}\n\nfunc TestTodos_Destroy(t *testing.T) {\n\ttests := []struct {\n\t\tname            string\n\t\tstatus          int\n\t\tpath            string\n\t\tresponse        string\n\t\tmockRepo        func(repo *reltest.Repository)\n\t\tmockTodosDelete func(todos *todostest.Service)\n\t}{\n\t\t{\n\t\t\tname:     \"ok\",\n\t\t\tstatus:   http.StatusNoContent,\n\t\t\tpath:     \"/1\",\n\t\t\tresponse: \"\",\n\t\t\tmockRepo: func(repo *reltest.Repository) {\n\t\t\t\trepo.ExpectFind(where.Eq(\"id\", 1)).Result(todos.Todo{ID: 1, Title: \"Sleep\"})\n\t\t\t},\n\t\t\tmockTodosDelete: todostest.MockDelete(),\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\treq, _     = http.NewRequest(\"DELETE\", test.path, nil)\n\t\t\t\trr         = httptest.NewRecorder()\n\t\t\t\trepository = reltest.New()\n\t\t\t\ttodos      = &todostest.Service{}\n\t\t\t\thandler    = handler.NewTodos(repository, todos)\n\t\t\t)\n\n\t\t\tif test.mockRepo != nil {\n\t\t\t\ttest.mockRepo(repository)\n\t\t\t}\n\n\t\t\ttodostest.Mock(todos, test.mockTodosDelete)\n\n\t\t\thandler.ServeHTTP(rr, req)\n\n\t\t\tassert.Equal(t, 
test.status, rr.Code)\n\t\t\tassert.Equal(t, test.response, rr.Body.String())\n\n\t\t\trepository.AssertExpectations(t)\n\t\t\ttodos.AssertExpectations(t)\n\t\t})\n\t}\n}\n\nfunc TestTodos_Clear(t *testing.T) {\n\ttests := []struct {\n\t\tname           string\n\t\tstatus         int\n\t\tpath           string\n\t\tresponse       string\n\t\tmockTodosClear func(todos *todostest.Service)\n\t}{\n\t\t{\n\t\t\tname:           \"created\",\n\t\t\tstatus:         http.StatusNoContent,\n\t\t\tpath:           \"/\",\n\t\t\tresponse:       \"\",\n\t\t\tmockTodosClear: todostest.MockClear(),\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tt.Run(test.name, func(t *testing.T) {\n\t\t\tvar (\n\t\t\t\treq, _     = http.NewRequest(\"DELETE\", test.path, nil)\n\t\t\t\trr         = httptest.NewRecorder()\n\t\t\t\trepository = reltest.New()\n\t\t\t\ttodos      = &todostest.Service{}\n\t\t\t\thandler    = handler.NewTodos(repository, todos)\n\t\t\t)\n\n\t\t\ttodostest.Mock(todos, test.mockTodosClear)\n\n\t\t\thandler.ServeHTTP(rr, req)\n\n\t\t\tassert.Equal(t, test.status, rr.Code)\n\t\t\tif test.response != \"\" {\n\t\t\t\tassert.JSONEq(t, test.response, rr.Body.String())\n\t\t\t} else {\n\t\t\t\tassert.Equal(t, \"\", rr.Body.String())\n\t\t\t}\n\n\t\t\trepository.AssertExpectations(t)\n\t\t\ttodos.AssertExpectations(t)\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "api/middleware/README.md",
    "content": "# middleware\n\nThis package contains shared middleware that can be used accross handler. An example middleware that can be implemented here is authentication related middleware.\n"
  },
  {
    "path": "cmd/README.md",
    "content": "# cmd\n\nContains folders for main function for each application, the directory name for each server should match the name of the executable you want to have.\n"
  },
  {
    "path": "cmd/api/main.go",
    "content": "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"os/signal\"\n\t\"strings\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"github.com/Fs02/go-todo-backend/api\"\n\t\"github.com/go-rel/postgres\"\n\t\"github.com/go-rel/rel\"\n\t_ \"github.com/lib/pq\"\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\tlogger, _ = zap.NewProduction(zap.Fields(zap.String(\"type\", \"main\")))\n\tshutdowns []func() error\n)\n\nfunc main() {\n\tvar (\n\t\tctx        = context.Background()\n\t\tport       = os.Getenv(\"PORT\")\n\t\trepository = initRepository()\n\t\tmux        = api.NewMux(repository)\n\t\tserver     = http.Server{\n\t\t\tAddr:    \":\" + port,\n\t\t\tHandler: mux,\n\t\t}\n\t\tshutdown = make(chan struct{})\n\t)\n\n\tgo gracefulShutdown(ctx, &server, shutdown)\n\n\tlogger.Info(\"server starting: http://localhost\" + server.Addr)\n\tif err := server.ListenAndServe(); err != http.ErrServerClosed {\n\t\tlogger.Fatal(\"server error\", zap.Error(err))\n\t}\n\n\t<-shutdown\n}\n\nfunc initRepository() rel.Repository {\n\tvar (\n\t\tlogger, _ = zap.NewProduction(zap.Fields(zap.String(\"type\", \"repository\")))\n\t\tdsn       = fmt.Sprintf(\"postgres://%s:%s@%s:%s/%s?sslmode=disable\",\n\t\t\tos.Getenv(\"POSTGRESQL_USERNAME\"),\n\t\t\tos.Getenv(\"POSTGRESQL_PASSWORD\"),\n\t\t\tos.Getenv(\"POSTGRESQL_HOST\"),\n\t\t\tos.Getenv(\"POSTGRESQL_PORT\"),\n\t\t\tos.Getenv(\"POSTGRESQL_DATABASE\"))\n\t)\n\n\tadapter, err := postgres.Open(dsn)\n\tif err != nil {\n\t\tlogger.Fatal(err.Error(), zap.Error(err))\n\t}\n\t// add to graceful shutdown list.\n\tshutdowns = append(shutdowns, adapter.Close)\n\n\trepository := rel.New(adapter)\n\trepository.Instrumentation(func(ctx context.Context, op string, message string, args ...interface{}) func(err error) {\n\t\t// no op for rel functions.\n\t\tif strings.HasPrefix(op, \"rel-\") {\n\t\t\treturn func(error) {}\n\t\t}\n\n\t\tt := time.Now()\n\n\t\treturn func(err error) {\n\t\t\tduration := time.Since(t)\n\t\t\tif err != nil 
{\n\t\t\t\tlogger.Error(message, zap.Error(err), zap.Duration(\"duration\", duration), zap.String(\"operation\", op))\n\t\t\t} else {\n\t\t\t\tlogger.Info(message, zap.Duration(\"duration\", duration), zap.String(\"operation\", op))\n\t\t\t}\n\t\t}\n\t})\n\n\treturn repository\n}\n\nfunc gracefulShutdown(ctx context.Context, server *http.Server, shutdown chan struct{}) {\n\tvar (\n\t\tsigint = make(chan os.Signal, 1)\n\t)\n\n\tsignal.Notify(sigint, os.Interrupt, syscall.SIGTERM)\n\t<-sigint\n\n\tlogger.Info(\"shutting down server gracefully\")\n\n\t// stop receiving any request.\n\tif err := server.Shutdown(ctx); err != nil {\n\t\tlogger.Fatal(\"shutdown error\", zap.Error(err))\n\t}\n\n\t// close any other modules.\n\tfor i := range shutdowns {\n\t\tshutdowns[i]()\n\t}\n\n\tclose(shutdown)\n}\n"
  },
  {
    "path": "db/README.md",
    "content": "# db\n\nContains file required for building [database migration](https://go-rel.github.io/migration/).\n"
  },
  {
    "path": "db/migrations/20202806225100_create_todos.go",
    "content": "package migrations\n\nimport (\n\t\"github.com/go-rel/rel\"\n)\n\n// MigrateCreateTodos definition\nfunc MigrateCreateTodos(schema *rel.Schema) {\n\tschema.CreateTable(\"todos\", func(t *rel.Table) {\n\t\tt.ID(\"id\")\n\t\tt.DateTime(\"created_at\")\n\t\tt.DateTime(\"updated_at\")\n\t\tt.String(\"title\")\n\t\tt.Bool(\"completed\")\n\t\tt.Int(\"order\")\n\t})\n\n\tschema.CreateIndex(\"todos\", \"order\", []string{\"order\"})\n}\n\n// RollbackCreateTodos definition\nfunc RollbackCreateTodos(schema *rel.Schema) {\n\tschema.DropTable(\"todos\")\n}\n"
  },
  {
    "path": "db/migrations/20203006230600_create_scores.go",
    "content": "package migrations\n\nimport (\n\t\"github.com/go-rel/rel\"\n)\n\n// MigrateCreateScores definition\nfunc MigrateCreateScores(schema *rel.Schema) {\n\tschema.CreateTable(\"scores\", func(t *rel.Table) {\n\t\tt.ID(\"id\")\n\t\tt.DateTime(\"created_at\")\n\t\tt.DateTime(\"updated_at\")\n\t\tt.Int(\"total_point\")\n\t})\n}\n\n// RollbackCreateScores definition\nfunc RollbackCreateScores(schema *rel.Schema) {\n\tschema.DropTable(\"scores\")\n}\n"
  },
  {
    "path": "db/migrations/20203006230700_create_points.go",
    "content": "package migrations\n\nimport (\n\t\"github.com/go-rel/rel\"\n)\n\n// MigrateCreatePoints definition\nfunc MigrateCreatePoints(schema *rel.Schema) {\n\tschema.CreateTable(\"points\", func(t *rel.Table) {\n\t\tt.ID(\"id\")\n\t\tt.DateTime(\"created_at\")\n\t\tt.DateTime(\"updated_at\")\n\t\tt.String(\"name\")\n\t\tt.Int(\"count\")\n\t\tt.Int(\"score_id\", rel.Unsigned(true))\n\n\t\tt.ForeignKey(\"score_id\", \"scores\", \"id\")\n\t})\n}\n\n// RollbackCreatePoints definition\nfunc RollbackCreatePoints(schema *rel.Schema) {\n\tschema.DropTable(\"points\")\n}\n"
  },
  {
    "path": "deploy/README.md",
    "content": "# deploy\n\nThis folder is where you store any deployable artifacts like `Dockerfile`.\n"
  },
  {
    "path": "deploy/api/Dockerfile",
    "content": "# Step 1:\nFROM golang:1.13.5-alpine3.11 AS builder\n\nRUN apk update && apk add --no-cache git make\n\nWORKDIR $GOPATH/src/github.com/Fs02/go-todo-backend\nCOPY . .\n\nRUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64\\\n    go build -mod=vendor -ldflags=\"-w -s\" -o /go/bin/api ./cmd/api\n\n# Step 2:\n# you can also use scratch here, but I prefer to use alpine because it comes with basic command such as curl useful for debugging.\nFROM alpine:3.11\n\nRUN apk update && apk add --no-cache curl ca-certificates\nRUN rm -rf /var/cache/apk/*\n\nCOPY --from=builder --chown=65534:0 /go/bin/api /go/bin/api\n\nUSER 65534\nEXPOSE 3000\n\nENTRYPOINT [\"/go/bin/api\"]\n"
  },
  {
    "path": "docker-compose.yml",
    "content": "version: '3'\n\nservices:\n  postgres:\n    image: postgres:alpine\n    environment:\n      POSTGRES_USER: ${POSTGRESQL_USERNAME}\n      POSTGRES_PASSWORD: ${POSTGRESQL_PASSWORD}\n      POSTGRES_DB: ${POSTGRESQL_DATABASE}\n    ports:\n    - ${POSTGRESQL_PORT}:5432\n    volumes:\n    - ./data/postgresql:/var/lib/postgresql/data/\n"
  },
  {
    "path": "go.mod",
    "content": "module github.com/Fs02/go-todo-backend\n\ngo 1.19\n\nrequire (\n\tgithub.com/go-chi/chi v4.1.2+incompatible\n\tgithub.com/go-rel/postgres v0.8.0\n\tgithub.com/go-rel/rel v0.39.0\n\tgithub.com/go-rel/reltest v0.11.0\n\tgithub.com/goware/cors v1.1.1\n\tgithub.com/lib/pq v1.10.9\n\tgithub.com/stretchr/testify v1.8.3\n\tgo.uber.org/zap v1.24.0\n)\n\nrequire (\n\tgithub.com/davecgh/go-spew v1.1.1 // indirect\n\tgithub.com/go-rel/sql v0.12.0 // indirect\n\tgithub.com/jinzhu/inflection v1.0.0 // indirect\n\tgithub.com/pmezard/go-difflib v1.0.0 // indirect\n\tgithub.com/serenize/snaker v0.0.0-20201027110005-a7ad2135616e // indirect\n\tgithub.com/stretchr/objx v0.5.0 // indirect\n\tgo.uber.org/atomic v1.10.0 // indirect\n\tgo.uber.org/multierr v1.8.0 // indirect\n\tgopkg.in/yaml.v3 v3.0.1 // indirect\n)\n"
  },
  {
    "path": "go.sum",
    "content": "github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=\ngithub.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=\ngithub.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=\ngithub.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=\ngithub.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=\ngithub.com/go-chi/chi v4.1.2+incompatible h1:fGFk2Gmi/YKXk0OmGfBh0WgmN3XB8lVnEyNz34tQRec=\ngithub.com/go-chi/chi v4.1.2+incompatible/go.mod h1:eB3wogJHnLi3x/kFX2A+IbTBlXxmMeXJVKy9tTv1XzQ=\ngithub.com/go-rel/postgres v0.8.0 h1:SBaXmCQbZ7t0JBw9M2UUnNgna+vAVsxPehOfllW63RU=\ngithub.com/go-rel/postgres v0.8.0/go.mod h1:74yHS5xTTMTBUys1XqfsPea3yOdCXtSa7J1BxvJY/so=\ngithub.com/go-rel/primaryreplica v0.4.0 h1:lhU+4dh0/sDQEs602Chiz0SJDXewPU06baWQlx7oB3c=\ngithub.com/go-rel/rel v0.38.0/go.mod h1:Zq18pQqXZbDh2JBCo29jgt+y90nZWkUvI+W9Ls29ans=\ngithub.com/go-rel/rel v0.39.0 h1:2zmK8kazM82iRRfWX7+mm1MxDkGKDj2W+xJLjguli5U=\ngithub.com/go-rel/rel v0.39.0/go.mod h1:yN6+aimHyRIzbuWFe5DaxiZPuVuPfd7GlLpy/YTqTUg=\ngithub.com/go-rel/reltest v0.11.0 h1:X9UsgZlk4zHAQlckQ5iRCE7GG1ZT2VpbLEca/dnEmYQ=\ngithub.com/go-rel/reltest v0.11.0/go.mod h1:NWpBpRcdzy7UU6/KZtJVLOvCKoiNcQEWYEZ9//cCaTw=\ngithub.com/go-rel/sql v0.12.0 h1:1iIm2JgUr854TjN2C2403A9nZKH1RwbMJp09SQC4HO8=\ngithub.com/go-rel/sql v0.12.0/go.mod h1:Usxy37iCTA5aIqoJGekV4ATdCUOK5w2FiR00/VvvLJQ=\ngithub.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=\ngithub.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=\ngithub.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod 
h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=\ngithub.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=\ngithub.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=\ngithub.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=\ngithub.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=\ngithub.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=\ngithub.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=\ngithub.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=\ngithub.com/goware/cors v1.1.1 h1:70q2dL4qV2Gl5ZlPCH8VO2ZsANEcidqbpb6Pru6qKzs=\ngithub.com/goware/cors v1.1.1/go.mod h1:b14AZ0Wsjv3gNG3fr/TTDexvbEJyWljkGLKLVpe4vns=\ngithub.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=\ngithub.com/jackc/chunkreader/v2 v2.0.1 h1:i+RDz65UE+mmpjTfyz0MoVTnzeYxroil2G82ki7MGG8=\ngithub.com/jackc/pgconn v1.12.1 h1:rsDFzIpRk7xT4B8FufgpCCeyjdNpKyghZeSefViE5W8=\ngithub.com/jackc/pgio v1.0.0 h1:g12B9UwVnzGhueNavwioyEEpAmqMe1E/BN9ES+8ovkE=\ngithub.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=\ngithub.com/jackc/pgproto3/v2 v2.3.0 h1:brH0pCGBDkBW07HWlN/oSBXrmo3WB0UvZd1pIuDcL8Y=\ngithub.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b h1:C8S2+VttkHFdOOCXJe+YGfa4vHYwlt4Zx+IVXQ97jYg=\ngithub.com/jackc/pgtype v1.11.0 h1:u4uiGPz/1hryuXzyaBhSk6dnIyyG2683olG2OV+UUgs=\ngithub.com/jackc/pgx/v4 v4.16.1 h1:JzTglcal01DrghUqt+PmzWsZx/Yh7SC/CTQmSBMTd0Y=\ngithub.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=\ngithub.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=\ngithub.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=\ngithub.com/lib/pq v1.10.9/go.mod 
h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=\ngithub.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=\ngithub.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=\ngithub.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=\ngithub.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=\ngithub.com/onsi/ginkgo v1.15.0 h1:1V1NfVQR87RtWAgp1lv9JZJ5Jap+XFGKPi00andXGi4=\ngithub.com/onsi/ginkgo v1.15.0/go.mod h1:hF8qUzuuC8DJGygJH3726JnCZX4MYbRB8yFfISqnKUg=\ngithub.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=\ngithub.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=\ngithub.com/onsi/gomega v1.10.5 h1:7n6FEkpFmfCoo2t+YYqXH0evK+a9ICQz0xcAy9dYcaQ=\ngithub.com/onsi/gomega v1.10.5/go.mod h1:gza4q3jKQJijlu05nKWRCW/GavJumGt8aNRxWg7mt48=\ngithub.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=\ngithub.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/serenize/snaker v0.0.0-20201027110005-a7ad2135616e h1:zWKUYT07mGmVBH+9UgnHXd/ekCK99C8EbDSAt5qsjXE=\ngithub.com/serenize/snaker v0.0.0-20201027110005-a7ad2135616e/go.mod h1:Yow6lPLSAXx2ifx470yD/nUe22Dv5vBvxK/UK9UUTVs=\ngithub.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=\ngithub.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=\ngithub.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=\ngithub.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=\ngithub.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=\ngithub.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=\ngithub.com/stretchr/testify v1.7.1/go.mod 
h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=\ngithub.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=\ngithub.com/stretchr/testify v1.8.3 h1:RP3t2pwF7cMEbC1dqtB6poj3niw/9gnV4Cjg5oW5gtY=\ngithub.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=\ngithub.com/subosito/gotenv v1.4.0/go.mod h1:mZd6rFysKEcUhUHXJk0C/08wAgyDBFuwEYL7vWWGaGo=\ngithub.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=\ngo.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=\ngo.uber.org/atomic v1.10.0 h1:9qC72Qh0+3MqyJbAn8YU5xVq1frD8bn3JtD2oXtafVQ=\ngo.uber.org/atomic v1.10.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=\ngo.uber.org/goleak v1.1.11 h1:wy28qYRKZgnJTxGxvye5/wgWr1EKjmUDGYox5mGlRlI=\ngo.uber.org/multierr v1.8.0 h1:dg6GjLku4EH+249NNmoIciG9N/jURbDG+pFlTkhzIC8=\ngo.uber.org/multierr v1.8.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95ak=\ngo.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60=\ngo.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=\ngolang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=\ngolang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=\ngolang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=\ngolang.org/x/crypto v0.0.0-20210921155107-089bfa567519 h1:7I4JAnoQBe7ZtJcBaYHi5UtiO8tQHbUSXxL+pnGRANg=\ngolang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=\ngolang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=\ngolang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=\ngolang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=\ngolang.org/x/net 
v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=\ngolang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=\ngolang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=\ngolang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0=\ngolang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=\ngolang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=\ngolang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=\ngolang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=\ngolang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=\ngolang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=\ngolang.org/x/text v0.3.7 
h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=\ngolang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=\ngolang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=\ngolang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=\ngolang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngolang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=\ngoogle.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=\ngoogle.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=\ngoogle.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=\ngoogle.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=\ngoogle.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=\ngoogle.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=\ngopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=\ngopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod 
h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=\ngopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=\ngopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=\ngopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\ngopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\ngopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=\ngopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\n"
  },
  {
    "path": "scores/earn.go",
    "content": "package scores\n\nimport (\n\t\"context\"\n\t\"errors\"\n\n\t\"github.com/go-rel/rel\"\n)\n\ntype earn struct {\n\trepository rel.Repository\n}\n\nfunc (e earn) Earn(ctx context.Context, name string, count int) error {\n\tvar (\n\t\tscore Score\n\t)\n\n\treturn e.repository.Transaction(ctx, func(ctx context.Context) error {\n\t\t// for simplicity, assumes only one user, so there's only one score and always retrieve the first one.\n\t\t// this will probably lock the entire table since there's no where clause provided, but it's find since we assume only one user.\n\t\tif err := e.repository.Find(ctx, &score, rel.ForUpdate()); err != nil {\n\t\t\tif !errors.Is(err, rel.ErrNotFound) {\n\t\t\t\t// unexpected error.\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tscore.TotalPoint = count\n\t\t\te.repository.MustInsert(ctx, &score)\n\t\t} else {\n\t\t\tscore.TotalPoint += count\n\t\t\te.repository.Update(ctx, &score)\n\t\t}\n\n\t\t// insert point history.\n\t\te.repository.MustInsert(ctx, &Point{Name: name, Count: count, ScoreID: score.ID})\n\t\treturn nil\n\t})\n}\n"
  },
  {
    "path": "scores/earn_test.go",
    "content": "package scores\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/go-rel/rel\"\n\t\"github.com/go-rel/reltest\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestEarn(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tservice    = New(repository)\n\t\tname       = \"todo completed\"\n\t\tcount      = 1\n\t)\n\n\trepository.ExpectTransaction(func(repository *reltest.Repository) {\n\t\trepository.ExpectFind(rel.ForUpdate()).Result(Score{ID: 1, TotalPoint: 10})\n\t\trepository.ExpectUpdate().For(&Score{ID: 1, TotalPoint: 11})\n\t\trepository.ExpectInsert().For(&Point{Name: name, Count: count, ScoreID: 1})\n\t})\n\n\tassert.Nil(t, service.Earn(ctx, name, count))\n\trepository.AssertExpectations(t)\n}\n\nfunc TestEarn_insertScore(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tservice    = New(repository)\n\t\tname       = \"todo completed\"\n\t\tcount      = 1\n\t)\n\n\trepository.ExpectTransaction(func(repository *reltest.Repository) {\n\t\trepository.ExpectFind(rel.ForUpdate()).NotFound()\n\t\trepository.ExpectInsert().For(&Score{TotalPoint: 1})\n\t\trepository.ExpectInsert().For(&Point{Name: name, Count: count, ScoreID: 1})\n\t})\n\n\tassert.Nil(t, service.Earn(ctx, name, count))\n\trepository.AssertExpectations(t)\n}\n\nfunc TestEarn_findError(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tservice    = New(repository)\n\t\tname       = \"todo completed\"\n\t\tcount      = 1\n\t)\n\n\trepository.ExpectTransaction(func(repository *reltest.Repository) {\n\t\trepository.ExpectFind(rel.ForUpdate()).ConnectionClosed()\n\t})\n\n\tassert.Equal(t, reltest.ErrConnectionClosed, service.Earn(ctx, name, count))\n\n\trepository.AssertExpectations(t)\n}\n"
  },
  {
    "path": "scores/point.go",
    "content": "package scores\n\nimport (\n\t\"time\"\n)\n\n// Point component for score.\ntype Point struct {\n\tID        int       `json:\"id\"`\n\tName      string    `json:\"name\"`\n\tCount     int       `json:\"count\"`\n\tScoreID   int       `json:\"score_id\"`\n\tCreatedAt time.Time `json:\"created_at\"`\n\tUpdatedAt time.Time `json:\"updated_at\"`\n}\n"
  },
  {
    "path": "scores/score.go",
    "content": "package scores\n\nimport (\n\t\"time\"\n)\n\n// Score stores total points.\ntype Score struct {\n\tID         int       `json:\"id\"`\n\tTotalPoint int       `json:\"total_point\"`\n\tCreatedAt  time.Time `json:\"created_at\"`\n\tUpdatedAt  time.Time `json:\"updated_at\"`\n}\n"
  },
  {
    "path": "scores/scorestest/service.go",
    "content": "// Code generated by mockery 2.9.0. DO NOT EDIT.\n\npackage scorestest\n\nimport (\n\tcontext \"context\"\n\n\tmock \"github.com/stretchr/testify/mock\"\n)\n\n// Service is an autogenerated mock type for the Service type\ntype Service struct {\n\tmock.Mock\n}\n\n// Earn provides a mock function with given fields: ctx, name, count\nfunc (_m *Service) Earn(ctx context.Context, name string, count int) error {\n\tret := _m.Called(ctx, name, count)\n\n\tvar r0 error\n\tif rf, ok := ret.Get(0).(func(context.Context, string, int) error); ok {\n\t\tr0 = rf(ctx, name, count)\n\t} else {\n\t\tr0 = ret.Error(0)\n\t}\n\n\treturn r0\n}\n"
  },
  {
    "path": "scores/service.go",
    "content": "package scores\n\nimport (\n\t\"context\"\n\n\t\"github.com/go-rel/rel\"\n)\n\n//go:generate mockery --name=Service --case=underscore --output scorestest --outpkg scorestest\n\n// Service instance for todo's domain.\n// Any operation done to any of object within this domain should use this service.\ntype Service interface {\n\tEarn(ctx context.Context, name string, count int) error\n}\n\n// beside embeding the struct, you can also declare the function directly on this struct.\n// the advantage of embedding the struct is it allows spreading the implementation across multiple files.\ntype service struct {\n\tearn\n}\n\nvar _ Service = (*service)(nil)\n\n// New Scores service.\nfunc New(repository rel.Repository) Service {\n\treturn service{\n\t\tearn: earn{repository: repository},\n\t}\n}\n"
  },
  {
    "path": "todos/README.md",
    "content": "# todos\n\nContains domain related entities and business logic implementations. The business functionality should be exported using `Service` interface that contains necessary functions to work with the entity.\n\nEvery domain/client should have it's own testing package (`todostest`) that can be used to mock the functionality of this package, usualy generated using external tools like `mockery`.\n"
  },
  {
    "path": "todos/clear.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\n\t\"github.com/go-rel/rel\"\n)\n\ntype clear struct {\n\trepository rel.Repository\n}\n\nfunc (c clear) Clear(ctx context.Context) {\n\tc.repository.MustDeleteAny(ctx, rel.From(\"todos\"))\n}\n"
  },
  {
    "path": "todos/clear_test.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/go-rel/rel\"\n\t\"github.com/go-rel/reltest\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestClear(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tservice    = New(repository, nil)\n\t)\n\n\trepository.ExpectDeleteAny(rel.From(\"todos\")).Unsafe()\n\n\tassert.NotPanics(t, func() {\n\t\tservice.Clear(ctx)\n\t})\n\n\trepository.AssertExpectations(t)\n}\n"
  },
  {
    "path": "todos/create.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\n\t\"github.com/Fs02/go-todo-backend/scores\"\n\t\"github.com/go-rel/rel\"\n\t\"go.uber.org/zap\"\n)\n\ntype create struct {\n\trepository rel.Repository\n\tscores     scores.Service\n}\n\nfunc (c create) Create(ctx context.Context, todo *Todo) error {\n\tif err := todo.Validate(); err != nil {\n\t\tlogger.Warn(\"validation error\", zap.Error(err))\n\t\treturn err\n\t}\n\n\t// if completed, then earn a point.\n\tif todo.Completed {\n\t\treturn c.repository.Transaction(ctx, func(ctx context.Context) error {\n\t\t\tc.repository.MustInsert(ctx, todo)\n\t\t\treturn c.scores.Earn(ctx, \"todo completed\", 1)\n\t\t})\n\t}\n\n\tc.repository.MustInsert(ctx, todo)\n\treturn nil\n}\n"
  },
  {
    "path": "todos/create_test.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/Fs02/go-todo-backend/scores/scorestest\"\n\t\"github.com/go-rel/reltest\"\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/mock\"\n)\n\nfunc TestCreate(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tscores     = &scorestest.Service{}\n\t\tservice    = New(repository, scores)\n\t\ttodo       = Todo{Title: \"Sleep\"}\n\t)\n\n\trepository.ExpectInsert().For(&todo)\n\n\tassert.Nil(t, service.Create(ctx, &todo))\n\tassert.NotEmpty(t, todo.ID)\n\n\trepository.AssertExpectations(t)\n\tscores.AssertExpectations(t)\n}\n\nfunc TestCreate_completed(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tscores     = &scorestest.Service{}\n\t\tservice    = New(repository, scores)\n\t\ttodo       = Todo{Title: \"Sleep\", Completed: true}\n\t)\n\n\trepository.ExpectTransaction(func(repository *reltest.Repository) {\n\t\tscores.On(\"Earn\", mock.Anything, \"todo completed\", 1).Return(nil)\n\t\trepository.ExpectInsert().For(&todo)\n\t})\n\n\tassert.Nil(t, service.Create(ctx, &todo))\n\tassert.NotEmpty(t, todo.ID)\n\n\trepository.AssertExpectations(t)\n\tscores.AssertExpectations(t)\n}\n\nfunc TestCreate_validateError(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tscores     = &scorestest.Service{}\n\t\tservice    = New(repository, scores)\n\t\ttodo       = Todo{Title: \"\"}\n\t)\n\n\tassert.Equal(t, ErrTodoTitleBlank, service.Create(ctx, &todo))\n\n\trepository.AssertExpectations(t)\n\tscores.AssertExpectations(t)\n}\n"
  },
  {
    "path": "todos/delete.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\n\t\"github.com/go-rel/rel\"\n)\n\ntype delete struct {\n\trepository rel.Repository\n}\n\nfunc (d delete) Delete(ctx context.Context, todo *Todo) {\n\td.repository.MustDelete(ctx, todo)\n}\n"
  },
  {
    "path": "todos/delete_test.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/go-rel/reltest\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestDelete(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tservice    = New(repository, nil)\n\t\ttodo       = Todo{ID: 1, Title: \"Sleep\"}\n\t)\n\n\trepository.ExpectDelete().ForType(\"todos.Todo\")\n\n\tassert.NotPanics(t, func() {\n\t\tservice.Delete(ctx, &todo)\n\t})\n\n\trepository.AssertExpectations(t)\n}\n"
  },
  {
    "path": "todos/search.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\n\t\"github.com/go-rel/rel\"\n)\n\n// Filter for search.\ntype Filter struct {\n\tKeyword   string\n\tCompleted *bool\n}\n\ntype search struct {\n\trepository rel.Repository\n}\n\nfunc (s search) Search(ctx context.Context, todos *[]Todo, filter Filter) error {\n\tvar (\n\t\tquery = rel.Select().SortAsc(\"order\")\n\t)\n\n\tif filter.Keyword != \"\" {\n\t\tquery = query.Where(rel.Like(\"title\", \"%\"+filter.Keyword+\"%\"))\n\t}\n\n\tif filter.Completed != nil {\n\t\tquery = query.Where(rel.Eq(\"completed\", *filter.Completed))\n\t}\n\n\ts.repository.MustFindAll(ctx, todos, query)\n\treturn nil\n}\n"
  },
  {
    "path": "todos/search_test.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/go-rel/rel\"\n\t\"github.com/go-rel/reltest\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestSearch(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tservice    = New(repository, nil)\n\t\ttodos      []Todo\n\t\tcompleted  = false\n\t\tfilter     = Filter{Keyword: \"Sleep\", Completed: &completed}\n\t\tresult     = []Todo{{ID: 1, Title: \"Sleep\"}}\n\t)\n\n\trepository.ExpectFindAll(\n\t\trel.Select().SortAsc(\"order\").Where(rel.Like(\"title\", \"%Sleep%\").AndEq(\"completed\", false)),\n\t).Result(result)\n\n\tassert.NotPanics(t, func() {\n\t\tservice.Search(ctx, &todos, filter)\n\t\tassert.Equal(t, result, todos)\n\t})\n\n\trepository.AssertExpectations(t)\n}\n"
  },
  {
    "path": "todos/service.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\n\t\"github.com/Fs02/go-todo-backend/scores\"\n\t\"github.com/go-rel/rel\"\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\tlogger, _ = zap.NewProduction(zap.Fields(zap.String(\"type\", \"todos\")))\n)\n\n//go:generate mockery --name=Service --case=underscore --output todostest --outpkg todostest\n\n// Service instance for todo's domain.\n// Any operation done to any of object within this domain should use this service.\ntype Service interface {\n\tSearch(ctx context.Context, todos *[]Todo, filter Filter) error\n\tCreate(ctx context.Context, todo *Todo) error\n\tUpdate(ctx context.Context, todo *Todo, changes rel.Changeset) error\n\tDelete(ctx context.Context, todo *Todo)\n\tClear(ctx context.Context)\n}\n\n// beside embeding the struct, you can also declare the function directly on this struct.\n// the advantage of embedding the struct is it allows spreading the implementation across multiple files.\ntype service struct {\n\tsearch\n\tcreate\n\tupdate\n\tdelete\n\tclear\n}\n\nvar _ Service = (*service)(nil)\n\n// New Todos service.\nfunc New(repository rel.Repository, scores scores.Service) Service {\n\treturn service{\n\t\tsearch: search{repository: repository},\n\t\tcreate: create{repository: repository, scores: scores},\n\t\tupdate: update{repository: repository, scores: scores},\n\t\tdelete: delete{repository: repository},\n\t\tclear:  clear{repository: repository},\n\t}\n}\n"
  },
  {
    "path": "todos/todo.go",
    "content": "package todos\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"time\"\n)\n\nvar (\n\t// TodoURLPrefix to be returned when encoding todo.\n\tTodoURLPrefix = os.Getenv(\"URL\") + \"todos/\"\n\t// ErrTodoTitleBlank validation error.\n\tErrTodoTitleBlank = errors.New(\"Title can't be blank\")\n)\n\n// Todo respresent a record stored in todos table.\ntype Todo struct {\n\tID        uint      `json:\"id\"`\n\tTitle     string    `json:\"title\"`\n\tOrder     int       `json:\"order\"`\n\tCompleted bool      `json:\"completed\"`\n\tCreatedAt time.Time `json:\"created_at\"`\n\tUpdatedAt time.Time `json:\"updated_at\"`\n}\n\n// Validate todo.\nfunc (t Todo) Validate() error {\n\tvar err error\n\tswitch {\n\tcase len(t.Title) == 0:\n\t\terr = ErrTodoTitleBlank\n\t}\n\n\treturn err\n}\n\n// MarshalJSON implement custom marshaller to marshal url.\nfunc (t Todo) MarshalJSON() ([]byte, error) {\n\ttype Alias Todo\n\n\treturn json.Marshal(struct {\n\t\tAlias\n\t\tURL string `json:\"url\"`\n\t}{\n\t\tAlias: Alias(t),\n\t\tURL:   fmt.Sprint(TodoURLPrefix, t.ID),\n\t})\n}\n"
  },
  {
    "path": "todos/todo_test.go",
    "content": "package todos\n\nimport (\n\t\"encoding/json\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc init() {\n\tTodoURLPrefix = \"http://localhost:3000/\"\n}\n\nfunc TestTodo_Validate(t *testing.T) {\n\tvar todo Todo\n\n\tt.Run(\"title is blank\", func(t *testing.T) {\n\t\tassert.Equal(t, ErrTodoTitleBlank, todo.Validate())\n\t})\n\n\tt.Run(\"valid\", func(t *testing.T) {\n\t\ttodo.Title = \"Sleep\"\n\t\tassert.Nil(t, todo.Validate())\n\t})\n}\n\nfunc TestTodo_MarshalJSON(t *testing.T) {\n\tvar (\n\t\ttodo = Todo{\n\t\t\tID:        1,\n\t\t\tTitle:     \"Sleep\",\n\t\t\tCompleted: true,\n\t\t}\n\t\tencoded, err = json.Marshal(todo)\n\t)\n\n\tassert.Nil(t, err)\n\tassert.JSONEq(t, `{\n\t\t\"id\": 1,\n\t\t\"title\": \"Sleep\",\n\t\t\"completed\": true,\n\t\t\"order\": 0,\n\t\t\"url\": \"http://localhost:3000/1\",\n\t\t\"created_at\": \"0001-01-01T00:00:00Z\",\n\t\t\"updated_at\": \"0001-01-01T00:00:00Z\"\n\t}`, string(encoded))\n}\n"
  },
  {
    "path": "todos/todostest/README.md",
    "content": "# todostest\n\nThis package should be named using `[domain]test` format, which is also used by standard package such as `net/http/httptest`.\n\nIf needed, this package may also contains additional function that act as helper for mocking (see todo.go).\n"
  },
  {
    "path": "todos/todostest/service.go",
    "content": "// Code generated by mockery 2.9.0. DO NOT EDIT.\n\npackage todostest\n\nimport (\n\tcontext \"context\"\n\n\trel \"github.com/go-rel/rel\"\n\tmock \"github.com/stretchr/testify/mock\"\n\n\ttodos \"github.com/Fs02/go-todo-backend/todos\"\n)\n\n// Service is an autogenerated mock type for the Service type\ntype Service struct {\n\tmock.Mock\n}\n\n// Clear provides a mock function with given fields: ctx\nfunc (_m *Service) Clear(ctx context.Context) {\n\t_m.Called(ctx)\n}\n\n// Create provides a mock function with given fields: ctx, todo\nfunc (_m *Service) Create(ctx context.Context, todo *todos.Todo) error {\n\tret := _m.Called(ctx, todo)\n\n\tvar r0 error\n\tif rf, ok := ret.Get(0).(func(context.Context, *todos.Todo) error); ok {\n\t\tr0 = rf(ctx, todo)\n\t} else {\n\t\tr0 = ret.Error(0)\n\t}\n\n\treturn r0\n}\n\n// Delete provides a mock function with given fields: ctx, todo\nfunc (_m *Service) Delete(ctx context.Context, todo *todos.Todo) {\n\t_m.Called(ctx, todo)\n}\n\n// Search provides a mock function with given fields: ctx, _a1, filter\nfunc (_m *Service) Search(ctx context.Context, _a1 *[]todos.Todo, filter todos.Filter) error {\n\tret := _m.Called(ctx, _a1, filter)\n\n\tvar r0 error\n\tif rf, ok := ret.Get(0).(func(context.Context, *[]todos.Todo, todos.Filter) error); ok {\n\t\tr0 = rf(ctx, _a1, filter)\n\t} else {\n\t\tr0 = ret.Error(0)\n\t}\n\n\treturn r0\n}\n\n// Update provides a mock function with given fields: ctx, todo, changes\nfunc (_m *Service) Update(ctx context.Context, todo *todos.Todo, changes rel.Changeset) error {\n\tret := _m.Called(ctx, todo, changes)\n\n\tvar r0 error\n\tif rf, ok := ret.Get(0).(func(context.Context, *todos.Todo, rel.Changeset) error); ok {\n\t\tr0 = rf(ctx, todo, changes)\n\t} else {\n\t\tr0 = ret.Error(0)\n\t}\n\n\treturn r0\n}\n"
  },
  {
    "path": "todos/todostest/todos.go",
    "content": "package todostest\n\nimport (\n\tcontext \"context\"\n\n\ttodos \"github.com/Fs02/go-todo-backend/todos\"\n\trel \"github.com/go-rel/rel\"\n\tmock \"github.com/stretchr/testify/mock\"\n)\n\n// MockFunc function.\ntype MockFunc func(service *Service)\n\n// Mock apply mock todo functions.\nfunc Mock(service *Service, funcs ...MockFunc) {\n\tfor i := range funcs {\n\t\tif funcs[i] != nil {\n\t\t\tfuncs[i](service)\n\t\t}\n\t}\n}\n\n// MockSearch util.\nfunc MockSearch(result []todos.Todo, filter todos.Filter, err error) MockFunc {\n\treturn func(service *Service) {\n\t\tservice.On(\"Search\", mock.Anything, mock.Anything, filter).\n\t\t\tReturn(func(ctx context.Context, out *[]todos.Todo, filter todos.Filter) error {\n\t\t\t\t*out = result\n\t\t\t\treturn err\n\t\t\t})\n\t}\n}\n\n// MockCreate util.\nfunc MockCreate(result todos.Todo, err error) MockFunc {\n\treturn func(service *Service) {\n\t\tservice.On(\"Create\", mock.Anything, mock.Anything).\n\t\t\tReturn(func(ctx context.Context, out *todos.Todo) error {\n\t\t\t\t*out = result\n\t\t\t\treturn err\n\t\t\t})\n\t}\n}\n\n// MockUpdate util.\nfunc MockUpdate(result todos.Todo, err error) MockFunc {\n\treturn func(service *Service) {\n\t\tservice.On(\"Update\", mock.Anything, mock.Anything, mock.Anything).\n\t\t\tReturn(func(ctx context.Context, out *todos.Todo, changeset rel.Changeset) error {\n\t\t\t\tif result.ID != out.ID {\n\t\t\t\t\tpanic(\"inconsistent id\")\n\t\t\t\t}\n\n\t\t\t\t*out = result\n\t\t\t\treturn err\n\t\t\t})\n\t}\n}\n\n// MockClear util.\nfunc MockClear() MockFunc {\n\treturn func(service *Service) {\n\t\tservice.On(\"Clear\", mock.Anything)\n\t}\n}\n\n// MockDelete util.\nfunc MockDelete() MockFunc {\n\treturn func(service *Service) {\n\t\tservice.On(\"Delete\", mock.Anything, mock.Anything)\n\t}\n}\n"
  },
  {
    "path": "todos/update.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\n\t\"github.com/Fs02/go-todo-backend/scores\"\n\t\"github.com/go-rel/rel\"\n\t\"go.uber.org/zap\"\n)\n\ntype update struct {\n\trepository rel.Repository\n\tscores     scores.Service\n}\n\nfunc (u update) Update(ctx context.Context, todo *Todo, changes rel.Changeset) error {\n\tif err := todo.Validate(); err != nil {\n\t\tlogger.Warn(\"validation error\", zap.Error(err))\n\t\treturn err\n\t}\n\n\t// update score if completed is changed.\n\tif changes.FieldChanged(\"completed\") {\n\t\treturn u.repository.Transaction(ctx, func(ctx context.Context) error {\n\t\t\tu.repository.MustUpdate(ctx, todo, changes)\n\n\t\t\tif todo.Completed {\n\t\t\t\treturn u.scores.Earn(ctx, \"todo completed\", 1)\n\t\t\t}\n\n\t\t\treturn u.scores.Earn(ctx, \"todo uncompleted\", -2)\n\t\t})\n\t}\n\n\tu.repository.MustUpdate(ctx, todo, changes)\n\treturn nil\n}\n"
  },
  {
    "path": "todos/update_test.go",
    "content": "package todos\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/Fs02/go-todo-backend/scores/scorestest\"\n\t\"github.com/go-rel/rel\"\n\t\"github.com/go-rel/reltest\"\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/mock\"\n)\n\nfunc TestUpdate(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tscores     = &scorestest.Service{}\n\t\tservice    = New(repository, scores)\n\t\ttodo       = Todo{ID: 1, Title: \"Sleep\"}\n\t\tchanges    = rel.NewChangeset(&todo)\n\t)\n\n\ttodo.Title = \"Wake up\"\n\n\trepository.ExpectUpdate(changes).ForType(\"todos.Todo\")\n\n\tassert.Nil(t, service.Update(ctx, &todo, changes))\n\tassert.NotEmpty(t, todo.ID)\n\n\trepository.AssertExpectations(t)\n\tscores.AssertExpectations(t)\n}\n\nfunc TestUpdate_completed(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tscores     = &scorestest.Service{}\n\t\tservice    = New(repository, scores)\n\t\ttodo       = Todo{ID: 1, Title: \"Sleep\"}\n\t\tchanges    = rel.NewChangeset(&todo)\n\t)\n\n\ttodo.Completed = true\n\n\trepository.ExpectTransaction(func(repository *reltest.Repository) {\n\t\tscores.On(\"Earn\", mock.Anything, \"todo completed\", 1).Return(nil)\n\t\trepository.ExpectUpdate(changes).ForType(\"todos.Todo\")\n\t})\n\n\tassert.Nil(t, service.Update(ctx, &todo, changes))\n\tassert.NotEmpty(t, todo.ID)\n\n\trepository.AssertExpectations(t)\n\tscores.AssertExpectations(t)\n}\n\nfunc TestUpdate_uncompleted(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tscores     = &scorestest.Service{}\n\t\tservice    = New(repository, scores)\n\t\ttodo       = Todo{ID: 1, Title: \"Sleep\", Completed: true}\n\t\tchanges    = rel.NewChangeset(&todo)\n\t)\n\n\ttodo.Completed = false\n\n\trepository.ExpectTransaction(func(repository *reltest.Repository) {\n\t\tscores.On(\"Earn\", mock.Anything, \"todo uncompleted\", 
-2).Return(nil)\n\t\trepository.ExpectUpdate(changes).ForType(\"todos.Todo\")\n\t})\n\n\tassert.Nil(t, service.Update(ctx, &todo, changes))\n\tassert.NotEmpty(t, todo.ID)\n\n\trepository.AssertExpectations(t)\n\tscores.AssertExpectations(t)\n}\n\nfunc TestUpdate_validateError(t *testing.T) {\n\tvar (\n\t\tctx        = context.TODO()\n\t\trepository = reltest.New()\n\t\tscores     = &scorestest.Service{}\n\t\tservice    = New(repository, scores)\n\t\ttodo       = Todo{ID: 1, Title: \"Sleep\"}\n\t\tchanges    = rel.NewChangeset(&todo)\n\t)\n\n\ttodo.Title = \"\"\n\n\tassert.Equal(t, ErrTodoTitleBlank, service.Update(ctx, &todo, changes))\n\n\trepository.AssertExpectations(t)\n\tscores.AssertExpectations(t)\n}\n"
  },
  {
    "path": "vendor/github.com/davecgh/go-spew/LICENSE",
    "content": "ISC License\n\nCopyright (c) 2012-2016 Dave Collins <dave@davec.name>\n\nPermission to use, copy, modify, and/or distribute this software for any\npurpose with or without fee is hereby granted, provided that the above\ncopyright notice and this permission notice appear in all copies.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\nWITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\nMERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\nANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\nWHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\nACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\nOR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n"
  },
  {
    "path": "vendor/github.com/davecgh/go-spew/spew/bypass.go",
    "content": "// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>\n//\n// Permission to use, copy, modify, and distribute this software for any\n// purpose with or without fee is hereby granted, provided that the above\n// copyright notice and this permission notice appear in all copies.\n//\n// THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n\n// NOTE: Due to the following build constraints, this file will only be compiled\n// when the code is not running on Google App Engine, compiled by GopherJS, and\n// \"-tags safe\" is not added to the go build command line.  The \"disableunsafe\"\n// tag is deprecated and thus should not be used.\n// Go versions prior to 1.4 are disabled because they use a different layout\n// for interfaces which make the implementation of unsafeReflectValue more complex.\n// +build !js,!appengine,!safe,!disableunsafe,go1.4\n\npackage spew\n\nimport (\n\t\"reflect\"\n\t\"unsafe\"\n)\n\nconst (\n\t// UnsafeDisabled is a build-time constant which specifies whether or\n\t// not access to the unsafe package is available.\n\tUnsafeDisabled = false\n\n\t// ptrSize is the size of a pointer on the current arch.\n\tptrSize = unsafe.Sizeof((*byte)(nil))\n)\n\ntype flag uintptr\n\nvar (\n\t// flagRO indicates whether the value field of a reflect.Value\n\t// is read-only.\n\tflagRO flag\n\n\t// flagAddr indicates whether the address of the reflect.Value's\n\t// value may be taken.\n\tflagAddr flag\n)\n\n// flagKindMask holds the bits that make up the kind\n// part of the flags field. 
In all the supported versions,\n// it is in the lower 5 bits.\nconst flagKindMask = flag(0x1f)\n\n// Different versions of Go have used different\n// bit layouts for the flags type. This table\n// records the known combinations.\nvar okFlags = []struct {\n\tro, addr flag\n}{{\n\t// From Go 1.4 to 1.5\n\tro:   1 << 5,\n\taddr: 1 << 7,\n}, {\n\t// Up to Go tip.\n\tro:   1<<5 | 1<<6,\n\taddr: 1 << 8,\n}}\n\nvar flagValOffset = func() uintptr {\n\tfield, ok := reflect.TypeOf(reflect.Value{}).FieldByName(\"flag\")\n\tif !ok {\n\t\tpanic(\"reflect.Value has no flag field\")\n\t}\n\treturn field.Offset\n}()\n\n// flagField returns a pointer to the flag field of a reflect.Value.\nfunc flagField(v *reflect.Value) *flag {\n\treturn (*flag)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + flagValOffset))\n}\n\n// unsafeReflectValue converts the passed reflect.Value into a one that bypasses\n// the typical safety restrictions preventing access to unaddressable and\n// unexported data.  It works by digging the raw pointer to the underlying\n// value out of the protected value and generating a new unprotected (unsafe)\n// reflect.Value to it.\n//\n// This allows us to check for implementations of the Stringer and error\n// interfaces to be used for pretty printing ordinarily unaddressable and\n// inaccessible values such as unexported struct fields.\nfunc unsafeReflectValue(v reflect.Value) reflect.Value {\n\tif !v.IsValid() || (v.CanInterface() && v.CanAddr()) {\n\t\treturn v\n\t}\n\tflagFieldPtr := flagField(&v)\n\t*flagFieldPtr &^= flagRO\n\t*flagFieldPtr |= flagAddr\n\treturn v\n}\n\n// Sanity checks against future reflect package changes\n// to the type or semantics of the Value.flag field.\nfunc init() {\n\tfield, ok := reflect.TypeOf(reflect.Value{}).FieldByName(\"flag\")\n\tif !ok {\n\t\tpanic(\"reflect.Value has no flag field\")\n\t}\n\tif field.Type.Kind() != reflect.TypeOf(flag(0)).Kind() {\n\t\tpanic(\"reflect.Value flag field has changed kind\")\n\t}\n\ttype t0 
int\n\tvar t struct {\n\t\tA t0\n\t\t// t0 will have flagEmbedRO set.\n\t\tt0\n\t\t// a will have flagStickyRO set\n\t\ta t0\n\t}\n\tvA := reflect.ValueOf(t).FieldByName(\"A\")\n\tva := reflect.ValueOf(t).FieldByName(\"a\")\n\tvt0 := reflect.ValueOf(t).FieldByName(\"t0\")\n\n\t// Infer flagRO from the difference between the flags\n\t// for the (otherwise identical) fields in t.\n\tflagPublic := *flagField(&vA)\n\tflagWithRO := *flagField(&va) | *flagField(&vt0)\n\tflagRO = flagPublic ^ flagWithRO\n\n\t// Infer flagAddr from the difference between a value\n\t// taken from a pointer and not.\n\tvPtrA := reflect.ValueOf(&t).Elem().FieldByName(\"A\")\n\tflagNoPtr := *flagField(&vA)\n\tflagPtr := *flagField(&vPtrA)\n\tflagAddr = flagNoPtr ^ flagPtr\n\n\t// Check that the inferred flags tally with one of the known versions.\n\tfor _, f := range okFlags {\n\t\tif flagRO == f.ro && flagAddr == f.addr {\n\t\t\treturn\n\t\t}\n\t}\n\tpanic(\"reflect.Value read-only flag has changed semantics\")\n}\n"
  },
  {
    "path": "vendor/github.com/davecgh/go-spew/spew/bypasssafe.go",
    "content": "// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>\n//\n// Permission to use, copy, modify, and distribute this software for any\n// purpose with or without fee is hereby granted, provided that the above\n// copyright notice and this permission notice appear in all copies.\n//\n// THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n\n// NOTE: Due to the following build constraints, this file will only be compiled\n// when the code is running on Google App Engine, compiled by GopherJS, or\n// \"-tags safe\" is added to the go build command line.  The \"disableunsafe\"\n// tag is deprecated and thus should not be used.\n// +build js appengine safe disableunsafe !go1.4\n\npackage spew\n\nimport \"reflect\"\n\nconst (\n\t// UnsafeDisabled is a build-time constant which specifies whether or\n\t// not access to the unsafe package is available.\n\tUnsafeDisabled = true\n)\n\n// unsafeReflectValue typically converts the passed reflect.Value into a one\n// that bypasses the typical safety restrictions preventing access to\n// unaddressable and unexported data.  However, doing this relies on access to\n// the unsafe package.  This is a stub version which simply returns the passed\n// reflect.Value when the unsafe package is not available.\nfunc unsafeReflectValue(v reflect.Value) reflect.Value {\n\treturn v\n}\n"
  },
  {
    "path": "vendor/github.com/davecgh/go-spew/spew/common.go",
    "content": "/*\n * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>\n *\n * Permission to use, copy, modify, and distribute this software for any\n * purpose with or without fee is hereby granted, provided that the above\n * copyright notice and this permission notice appear in all copies.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n */\n\npackage spew\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"reflect\"\n\t\"sort\"\n\t\"strconv\"\n)\n\n// Some constants in the form of bytes to avoid string overhead.  This mirrors\n// the technique used in the fmt package.\nvar (\n\tpanicBytes            = []byte(\"(PANIC=\")\n\tplusBytes             = []byte(\"+\")\n\tiBytes                = []byte(\"i\")\n\ttrueBytes             = []byte(\"true\")\n\tfalseBytes            = []byte(\"false\")\n\tinterfaceBytes        = []byte(\"(interface {})\")\n\tcommaNewlineBytes     = []byte(\",\\n\")\n\tnewlineBytes          = []byte(\"\\n\")\n\topenBraceBytes        = []byte(\"{\")\n\topenBraceNewlineBytes = []byte(\"{\\n\")\n\tcloseBraceBytes       = []byte(\"}\")\n\tasteriskBytes         = []byte(\"*\")\n\tcolonBytes            = []byte(\":\")\n\tcolonSpaceBytes       = []byte(\": \")\n\topenParenBytes        = []byte(\"(\")\n\tcloseParenBytes       = []byte(\")\")\n\tspaceBytes            = []byte(\" \")\n\tpointerChainBytes     = []byte(\"->\")\n\tnilAngleBytes         = []byte(\"<nil>\")\n\tmaxNewlineBytes       = []byte(\"<max depth reached>\\n\")\n\tmaxShortBytes         = 
[]byte(\"<max>\")\n\tcircularBytes         = []byte(\"<already shown>\")\n\tcircularShortBytes    = []byte(\"<shown>\")\n\tinvalidAngleBytes     = []byte(\"<invalid>\")\n\topenBracketBytes      = []byte(\"[\")\n\tcloseBracketBytes     = []byte(\"]\")\n\tpercentBytes          = []byte(\"%\")\n\tprecisionBytes        = []byte(\".\")\n\topenAngleBytes        = []byte(\"<\")\n\tcloseAngleBytes       = []byte(\">\")\n\topenMapBytes          = []byte(\"map[\")\n\tcloseMapBytes         = []byte(\"]\")\n\tlenEqualsBytes        = []byte(\"len=\")\n\tcapEqualsBytes        = []byte(\"cap=\")\n)\n\n// hexDigits is used to map a decimal value to a hex digit.\nvar hexDigits = \"0123456789abcdef\"\n\n// catchPanic handles any panics that might occur during the handleMethods\n// calls.\nfunc catchPanic(w io.Writer, v reflect.Value) {\n\tif err := recover(); err != nil {\n\t\tw.Write(panicBytes)\n\t\tfmt.Fprintf(w, \"%v\", err)\n\t\tw.Write(closeParenBytes)\n\t}\n}\n\n// handleMethods attempts to call the Error and String methods on the underlying\n// type the passed reflect.Value represents and outputes the result to Writer w.\n//\n// It handles panics in any called methods by catching and displaying the error\n// as the formatted value.\nfunc handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {\n\t// We need an interface to check if the type implements the error or\n\t// Stringer interface.  However, the reflect package won't give us an\n\t// interface on certain things like unexported struct fields in order\n\t// to enforce visibility rules.  
We use unsafe, when it's available,\n\t// to bypass these restrictions since this package does not mutate the\n\t// values.\n\tif !v.CanInterface() {\n\t\tif UnsafeDisabled {\n\t\t\treturn false\n\t\t}\n\n\t\tv = unsafeReflectValue(v)\n\t}\n\n\t// Choose whether or not to do error and Stringer interface lookups against\n\t// the base type or a pointer to the base type depending on settings.\n\t// Technically calling one of these methods with a pointer receiver can\n\t// mutate the value, however, types which choose to satisify an error or\n\t// Stringer interface with a pointer receiver should not be mutating their\n\t// state inside these interface methods.\n\tif !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {\n\t\tv = unsafeReflectValue(v)\n\t}\n\tif v.CanAddr() {\n\t\tv = v.Addr()\n\t}\n\n\t// Is it an error or Stringer?\n\tswitch iface := v.Interface().(type) {\n\tcase error:\n\t\tdefer catchPanic(w, v)\n\t\tif cs.ContinueOnMethod {\n\t\t\tw.Write(openParenBytes)\n\t\t\tw.Write([]byte(iface.Error()))\n\t\t\tw.Write(closeParenBytes)\n\t\t\tw.Write(spaceBytes)\n\t\t\treturn false\n\t\t}\n\n\t\tw.Write([]byte(iface.Error()))\n\t\treturn true\n\n\tcase fmt.Stringer:\n\t\tdefer catchPanic(w, v)\n\t\tif cs.ContinueOnMethod {\n\t\t\tw.Write(openParenBytes)\n\t\t\tw.Write([]byte(iface.String()))\n\t\t\tw.Write(closeParenBytes)\n\t\t\tw.Write(spaceBytes)\n\t\t\treturn false\n\t\t}\n\t\tw.Write([]byte(iface.String()))\n\t\treturn true\n\t}\n\treturn false\n}\n\n// printBool outputs a boolean value as true or false to Writer w.\nfunc printBool(w io.Writer, val bool) {\n\tif val {\n\t\tw.Write(trueBytes)\n\t} else {\n\t\tw.Write(falseBytes)\n\t}\n}\n\n// printInt outputs a signed integer value to Writer w.\nfunc printInt(w io.Writer, val int64, base int) {\n\tw.Write([]byte(strconv.FormatInt(val, base)))\n}\n\n// printUint outputs an unsigned integer value to Writer w.\nfunc printUint(w io.Writer, val uint64, base int) 
{\n\tw.Write([]byte(strconv.FormatUint(val, base)))\n}\n\n// printFloat outputs a floating point value using the specified precision,\n// which is expected to be 32 or 64bit, to Writer w.\nfunc printFloat(w io.Writer, val float64, precision int) {\n\tw.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision)))\n}\n\n// printComplex outputs a complex value using the specified float precision\n// for the real and imaginary parts to Writer w.\nfunc printComplex(w io.Writer, c complex128, floatPrecision int) {\n\tr := real(c)\n\tw.Write(openParenBytes)\n\tw.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision)))\n\ti := imag(c)\n\tif i >= 0 {\n\t\tw.Write(plusBytes)\n\t}\n\tw.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision)))\n\tw.Write(iBytes)\n\tw.Write(closeParenBytes)\n}\n\n// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'\n// prefix to Writer w.\nfunc printHexPtr(w io.Writer, p uintptr) {\n\t// Null pointer.\n\tnum := uint64(p)\n\tif num == 0 {\n\t\tw.Write(nilAngleBytes)\n\t\treturn\n\t}\n\n\t// Max uint64 is 16 bytes in hex + 2 bytes for '0x' prefix\n\tbuf := make([]byte, 18)\n\n\t// It's simpler to construct the hex string right to left.\n\tbase := uint64(16)\n\ti := len(buf) - 1\n\tfor num >= base {\n\t\tbuf[i] = hexDigits[num%base]\n\t\tnum /= base\n\t\ti--\n\t}\n\tbuf[i] = hexDigits[num]\n\n\t// Add '0x' prefix.\n\ti--\n\tbuf[i] = 'x'\n\ti--\n\tbuf[i] = '0'\n\n\t// Strip unused leading bytes.\n\tbuf = buf[i:]\n\tw.Write(buf)\n}\n\n// valuesSorter implements sort.Interface to allow a slice of reflect.Value\n// elements to be sorted.\ntype valuesSorter struct {\n\tvalues  []reflect.Value\n\tstrings []string // either nil or same len and values\n\tcs      *ConfigState\n}\n\n// newValuesSorter initializes a valuesSorter instance, which holds a set of\n// surrogate keys on which the data should be sorted.  
It uses flags in\n// ConfigState to decide if and how to populate those surrogate keys.\nfunc newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface {\n\tvs := &valuesSorter{values: values, cs: cs}\n\tif canSortSimply(vs.values[0].Kind()) {\n\t\treturn vs\n\t}\n\tif !cs.DisableMethods {\n\t\tvs.strings = make([]string, len(values))\n\t\tfor i := range vs.values {\n\t\t\tb := bytes.Buffer{}\n\t\t\tif !handleMethods(cs, &b, vs.values[i]) {\n\t\t\t\tvs.strings = nil\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tvs.strings[i] = b.String()\n\t\t}\n\t}\n\tif vs.strings == nil && cs.SpewKeys {\n\t\tvs.strings = make([]string, len(values))\n\t\tfor i := range vs.values {\n\t\t\tvs.strings[i] = Sprintf(\"%#v\", vs.values[i].Interface())\n\t\t}\n\t}\n\treturn vs\n}\n\n// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted\n// directly, or whether it should be considered for sorting by surrogate keys\n// (if the ConfigState allows it).\nfunc canSortSimply(kind reflect.Kind) bool {\n\t// This switch parallels valueSortLess, except for the default case.\n\tswitch kind {\n\tcase reflect.Bool:\n\t\treturn true\n\tcase reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:\n\t\treturn true\n\tcase reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:\n\t\treturn true\n\tcase reflect.Float32, reflect.Float64:\n\t\treturn true\n\tcase reflect.String:\n\t\treturn true\n\tcase reflect.Uintptr:\n\t\treturn true\n\tcase reflect.Array:\n\t\treturn true\n\t}\n\treturn false\n}\n\n// Len returns the number of values in the slice.  It is part of the\n// sort.Interface implementation.\nfunc (s *valuesSorter) Len() int {\n\treturn len(s.values)\n}\n\n// Swap swaps the values at the passed indices.  
It is part of the\n// sort.Interface implementation.\nfunc (s *valuesSorter) Swap(i, j int) {\n\ts.values[i], s.values[j] = s.values[j], s.values[i]\n\tif s.strings != nil {\n\t\ts.strings[i], s.strings[j] = s.strings[j], s.strings[i]\n\t}\n}\n\n// valueSortLess returns whether the first value should sort before the second\n// value.  It is used by valueSorter.Less as part of the sort.Interface\n// implementation.\nfunc valueSortLess(a, b reflect.Value) bool {\n\tswitch a.Kind() {\n\tcase reflect.Bool:\n\t\treturn !a.Bool() && b.Bool()\n\tcase reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:\n\t\treturn a.Int() < b.Int()\n\tcase reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:\n\t\treturn a.Uint() < b.Uint()\n\tcase reflect.Float32, reflect.Float64:\n\t\treturn a.Float() < b.Float()\n\tcase reflect.String:\n\t\treturn a.String() < b.String()\n\tcase reflect.Uintptr:\n\t\treturn a.Uint() < b.Uint()\n\tcase reflect.Array:\n\t\t// Compare the contents of both arrays.\n\t\tl := a.Len()\n\t\tfor i := 0; i < l; i++ {\n\t\t\tav := a.Index(i)\n\t\t\tbv := b.Index(i)\n\t\t\tif av.Interface() == bv.Interface() {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\treturn valueSortLess(av, bv)\n\t\t}\n\t}\n\treturn a.String() < b.String()\n}\n\n// Less returns whether the value at index i should sort before the\n// value at index j.  It is part of the sort.Interface implementation.\nfunc (s *valuesSorter) Less(i, j int) bool {\n\tif s.strings == nil {\n\t\treturn valueSortLess(s.values[i], s.values[j])\n\t}\n\treturn s.strings[i] < s.strings[j]\n}\n\n// sortValues is a sort function that handles both native types and any type that\n// can be converted to error or Stringer.  Other inputs are sorted according to\n// their Value.String() value to ensure display stability.\nfunc sortValues(values []reflect.Value, cs *ConfigState) {\n\tif len(values) == 0 {\n\t\treturn\n\t}\n\tsort.Sort(newValuesSorter(values, cs))\n}\n"
  },
  {
    "path": "vendor/github.com/davecgh/go-spew/spew/config.go",
    "content": "/*\n * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>\n *\n * Permission to use, copy, modify, and distribute this software for any\n * purpose with or without fee is hereby granted, provided that the above\n * copyright notice and this permission notice appear in all copies.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n */\n\npackage spew\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n)\n\n// ConfigState houses the configuration options used by spew to format and\n// display values.  There is a global instance, Config, that is used to control\n// all top-level Formatter and Dump functionality.  Each ConfigState instance\n// provides methods equivalent to the top-level functions.\n//\n// The zero value for ConfigState provides no indentation.  You would typically\n// want to set it to a space or a tab.\n//\n// Alternatively, you can use NewDefaultConfig to get a ConfigState instance\n// with default settings.  See the documentation of NewDefaultConfig for default\n// values.\ntype ConfigState struct {\n\t// Indent specifies the string to use for each indentation level.  The\n\t// global config instance that all top-level functions use set this to a\n\t// single space by default.  If you would like more indentation, you might\n\t// set this to a tab with \"\\t\" or perhaps two spaces with \"  \".\n\tIndent string\n\n\t// MaxDepth controls the maximum number of levels to descend into nested\n\t// data structures.  
The default, 0, means there is no limit.\n\t//\n\t// NOTE: Circular data structures are properly detected, so it is not\n\t// necessary to set this value unless you specifically want to limit deeply\n\t// nested data structures.\n\tMaxDepth int\n\n\t// DisableMethods specifies whether or not error and Stringer interfaces are\n\t// invoked for types that implement them.\n\tDisableMethods bool\n\n\t// DisablePointerMethods specifies whether or not to check for and invoke\n\t// error and Stringer interfaces on types which only accept a pointer\n\t// receiver when the current type is not a pointer.\n\t//\n\t// NOTE: This might be an unsafe action since calling one of these methods\n\t// with a pointer receiver could technically mutate the value, however,\n\t// in practice, types which choose to satisify an error or Stringer\n\t// interface with a pointer receiver should not be mutating their state\n\t// inside these interface methods.  As a result, this option relies on\n\t// access to the unsafe package, so it will not have any effect when\n\t// running in environments without access to the unsafe package such as\n\t// Google App Engine or with the \"safe\" build tag specified.\n\tDisablePointerMethods bool\n\n\t// DisablePointerAddresses specifies whether to disable the printing of\n\t// pointer addresses. This is useful when diffing data structures in tests.\n\tDisablePointerAddresses bool\n\n\t// DisableCapacities specifies whether to disable the printing of capacities\n\t// for arrays, slices, maps and channels. This is useful when diffing\n\t// data structures in tests.\n\tDisableCapacities bool\n\n\t// ContinueOnMethod specifies whether or not recursion should continue once\n\t// a custom error or Stringer interface is invoked.  
The default, false,\n\t// means it will print the results of invoking the custom error or Stringer\n\t// interface and return immediately instead of continuing to recurse into\n\t// the internals of the data type.\n\t//\n\t// NOTE: This flag does not have any effect if method invocation is disabled\n\t// via the DisableMethods or DisablePointerMethods options.\n\tContinueOnMethod bool\n\n\t// SortKeys specifies map keys should be sorted before being printed. Use\n\t// this to have a more deterministic, diffable output.  Note that only\n\t// native types (bool, int, uint, floats, uintptr and string) and types\n\t// that support the error or Stringer interfaces (if methods are\n\t// enabled) are supported, with other types sorted according to the\n\t// reflect.Value.String() output which guarantees display stability.\n\tSortKeys bool\n\n\t// SpewKeys specifies that, as a last resort attempt, map keys should\n\t// be spewed to strings and sorted by those strings.  This is only\n\t// considered if SortKeys is true.\n\tSpewKeys bool\n}\n\n// Config is the active configuration of the top-level functions.\n// The configuration can be changed by modifying the contents of spew.Config.\nvar Config = ConfigState{Indent: \" \"}\n\n// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were\n// passed with a Formatter interface returned by c.NewFormatter.  It returns\n// the formatted string as a value that satisfies error.  See NewFormatter\n// for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Errorf(format string, a ...interface{}) (err error) {\n\treturn fmt.Errorf(format, c.convertArgs(a)...)\n}\n\n// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were\n// passed with a Formatter interface returned by c.NewFormatter.  It returns\n// the number of bytes written and any write error encountered.  
See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) {\n\treturn fmt.Fprint(w, c.convertArgs(a)...)\n}\n\n// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were\n// passed with a Formatter interface returned by c.NewFormatter.  It returns\n// the number of bytes written and any write error encountered.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {\n\treturn fmt.Fprintf(w, format, c.convertArgs(a)...)\n}\n\n// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it\n// were passed with a Formatter interface returned by c.NewFormatter.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {\n\treturn fmt.Fprintln(w, c.convertArgs(a)...)\n}\n\n// Print is a wrapper for fmt.Print that treats each argument as if it were\n// passed with a Formatter interface returned by c.NewFormatter.  It returns\n// the number of bytes written and any write error encountered.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Print(c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Print(a ...interface{}) (n int, err error) {\n\treturn fmt.Print(c.convertArgs(a)...)\n}\n\n// Printf is a wrapper for fmt.Printf that treats each argument as if it were\n// passed with a Formatter interface returned by c.NewFormatter.  
It returns\n// the number of bytes written and any write error encountered.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) {\n\treturn fmt.Printf(format, c.convertArgs(a)...)\n}\n\n// Println is a wrapper for fmt.Println that treats each argument as if it were\n// passed with a Formatter interface returned by c.NewFormatter.  It returns\n// the number of bytes written and any write error encountered.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Println(c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Println(a ...interface{}) (n int, err error) {\n\treturn fmt.Println(c.convertArgs(a)...)\n}\n\n// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were\n// passed with a Formatter interface returned by c.NewFormatter.  It returns\n// the resulting string.  See NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Sprint(c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Sprint(a ...interface{}) string {\n\treturn fmt.Sprint(c.convertArgs(a)...)\n}\n\n// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were\n// passed with a Formatter interface returned by c.NewFormatter.  It returns\n// the resulting string.  See NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Sprintf(format string, a ...interface{}) string {\n\treturn fmt.Sprintf(format, c.convertArgs(a)...)\n}\n\n// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it\n// were passed with a Formatter interface returned by c.NewFormatter.  
It\n// returns the resulting string.  See NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b))\nfunc (c *ConfigState) Sprintln(a ...interface{}) string {\n\treturn fmt.Sprintln(c.convertArgs(a)...)\n}\n\n/*\nNewFormatter returns a custom formatter that satisfies the fmt.Formatter\ninterface.  As a result, it integrates cleanly with standard fmt package\nprinting functions.  The formatter is useful for inline printing of smaller data\ntypes similar to the standard %v format specifier.\n\nThe custom formatter only responds to the %v (most compact), %+v (adds pointer\naddresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb\ncombinations.  Any other verbs such as %x and %q will be sent to the\nstandard fmt package for formatting.  In addition, the custom formatter ignores\nthe width and precision arguments (however they will still work on the format\nspecifiers not handled by the custom formatter).\n\nTypically this function shouldn't be called directly.  It is much easier to make\nuse of the custom formatter by calling one of the convenience functions such as\nc.Printf, c.Println, or c.Print.\n*/\nfunc (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter {\n\treturn newFormatter(c, v)\n}\n\n// Fdump formats and displays the passed arguments to io.Writer w.  It formats\n// exactly the same as Dump.\nfunc (c *ConfigState) Fdump(w io.Writer, a ...interface{}) {\n\tfdump(c, w, a...)\n}\n\n/*\nDump displays the passed parameters to standard out with newlines, customizable\nindentation, and additional debug information such as complete types and all\npointer addresses used to indirect to the final value.  
It provides the\nfollowing features over the built-in printing facilities provided by the fmt\npackage:\n\n\t* Pointers are dereferenced and followed\n\t* Circular data structures are detected and handled properly\n\t* Custom Stringer/error interfaces are optionally invoked, including\n\t  on unexported types\n\t* Custom types which only implement the Stringer/error interfaces via\n\t  a pointer receiver are optionally invoked when passing non-pointer\n\t  variables\n\t* Byte arrays and slices are dumped like the hexdump -C command which\n\t  includes offsets, byte values in hex, and ASCII output\n\nThe configuration options are controlled by modifying the public members\nof c.  See ConfigState for options documentation.\n\nSee Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to\nget the formatted result as a string.\n*/\nfunc (c *ConfigState) Dump(a ...interface{}) {\n\tfdump(c, os.Stdout, a...)\n}\n\n// Sdump returns a string with the passed arguments formatted exactly the same\n// as Dump.\nfunc (c *ConfigState) Sdump(a ...interface{}) string {\n\tvar buf bytes.Buffer\n\tfdump(c, &buf, a...)\n\treturn buf.String()\n}\n\n// convertArgs accepts a slice of arguments and returns a slice of the same\n// length with each argument converted to a spew Formatter interface using\n// the ConfigState associated with s.\nfunc (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) {\n\tformatters = make([]interface{}, len(args))\n\tfor index, arg := range args {\n\t\tformatters[index] = newFormatter(c, arg)\n\t}\n\treturn formatters\n}\n\n// NewDefaultConfig returns a ConfigState with the following default settings.\n//\n// \tIndent: \" \"\n// \tMaxDepth: 0\n// \tDisableMethods: false\n// \tDisablePointerMethods: false\n// \tContinueOnMethod: false\n// \tSortKeys: false\nfunc NewDefaultConfig() *ConfigState {\n\treturn &ConfigState{Indent: \" \"}\n}\n"
  },
  {
    "path": "vendor/github.com/davecgh/go-spew/spew/doc.go",
    "content": "/*\n * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>\n *\n * Permission to use, copy, modify, and distribute this software for any\n * purpose with or without fee is hereby granted, provided that the above\n * copyright notice and this permission notice appear in all copies.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n */\n\n/*\nPackage spew implements a deep pretty printer for Go data structures to aid in\ndebugging.\n\nA quick overview of the additional features spew provides over the built-in\nprinting facilities for Go data types is as follows:\n\n\t* Pointers are dereferenced and followed\n\t* Circular data structures are detected and handled properly\n\t* Custom Stringer/error interfaces are optionally invoked, including\n\t  on unexported types\n\t* Custom types which only implement the Stringer/error interfaces via\n\t  a pointer receiver are optionally invoked when passing non-pointer\n\t  variables\n\t* Byte arrays and slices are dumped like the hexdump -C command which\n\t  includes offsets, byte values in hex, and ASCII output (only when using\n\t  Dump style)\n\nThere are two different approaches spew allows for dumping Go data structures:\n\n\t* Dump style which prints with newlines, customizable indentation,\n\t  and additional debug information such as types and all pointer addresses\n\t  used to indirect to the final value\n\t* A custom Formatter interface that integrates cleanly with the standard fmt\n\t  package and replaces %v, %+v, %#v, and %#+v to 
provide inline printing\n\t  similar to the default %v while providing the additional functionality\n\t  outlined above and passing unsupported format verbs such as %x and %q\n\t  along to fmt\n\nQuick Start\n\nThis section demonstrates how to quickly get started with spew.  See the\nsections below for further details on formatting and configuration options.\n\nTo dump a variable with full newlines, indentation, type, and pointer\ninformation use Dump, Fdump, or Sdump:\n\tspew.Dump(myVar1, myVar2, ...)\n\tspew.Fdump(someWriter, myVar1, myVar2, ...)\n\tstr := spew.Sdump(myVar1, myVar2, ...)\n\nAlternatively, if you would prefer to use format strings with a compacted inline\nprinting style, use the convenience wrappers Printf, Fprintf, etc with\n%v (most compact), %+v (adds pointer addresses), %#v (adds types), or\n%#+v (adds types and pointer addresses):\n\tspew.Printf(\"myVar1: %v -- myVar2: %+v\", myVar1, myVar2)\n\tspew.Printf(\"myVar3: %#v -- myVar4: %#+v\", myVar3, myVar4)\n\tspew.Fprintf(someWriter, \"myVar1: %v -- myVar2: %+v\", myVar1, myVar2)\n\tspew.Fprintf(someWriter, \"myVar3: %#v -- myVar4: %#+v\", myVar3, myVar4)\n\nConfiguration Options\n\nConfiguration of spew is handled by fields in the ConfigState type.  For\nconvenience, all of the top-level functions use a global state available\nvia the spew.Config global.\n\nIt is also possible to create a ConfigState instance that provides methods\nequivalent to the top-level functions.  This allows concurrent configuration\noptions.  See the ConfigState documentation for more details.\n\nThe following configuration options are available:\n\t* Indent\n\t\tString to use for each indentation level for Dump functions.\n\t\tIt is a single space by default.  
A popular alternative is \"\\t\".\n\n\t* MaxDepth\n\t\tMaximum number of levels to descend into nested data structures.\n\t\tThere is no limit by default.\n\n\t* DisableMethods\n\t\tDisables invocation of error and Stringer interface methods.\n\t\tMethod invocation is enabled by default.\n\n\t* DisablePointerMethods\n\t\tDisables invocation of error and Stringer interface methods on types\n\t\twhich only accept pointer receivers from non-pointer variables.\n\t\tPointer method invocation is enabled by default.\n\n\t* DisablePointerAddresses\n\t\tDisablePointerAddresses specifies whether to disable the printing of\n\t\tpointer addresses. This is useful when diffing data structures in tests.\n\n\t* DisableCapacities\n\t\tDisableCapacities specifies whether to disable the printing of\n\t\tcapacities for arrays, slices, maps and channels. This is useful when\n\t\tdiffing data structures in tests.\n\n\t* ContinueOnMethod\n\t\tEnables recursion into types after invoking error and Stringer interface\n\t\tmethods. Recursion after method invocation is disabled by default.\n\n\t* SortKeys\n\t\tSpecifies map keys should be sorted before being printed. Use\n\t\tthis to have a more deterministic, diffable output.  Note that\n\t\tonly native types (bool, int, uint, floats, uintptr and string)\n\t\tand types which implement error or Stringer interfaces are\n\t\tsupported with other types sorted according to the\n\t\treflect.Value.String() output which guarantees display\n\t\tstability.  Natural map order is used by default.\n\n\t* SpewKeys\n\t\tSpecifies that, as a last resort attempt, map keys should be\n\t\tspewed to strings and sorted by those strings.  This is only\n\t\tconsidered if SortKeys is true.\n\nDump Usage\n\nSimply call spew.Dump with a list of variables you want to dump:\n\n\tspew.Dump(myVar1, myVar2, ...)\n\nYou may also call spew.Fdump if you would prefer to output to an arbitrary\nio.Writer.  
For example, to dump to standard error:\n\n\tspew.Fdump(os.Stderr, myVar1, myVar2, ...)\n\nA third option is to call spew.Sdump to get the formatted output as a string:\n\n\tstr := spew.Sdump(myVar1, myVar2, ...)\n\nSample Dump Output\n\nSee the Dump example for details on the setup of the types and variables being\nshown here.\n\n\t(main.Foo) {\n\t unexportedField: (*main.Bar)(0xf84002e210)({\n\t  flag: (main.Flag) flagTwo,\n\t  data: (uintptr) <nil>\n\t }),\n\t ExportedField: (map[interface {}]interface {}) (len=1) {\n\t  (string) (len=3) \"one\": (bool) true\n\t }\n\t}\n\nByte (and uint8) arrays and slices are displayed uniquely like the hexdump -C\ncommand as shown.\n\t([]uint8) (len=32 cap=32) {\n\t 00000000  11 12 13 14 15 16 17 18  19 1a 1b 1c 1d 1e 1f 20  |............... |\n\t 00000010  21 22 23 24 25 26 27 28  29 2a 2b 2c 2d 2e 2f 30  |!\"#$%&'()*+,-./0|\n\t 00000020  31 32                                             |12|\n\t}\n\nCustom Formatter\n\nSpew provides a custom formatter that implements the fmt.Formatter interface\nso that it integrates cleanly with standard fmt package printing functions. The\nformatter is useful for inline printing of smaller data types similar to the\nstandard %v format specifier.\n\nThe custom formatter only responds to the %v (most compact), %+v (adds pointer\naddresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb\ncombinations.  Any other verbs such as %x and %q will be sent to the\nstandard fmt package for formatting.  In addition, the custom formatter ignores\nthe width and precision arguments (however they will still work on the format\nspecifiers not handled by the custom formatter).\n\nCustom Formatter Usage\n\nThe simplest way to make use of the spew custom formatter is to call one of the\nconvenience functions such as spew.Printf, spew.Println, or spew.Print.  
The\nfunctions have syntax you are most likely already familiar with:\n\n\tspew.Printf(\"myVar1: %v -- myVar2: %+v\", myVar1, myVar2)\n\tspew.Printf(\"myVar3: %#v -- myVar4: %#+v\", myVar3, myVar4)\n\tspew.Println(myVar, myVar2)\n\tspew.Fprintf(os.Stderr, \"myVar1: %v -- myVar2: %+v\", myVar1, myVar2)\n\tspew.Fprintf(os.Stderr, \"myVar3: %#v -- myVar4: %#+v\", myVar3, myVar4)\n\nSee the Index for the full list of convenience functions.\n\nSample Formatter Output\n\nDouble pointer to a uint8:\n\t  %v: <**>5\n\t %+v: <**>(0xf8400420d0->0xf8400420c8)5\n\t %#v: (**uint8)5\n\t%#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5\n\nPointer to circular struct with a uint8 field and a pointer to itself:\n\t  %v: <*>{1 <*><shown>}\n\t %+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}\n\t %#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}\n\t%#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}\n\nSee the Printf example for details on the setup of variables being shown\nhere.\n\nErrors\n\nSince it is possible for custom Stringer/error interfaces to panic, spew\ndetects them and handles them internally by printing the panic information\ninline with the output.  Since spew is intended to provide deep pretty printing\ncapabilities on structures, it intentionally does not return any errors.\n*/\npackage spew\n"
  },
  {
    "path": "vendor/github.com/davecgh/go-spew/spew/dump.go",
    "content": "/*\n * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>\n *\n * Permission to use, copy, modify, and distribute this software for any\n * purpose with or without fee is hereby granted, provided that the above\n * copyright notice and this permission notice appear in all copies.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n */\n\npackage spew\n\nimport (\n\t\"bytes\"\n\t\"encoding/hex\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"reflect\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n)\n\nvar (\n\t// uint8Type is a reflect.Type representing a uint8.  It is used to\n\t// convert cgo types to uint8 slices for hexdumping.\n\tuint8Type = reflect.TypeOf(uint8(0))\n\n\t// cCharRE is a regular expression that matches a cgo char.\n\t// It is used to detect character arrays to hexdump them.\n\tcCharRE = regexp.MustCompile(`^.*\\._Ctype_char$`)\n\n\t// cUnsignedCharRE is a regular expression that matches a cgo unsigned\n\t// char.  
It is used to detect unsigned character arrays to hexdump\n\t// them.\n\tcUnsignedCharRE = regexp.MustCompile(`^.*\\._Ctype_unsignedchar$`)\n\n\t// cUint8tCharRE is a regular expression that matches a cgo uint8_t.\n\t// It is used to detect uint8_t arrays to hexdump them.\n\tcUint8tCharRE = regexp.MustCompile(`^.*\\._Ctype_uint8_t$`)\n)\n\n// dumpState contains information about the state of a dump operation.\ntype dumpState struct {\n\tw                io.Writer\n\tdepth            int\n\tpointers         map[uintptr]int\n\tignoreNextType   bool\n\tignoreNextIndent bool\n\tcs               *ConfigState\n}\n\n// indent performs indentation according to the depth level and cs.Indent\n// option.\nfunc (d *dumpState) indent() {\n\tif d.ignoreNextIndent {\n\t\td.ignoreNextIndent = false\n\t\treturn\n\t}\n\td.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth))\n}\n\n// unpackValue returns values inside of non-nil interfaces when possible.\n// This is useful for data types like structs, arrays, slices, and maps which\n// can contain varying types packed inside an interface.\nfunc (d *dumpState) unpackValue(v reflect.Value) reflect.Value {\n\tif v.Kind() == reflect.Interface && !v.IsNil() {\n\t\tv = v.Elem()\n\t}\n\treturn v\n}\n\n// dumpPtr handles formatting of pointers by indirecting them as necessary.\nfunc (d *dumpState) dumpPtr(v reflect.Value) {\n\t// Remove pointers at or below the current depth from map used to detect\n\t// circular refs.\n\tfor k, depth := range d.pointers {\n\t\tif depth >= d.depth {\n\t\t\tdelete(d.pointers, k)\n\t\t}\n\t}\n\n\t// Keep list of all dereferenced pointers to show later.\n\tpointerChain := make([]uintptr, 0)\n\n\t// Figure out how many levels of indirection there are by dereferencing\n\t// pointers and unpacking interfaces down the chain while detecting circular\n\t// references.\n\tnilFound := false\n\tcycleFound := false\n\tindirects := 0\n\tve := v\n\tfor ve.Kind() == reflect.Ptr {\n\t\tif ve.IsNil() {\n\t\t\tnilFound = 
true\n\t\t\tbreak\n\t\t}\n\t\tindirects++\n\t\taddr := ve.Pointer()\n\t\tpointerChain = append(pointerChain, addr)\n\t\tif pd, ok := d.pointers[addr]; ok && pd < d.depth {\n\t\t\tcycleFound = true\n\t\t\tindirects--\n\t\t\tbreak\n\t\t}\n\t\td.pointers[addr] = d.depth\n\n\t\tve = ve.Elem()\n\t\tif ve.Kind() == reflect.Interface {\n\t\t\tif ve.IsNil() {\n\t\t\t\tnilFound = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tve = ve.Elem()\n\t\t}\n\t}\n\n\t// Display type information.\n\td.w.Write(openParenBytes)\n\td.w.Write(bytes.Repeat(asteriskBytes, indirects))\n\td.w.Write([]byte(ve.Type().String()))\n\td.w.Write(closeParenBytes)\n\n\t// Display pointer information.\n\tif !d.cs.DisablePointerAddresses && len(pointerChain) > 0 {\n\t\td.w.Write(openParenBytes)\n\t\tfor i, addr := range pointerChain {\n\t\t\tif i > 0 {\n\t\t\t\td.w.Write(pointerChainBytes)\n\t\t\t}\n\t\t\tprintHexPtr(d.w, addr)\n\t\t}\n\t\td.w.Write(closeParenBytes)\n\t}\n\n\t// Display dereferenced value.\n\td.w.Write(openParenBytes)\n\tswitch {\n\tcase nilFound:\n\t\td.w.Write(nilAngleBytes)\n\n\tcase cycleFound:\n\t\td.w.Write(circularBytes)\n\n\tdefault:\n\t\td.ignoreNextType = true\n\t\td.dump(ve)\n\t}\n\td.w.Write(closeParenBytes)\n}\n\n// dumpSlice handles formatting of arrays and slices.  Byte (uint8 under\n// reflection) arrays and slices are dumped in hexdump -C fashion.\nfunc (d *dumpState) dumpSlice(v reflect.Value) {\n\t// Determine whether this type should be hex dumped or not.  
Also,\n\t// for types which should be hexdumped, try to use the underlying data\n\t// first, then fall back to trying to convert them to a uint8 slice.\n\tvar buf []uint8\n\tdoConvert := false\n\tdoHexDump := false\n\tnumEntries := v.Len()\n\tif numEntries > 0 {\n\t\tvt := v.Index(0).Type()\n\t\tvts := vt.String()\n\t\tswitch {\n\t\t// C types that need to be converted.\n\t\tcase cCharRE.MatchString(vts):\n\t\t\tfallthrough\n\t\tcase cUnsignedCharRE.MatchString(vts):\n\t\t\tfallthrough\n\t\tcase cUint8tCharRE.MatchString(vts):\n\t\t\tdoConvert = true\n\n\t\t// Try to use existing uint8 slices and fall back to converting\n\t\t// and copying if that fails.\n\t\tcase vt.Kind() == reflect.Uint8:\n\t\t\t// We need an addressable interface to convert the type\n\t\t\t// to a byte slice.  However, the reflect package won't\n\t\t\t// give us an interface on certain things like\n\t\t\t// unexported struct fields in order to enforce\n\t\t\t// visibility rules.  We use unsafe, when available, to\n\t\t\t// bypass these restrictions since this package does not\n\t\t\t// mutate the values.\n\t\t\tvs := v\n\t\t\tif !vs.CanInterface() || !vs.CanAddr() {\n\t\t\t\tvs = unsafeReflectValue(vs)\n\t\t\t}\n\t\t\tif !UnsafeDisabled {\n\t\t\t\tvs = vs.Slice(0, numEntries)\n\n\t\t\t\t// Use the existing uint8 slice if it can be\n\t\t\t\t// type asserted.\n\t\t\t\tiface := vs.Interface()\n\t\t\t\tif slice, ok := iface.([]uint8); ok {\n\t\t\t\t\tbuf = slice\n\t\t\t\t\tdoHexDump = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// The underlying data needs to be converted if it can't\n\t\t\t// be type asserted to a uint8 slice.\n\t\t\tdoConvert = true\n\t\t}\n\n\t\t// Copy and convert the underlying type if needed.\n\t\tif doConvert && vt.ConvertibleTo(uint8Type) {\n\t\t\t// Convert and copy each element into a uint8 byte\n\t\t\t// slice.\n\t\t\tbuf = make([]uint8, numEntries)\n\t\t\tfor i := 0; i < numEntries; i++ {\n\t\t\t\tvv := v.Index(i)\n\t\t\t\tbuf[i] = 
uint8(vv.Convert(uint8Type).Uint())\n\t\t\t}\n\t\t\tdoHexDump = true\n\t\t}\n\t}\n\n\t// Hexdump the entire slice as needed.\n\tif doHexDump {\n\t\tindent := strings.Repeat(d.cs.Indent, d.depth)\n\t\tstr := indent + hex.Dump(buf)\n\t\tstr = strings.Replace(str, \"\\n\", \"\\n\"+indent, -1)\n\t\tstr = strings.TrimRight(str, d.cs.Indent)\n\t\td.w.Write([]byte(str))\n\t\treturn\n\t}\n\n\t// Recursively call dump for each item.\n\tfor i := 0; i < numEntries; i++ {\n\t\td.dump(d.unpackValue(v.Index(i)))\n\t\tif i < (numEntries - 1) {\n\t\t\td.w.Write(commaNewlineBytes)\n\t\t} else {\n\t\t\td.w.Write(newlineBytes)\n\t\t}\n\t}\n}\n\n// dump is the main workhorse for dumping a value.  It uses the passed reflect\n// value to figure out what kind of object we are dealing with and formats it\n// appropriately.  It is a recursive function, however circular data structures\n// are detected and handled properly.\nfunc (d *dumpState) dump(v reflect.Value) {\n\t// Handle invalid reflect values immediately.\n\tkind := v.Kind()\n\tif kind == reflect.Invalid {\n\t\td.w.Write(invalidAngleBytes)\n\t\treturn\n\t}\n\n\t// Handle pointers specially.\n\tif kind == reflect.Ptr {\n\t\td.indent()\n\t\td.dumpPtr(v)\n\t\treturn\n\t}\n\n\t// Print type information unless already handled elsewhere.\n\tif !d.ignoreNextType {\n\t\td.indent()\n\t\td.w.Write(openParenBytes)\n\t\td.w.Write([]byte(v.Type().String()))\n\t\td.w.Write(closeParenBytes)\n\t\td.w.Write(spaceBytes)\n\t}\n\td.ignoreNextType = false\n\n\t// Display length and capacity if the built-in len and cap functions\n\t// work with the value's kind and the len/cap itself is non-zero.\n\tvalueLen, valueCap := 0, 0\n\tswitch v.Kind() {\n\tcase reflect.Array, reflect.Slice, reflect.Chan:\n\t\tvalueLen, valueCap = v.Len(), v.Cap()\n\tcase reflect.Map, reflect.String:\n\t\tvalueLen = v.Len()\n\t}\n\tif valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 {\n\t\td.w.Write(openParenBytes)\n\t\tif valueLen != 0 
{\n\t\t\td.w.Write(lenEqualsBytes)\n\t\t\tprintInt(d.w, int64(valueLen), 10)\n\t\t}\n\t\tif !d.cs.DisableCapacities && valueCap != 0 {\n\t\t\tif valueLen != 0 {\n\t\t\t\td.w.Write(spaceBytes)\n\t\t\t}\n\t\t\td.w.Write(capEqualsBytes)\n\t\t\tprintInt(d.w, int64(valueCap), 10)\n\t\t}\n\t\td.w.Write(closeParenBytes)\n\t\td.w.Write(spaceBytes)\n\t}\n\n\t// Call Stringer/error interfaces if they exist and the handle methods flag\n\t// is enabled\n\tif !d.cs.DisableMethods {\n\t\tif (kind != reflect.Invalid) && (kind != reflect.Interface) {\n\t\t\tif handled := handleMethods(d.cs, d.w, v); handled {\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}\n\n\tswitch kind {\n\tcase reflect.Invalid:\n\t\t// Do nothing.  We should never get here since invalid has already\n\t\t// been handled above.\n\n\tcase reflect.Bool:\n\t\tprintBool(d.w, v.Bool())\n\n\tcase reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:\n\t\tprintInt(d.w, v.Int(), 10)\n\n\tcase reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:\n\t\tprintUint(d.w, v.Uint(), 10)\n\n\tcase reflect.Float32:\n\t\tprintFloat(d.w, v.Float(), 32)\n\n\tcase reflect.Float64:\n\t\tprintFloat(d.w, v.Float(), 64)\n\n\tcase reflect.Complex64:\n\t\tprintComplex(d.w, v.Complex(), 32)\n\n\tcase reflect.Complex128:\n\t\tprintComplex(d.w, v.Complex(), 64)\n\n\tcase reflect.Slice:\n\t\tif v.IsNil() {\n\t\t\td.w.Write(nilAngleBytes)\n\t\t\tbreak\n\t\t}\n\t\tfallthrough\n\n\tcase reflect.Array:\n\t\td.w.Write(openBraceNewlineBytes)\n\t\td.depth++\n\t\tif (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {\n\t\t\td.indent()\n\t\t\td.w.Write(maxNewlineBytes)\n\t\t} else {\n\t\t\td.dumpSlice(v)\n\t\t}\n\t\td.depth--\n\t\td.indent()\n\t\td.w.Write(closeBraceBytes)\n\n\tcase reflect.String:\n\t\td.w.Write([]byte(strconv.Quote(v.String())))\n\n\tcase reflect.Interface:\n\t\t// The only time we should get here is for nil interfaces due to\n\t\t// unpackValue calls.\n\t\tif v.IsNil() 
{\n\t\t\td.w.Write(nilAngleBytes)\n\t\t}\n\n\tcase reflect.Ptr:\n\t\t// Do nothing.  We should never get here since pointers have already\n\t\t// been handled above.\n\n\tcase reflect.Map:\n\t\t// nil maps should be indicated as different than empty maps\n\t\tif v.IsNil() {\n\t\t\td.w.Write(nilAngleBytes)\n\t\t\tbreak\n\t\t}\n\n\t\td.w.Write(openBraceNewlineBytes)\n\t\td.depth++\n\t\tif (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {\n\t\t\td.indent()\n\t\t\td.w.Write(maxNewlineBytes)\n\t\t} else {\n\t\t\tnumEntries := v.Len()\n\t\t\tkeys := v.MapKeys()\n\t\t\tif d.cs.SortKeys {\n\t\t\t\tsortValues(keys, d.cs)\n\t\t\t}\n\t\t\tfor i, key := range keys {\n\t\t\t\td.dump(d.unpackValue(key))\n\t\t\t\td.w.Write(colonSpaceBytes)\n\t\t\t\td.ignoreNextIndent = true\n\t\t\t\td.dump(d.unpackValue(v.MapIndex(key)))\n\t\t\t\tif i < (numEntries - 1) {\n\t\t\t\t\td.w.Write(commaNewlineBytes)\n\t\t\t\t} else {\n\t\t\t\t\td.w.Write(newlineBytes)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\td.depth--\n\t\td.indent()\n\t\td.w.Write(closeBraceBytes)\n\n\tcase reflect.Struct:\n\t\td.w.Write(openBraceNewlineBytes)\n\t\td.depth++\n\t\tif (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {\n\t\t\td.indent()\n\t\t\td.w.Write(maxNewlineBytes)\n\t\t} else {\n\t\t\tvt := v.Type()\n\t\t\tnumFields := v.NumField()\n\t\t\tfor i := 0; i < numFields; i++ {\n\t\t\t\td.indent()\n\t\t\t\tvtf := vt.Field(i)\n\t\t\t\td.w.Write([]byte(vtf.Name))\n\t\t\t\td.w.Write(colonSpaceBytes)\n\t\t\t\td.ignoreNextIndent = true\n\t\t\t\td.dump(d.unpackValue(v.Field(i)))\n\t\t\t\tif i < (numFields - 1) {\n\t\t\t\t\td.w.Write(commaNewlineBytes)\n\t\t\t\t} else {\n\t\t\t\t\td.w.Write(newlineBytes)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\td.depth--\n\t\td.indent()\n\t\td.w.Write(closeBraceBytes)\n\n\tcase reflect.Uintptr:\n\t\tprintHexPtr(d.w, uintptr(v.Uint()))\n\n\tcase reflect.UnsafePointer, reflect.Chan, reflect.Func:\n\t\tprintHexPtr(d.w, v.Pointer())\n\n\t// There were not any other types at the time this code was written, 
but\n\t// fall back to letting the default fmt package handle it in case any new\n\t// types are added.\n\tdefault:\n\t\tif v.CanInterface() {\n\t\t\tfmt.Fprintf(d.w, \"%v\", v.Interface())\n\t\t} else {\n\t\t\tfmt.Fprintf(d.w, \"%v\", v.String())\n\t\t}\n\t}\n}\n\n// fdump is a helper function to consolidate the logic from the various public\n// methods which take varying writers and config states.\nfunc fdump(cs *ConfigState, w io.Writer, a ...interface{}) {\n\tfor _, arg := range a {\n\t\tif arg == nil {\n\t\t\tw.Write(interfaceBytes)\n\t\t\tw.Write(spaceBytes)\n\t\t\tw.Write(nilAngleBytes)\n\t\t\tw.Write(newlineBytes)\n\t\t\tcontinue\n\t\t}\n\n\t\td := dumpState{w: w, cs: cs}\n\t\td.pointers = make(map[uintptr]int)\n\t\td.dump(reflect.ValueOf(arg))\n\t\td.w.Write(newlineBytes)\n\t}\n}\n\n// Fdump formats and displays the passed arguments to io.Writer w.  It formats\n// exactly the same as Dump.\nfunc Fdump(w io.Writer, a ...interface{}) {\n\tfdump(&Config, w, a...)\n}\n\n// Sdump returns a string with the passed arguments formatted exactly the same\n// as Dump.\nfunc Sdump(a ...interface{}) string {\n\tvar buf bytes.Buffer\n\tfdump(&Config, &buf, a...)\n\treturn buf.String()\n}\n\n/*\nDump displays the passed parameters to standard out with newlines, customizable\nindentation, and additional debug information such as complete types and all\npointer addresses used to indirect to the final value.  
It provides the\nfollowing features over the built-in printing facilities provided by the fmt\npackage:\n\n\t* Pointers are dereferenced and followed\n\t* Circular data structures are detected and handled properly\n\t* Custom Stringer/error interfaces are optionally invoked, including\n\t  on unexported types\n\t* Custom types which only implement the Stringer/error interfaces via\n\t  a pointer receiver are optionally invoked when passing non-pointer\n\t  variables\n\t* Byte arrays and slices are dumped like the hexdump -C command which\n\t  includes offsets, byte values in hex, and ASCII output\n\nThe configuration options are controlled by an exported package global,\nspew.Config.  See ConfigState for options documentation.\n\nSee Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to\nget the formatted result as a string.\n*/\nfunc Dump(a ...interface{}) {\n\tfdump(&Config, os.Stdout, a...)\n}\n"
  },
  {
    "path": "vendor/github.com/davecgh/go-spew/spew/format.go",
    "content": "/*\n * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>\n *\n * Permission to use, copy, modify, and distribute this software for any\n * purpose with or without fee is hereby granted, provided that the above\n * copyright notice and this permission notice appear in all copies.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n */\n\npackage spew\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"strings\"\n)\n\n// supportedFlags is a list of all the character flags supported by fmt package.\nconst supportedFlags = \"0-+# \"\n\n// formatState implements the fmt.Formatter interface and contains information\n// about the state of a formatting operation.  The NewFormatter function can\n// be used to get a new Formatter which can be used directly as arguments\n// in standard fmt package printing calls.\ntype formatState struct {\n\tvalue          interface{}\n\tfs             fmt.State\n\tdepth          int\n\tpointers       map[uintptr]int\n\tignoreNextType bool\n\tcs             *ConfigState\n}\n\n// buildDefaultFormat recreates the original format string without precision\n// and width information to pass in to fmt.Sprintf in the case of an\n// unrecognized type.  
Unless new types are added to the language, this\n// function won't ever be called.\nfunc (f *formatState) buildDefaultFormat() (format string) {\n\tbuf := bytes.NewBuffer(percentBytes)\n\n\tfor _, flag := range supportedFlags {\n\t\tif f.fs.Flag(int(flag)) {\n\t\t\tbuf.WriteRune(flag)\n\t\t}\n\t}\n\n\tbuf.WriteRune('v')\n\n\tformat = buf.String()\n\treturn format\n}\n\n// constructOrigFormat recreates the original format string including precision\n// and width information to pass along to the standard fmt package.  This allows\n// automatic deferral of all format strings this package doesn't support.\nfunc (f *formatState) constructOrigFormat(verb rune) (format string) {\n\tbuf := bytes.NewBuffer(percentBytes)\n\n\tfor _, flag := range supportedFlags {\n\t\tif f.fs.Flag(int(flag)) {\n\t\t\tbuf.WriteRune(flag)\n\t\t}\n\t}\n\n\tif width, ok := f.fs.Width(); ok {\n\t\tbuf.WriteString(strconv.Itoa(width))\n\t}\n\n\tif precision, ok := f.fs.Precision(); ok {\n\t\tbuf.Write(precisionBytes)\n\t\tbuf.WriteString(strconv.Itoa(precision))\n\t}\n\n\tbuf.WriteRune(verb)\n\n\tformat = buf.String()\n\treturn format\n}\n\n// unpackValue returns values inside of non-nil interfaces when possible and\n// ensures that types for values which have been unpacked from an interface\n// are displayed when the show types flag is also set.\n// This is useful for data types like structs, arrays, slices, and maps which\n// can contain varying types packed inside an interface.\nfunc (f *formatState) unpackValue(v reflect.Value) reflect.Value {\n\tif v.Kind() == reflect.Interface {\n\t\tf.ignoreNextType = false\n\t\tif !v.IsNil() {\n\t\t\tv = v.Elem()\n\t\t}\n\t}\n\treturn v\n}\n\n// formatPtr handles formatting of pointers by indirecting them as necessary.\nfunc (f *formatState) formatPtr(v reflect.Value) {\n\t// Display nil if top level pointer is nil.\n\tshowTypes := f.fs.Flag('#')\n\tif v.IsNil() && (!showTypes || f.ignoreNextType) {\n\t\tf.fs.Write(nilAngleBytes)\n\t\treturn\n\t}\n\n\t// 
Remove pointers at or below the current depth from map used to detect\n\t// circular refs.\n\tfor k, depth := range f.pointers {\n\t\tif depth >= f.depth {\n\t\t\tdelete(f.pointers, k)\n\t\t}\n\t}\n\n\t// Keep list of all dereferenced pointers to possibly show later.\n\tpointerChain := make([]uintptr, 0)\n\n\t// Figure out how many levels of indirection there are by derferencing\n\t// pointers and unpacking interfaces down the chain while detecting circular\n\t// references.\n\tnilFound := false\n\tcycleFound := false\n\tindirects := 0\n\tve := v\n\tfor ve.Kind() == reflect.Ptr {\n\t\tif ve.IsNil() {\n\t\t\tnilFound = true\n\t\t\tbreak\n\t\t}\n\t\tindirects++\n\t\taddr := ve.Pointer()\n\t\tpointerChain = append(pointerChain, addr)\n\t\tif pd, ok := f.pointers[addr]; ok && pd < f.depth {\n\t\t\tcycleFound = true\n\t\t\tindirects--\n\t\t\tbreak\n\t\t}\n\t\tf.pointers[addr] = f.depth\n\n\t\tve = ve.Elem()\n\t\tif ve.Kind() == reflect.Interface {\n\t\t\tif ve.IsNil() {\n\t\t\t\tnilFound = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tve = ve.Elem()\n\t\t}\n\t}\n\n\t// Display type or indirection level depending on flags.\n\tif showTypes && !f.ignoreNextType {\n\t\tf.fs.Write(openParenBytes)\n\t\tf.fs.Write(bytes.Repeat(asteriskBytes, indirects))\n\t\tf.fs.Write([]byte(ve.Type().String()))\n\t\tf.fs.Write(closeParenBytes)\n\t} else {\n\t\tif nilFound || cycleFound {\n\t\t\tindirects += strings.Count(ve.Type().String(), \"*\")\n\t\t}\n\t\tf.fs.Write(openAngleBytes)\n\t\tf.fs.Write([]byte(strings.Repeat(\"*\", indirects)))\n\t\tf.fs.Write(closeAngleBytes)\n\t}\n\n\t// Display pointer information depending on flags.\n\tif f.fs.Flag('+') && (len(pointerChain) > 0) {\n\t\tf.fs.Write(openParenBytes)\n\t\tfor i, addr := range pointerChain {\n\t\t\tif i > 0 {\n\t\t\t\tf.fs.Write(pointerChainBytes)\n\t\t\t}\n\t\t\tprintHexPtr(f.fs, addr)\n\t\t}\n\t\tf.fs.Write(closeParenBytes)\n\t}\n\n\t// Display dereferenced value.\n\tswitch {\n\tcase nilFound:\n\t\tf.fs.Write(nilAngleBytes)\n\n\tcase 
cycleFound:\n\t\tf.fs.Write(circularShortBytes)\n\n\tdefault:\n\t\tf.ignoreNextType = true\n\t\tf.format(ve)\n\t}\n}\n\n// format is the main workhorse for providing the Formatter interface.  It\n// uses the passed reflect value to figure out what kind of object we are\n// dealing with and formats it appropriately.  It is a recursive function,\n// however circular data structures are detected and handled properly.\nfunc (f *formatState) format(v reflect.Value) {\n\t// Handle invalid reflect values immediately.\n\tkind := v.Kind()\n\tif kind == reflect.Invalid {\n\t\tf.fs.Write(invalidAngleBytes)\n\t\treturn\n\t}\n\n\t// Handle pointers specially.\n\tif kind == reflect.Ptr {\n\t\tf.formatPtr(v)\n\t\treturn\n\t}\n\n\t// Print type information unless already handled elsewhere.\n\tif !f.ignoreNextType && f.fs.Flag('#') {\n\t\tf.fs.Write(openParenBytes)\n\t\tf.fs.Write([]byte(v.Type().String()))\n\t\tf.fs.Write(closeParenBytes)\n\t}\n\tf.ignoreNextType = false\n\n\t// Call Stringer/error interfaces if they exist and the handle methods\n\t// flag is enabled.\n\tif !f.cs.DisableMethods {\n\t\tif (kind != reflect.Invalid) && (kind != reflect.Interface) {\n\t\t\tif handled := handleMethods(f.cs, f.fs, v); handled {\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}\n\n\tswitch kind {\n\tcase reflect.Invalid:\n\t\t// Do nothing.  
We should never get here since invalid has already\n\t\t// been handled above.\n\n\tcase reflect.Bool:\n\t\tprintBool(f.fs, v.Bool())\n\n\tcase reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:\n\t\tprintInt(f.fs, v.Int(), 10)\n\n\tcase reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:\n\t\tprintUint(f.fs, v.Uint(), 10)\n\n\tcase reflect.Float32:\n\t\tprintFloat(f.fs, v.Float(), 32)\n\n\tcase reflect.Float64:\n\t\tprintFloat(f.fs, v.Float(), 64)\n\n\tcase reflect.Complex64:\n\t\tprintComplex(f.fs, v.Complex(), 32)\n\n\tcase reflect.Complex128:\n\t\tprintComplex(f.fs, v.Complex(), 64)\n\n\tcase reflect.Slice:\n\t\tif v.IsNil() {\n\t\t\tf.fs.Write(nilAngleBytes)\n\t\t\tbreak\n\t\t}\n\t\tfallthrough\n\n\tcase reflect.Array:\n\t\tf.fs.Write(openBracketBytes)\n\t\tf.depth++\n\t\tif (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {\n\t\t\tf.fs.Write(maxShortBytes)\n\t\t} else {\n\t\t\tnumEntries := v.Len()\n\t\t\tfor i := 0; i < numEntries; i++ {\n\t\t\t\tif i > 0 {\n\t\t\t\t\tf.fs.Write(spaceBytes)\n\t\t\t\t}\n\t\t\t\tf.ignoreNextType = true\n\t\t\t\tf.format(f.unpackValue(v.Index(i)))\n\t\t\t}\n\t\t}\n\t\tf.depth--\n\t\tf.fs.Write(closeBracketBytes)\n\n\tcase reflect.String:\n\t\tf.fs.Write([]byte(v.String()))\n\n\tcase reflect.Interface:\n\t\t// The only time we should get here is for nil interfaces due to\n\t\t// unpackValue calls.\n\t\tif v.IsNil() {\n\t\t\tf.fs.Write(nilAngleBytes)\n\t\t}\n\n\tcase reflect.Ptr:\n\t\t// Do nothing.  
We should never get here since pointers have already\n\t\t// been handled above.\n\n\tcase reflect.Map:\n\t\t// nil maps should be indicated as different than empty maps\n\t\tif v.IsNil() {\n\t\t\tf.fs.Write(nilAngleBytes)\n\t\t\tbreak\n\t\t}\n\n\t\tf.fs.Write(openMapBytes)\n\t\tf.depth++\n\t\tif (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {\n\t\t\tf.fs.Write(maxShortBytes)\n\t\t} else {\n\t\t\tkeys := v.MapKeys()\n\t\t\tif f.cs.SortKeys {\n\t\t\t\tsortValues(keys, f.cs)\n\t\t\t}\n\t\t\tfor i, key := range keys {\n\t\t\t\tif i > 0 {\n\t\t\t\t\tf.fs.Write(spaceBytes)\n\t\t\t\t}\n\t\t\t\tf.ignoreNextType = true\n\t\t\t\tf.format(f.unpackValue(key))\n\t\t\t\tf.fs.Write(colonBytes)\n\t\t\t\tf.ignoreNextType = true\n\t\t\t\tf.format(f.unpackValue(v.MapIndex(key)))\n\t\t\t}\n\t\t}\n\t\tf.depth--\n\t\tf.fs.Write(closeMapBytes)\n\n\tcase reflect.Struct:\n\t\tnumFields := v.NumField()\n\t\tf.fs.Write(openBraceBytes)\n\t\tf.depth++\n\t\tif (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {\n\t\t\tf.fs.Write(maxShortBytes)\n\t\t} else {\n\t\t\tvt := v.Type()\n\t\t\tfor i := 0; i < numFields; i++ {\n\t\t\t\tif i > 0 {\n\t\t\t\t\tf.fs.Write(spaceBytes)\n\t\t\t\t}\n\t\t\t\tvtf := vt.Field(i)\n\t\t\t\tif f.fs.Flag('+') || f.fs.Flag('#') {\n\t\t\t\t\tf.fs.Write([]byte(vtf.Name))\n\t\t\t\t\tf.fs.Write(colonBytes)\n\t\t\t\t}\n\t\t\t\tf.format(f.unpackValue(v.Field(i)))\n\t\t\t}\n\t\t}\n\t\tf.depth--\n\t\tf.fs.Write(closeBraceBytes)\n\n\tcase reflect.Uintptr:\n\t\tprintHexPtr(f.fs, uintptr(v.Uint()))\n\n\tcase reflect.UnsafePointer, reflect.Chan, reflect.Func:\n\t\tprintHexPtr(f.fs, v.Pointer())\n\n\t// There were not any other types at the time this code was written, but\n\t// fall back to letting the default fmt package handle it if any get added.\n\tdefault:\n\t\tformat := f.buildDefaultFormat()\n\t\tif v.CanInterface() {\n\t\t\tfmt.Fprintf(f.fs, format, v.Interface())\n\t\t} else {\n\t\t\tfmt.Fprintf(f.fs, format, v.String())\n\t\t}\n\t}\n}\n\n// Format satisfies the 
fmt.Formatter interface. See NewFormatter for usage\n// details.\nfunc (f *formatState) Format(fs fmt.State, verb rune) {\n\tf.fs = fs\n\n\t// Use standard formatting for verbs that are not v.\n\tif verb != 'v' {\n\t\tformat := f.constructOrigFormat(verb)\n\t\tfmt.Fprintf(fs, format, f.value)\n\t\treturn\n\t}\n\n\tif f.value == nil {\n\t\tif fs.Flag('#') {\n\t\t\tfs.Write(interfaceBytes)\n\t\t}\n\t\tfs.Write(nilAngleBytes)\n\t\treturn\n\t}\n\n\tf.format(reflect.ValueOf(f.value))\n}\n\n// newFormatter is a helper function to consolidate the logic from the various\n// public methods which take varying config states.\nfunc newFormatter(cs *ConfigState, v interface{}) fmt.Formatter {\n\tfs := &formatState{value: v, cs: cs}\n\tfs.pointers = make(map[uintptr]int)\n\treturn fs\n}\n\n/*\nNewFormatter returns a custom formatter that satisfies the fmt.Formatter\ninterface.  As a result, it integrates cleanly with standard fmt package\nprinting functions.  The formatter is useful for inline printing of smaller data\ntypes similar to the standard %v format specifier.\n\nThe custom formatter only responds to the %v (most compact), %+v (adds pointer\naddresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb\ncombinations.  Any other verbs such as %x and %q will be sent to the the\nstandard fmt package for formatting.  In addition, the custom formatter ignores\nthe width and precision arguments (however they will still work on the format\nspecifiers not handled by the custom formatter).\n\nTypically this function shouldn't be called directly.  It is much easier to make\nuse of the custom formatter by calling one of the convenience functions such as\nPrintf, Println, or Fprintf.\n*/\nfunc NewFormatter(v interface{}) fmt.Formatter {\n\treturn newFormatter(&Config, v)\n}\n"
  },
  {
    "path": "vendor/github.com/davecgh/go-spew/spew/spew.go",
    "content": "/*\n * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>\n *\n * Permission to use, copy, modify, and distribute this software for any\n * purpose with or without fee is hereby granted, provided that the above\n * copyright notice and this permission notice appear in all copies.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n */\n\npackage spew\n\nimport (\n\t\"fmt\"\n\t\"io\"\n)\n\n// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were\n// passed with a default Formatter interface returned by NewFormatter.  It\n// returns the formatted string as a value that satisfies error.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Errorf(format string, a ...interface{}) (err error) {\n\treturn fmt.Errorf(format, convertArgs(a)...)\n}\n\n// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were\n// passed with a default Formatter interface returned by NewFormatter.  It\n// returns the number of bytes written and any write error encountered.  
See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Fprint(w io.Writer, a ...interface{}) (n int, err error) {\n\treturn fmt.Fprint(w, convertArgs(a)...)\n}\n\n// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were\n// passed with a default Formatter interface returned by NewFormatter.  It\n// returns the number of bytes written and any write error encountered.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {\n\treturn fmt.Fprintf(w, format, convertArgs(a)...)\n}\n\n// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it\n// passed with a default Formatter interface returned by NewFormatter.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Fprintln(w io.Writer, a ...interface{}) (n int, err error) {\n\treturn fmt.Fprintln(w, convertArgs(a)...)\n}\n\n// Print is a wrapper for fmt.Print that treats each argument as if it were\n// passed with a default Formatter interface returned by NewFormatter.  It\n// returns the number of bytes written and any write error encountered.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Print(spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Print(a ...interface{}) (n int, err error) {\n\treturn fmt.Print(convertArgs(a)...)\n}\n\n// Printf is a wrapper for fmt.Printf that treats each argument as if it were\n// passed with a default Formatter interface returned by NewFormatter.  It\n// returns the number of bytes written and any write error encountered.  
See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Printf(format string, a ...interface{}) (n int, err error) {\n\treturn fmt.Printf(format, convertArgs(a)...)\n}\n\n// Println is a wrapper for fmt.Println that treats each argument as if it were\n// passed with a default Formatter interface returned by NewFormatter.  It\n// returns the number of bytes written and any write error encountered.  See\n// NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Println(spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Println(a ...interface{}) (n int, err error) {\n\treturn fmt.Println(convertArgs(a)...)\n}\n\n// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were\n// passed with a default Formatter interface returned by NewFormatter.  It\n// returns the resulting string.  See NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Sprint(a ...interface{}) string {\n\treturn fmt.Sprint(convertArgs(a)...)\n}\n\n// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were\n// passed with a default Formatter interface returned by NewFormatter.  It\n// returns the resulting string.  See NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Sprintf(format string, a ...interface{}) string {\n\treturn fmt.Sprintf(format, convertArgs(a)...)\n}\n\n// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it\n// were passed with a default Formatter interface returned by NewFormatter.  It\n// returns the resulting string.  
See NewFormatter for formatting details.\n//\n// This function is shorthand for the following syntax:\n//\n//\tfmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b))\nfunc Sprintln(a ...interface{}) string {\n\treturn fmt.Sprintln(convertArgs(a)...)\n}\n\n// convertArgs accepts a slice of arguments and returns a slice of the same\n// length with each argument converted to a default spew Formatter interface.\nfunc convertArgs(args []interface{}) (formatters []interface{}) {\n\tformatters = make([]interface{}, len(args))\n\tfor index, arg := range args {\n\t\tformatters[index] = NewFormatter(arg)\n\t}\n\treturn formatters\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/.gitignore",
    "content": ".idea\n*.sw?\n.vscode\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/.travis.yml",
    "content": "language: go\n\ngo:\n  - 1.10.x\n  - 1.11.x\n  - 1.12.x\n  - 1.13.x\n  - 1.14.x\n\nscript:\n  - go get -d -t ./...\n  - go vet ./...\n  - go test ./...\n  - >\n    go_version=$(go version);\n    if [ ${go_version:13:4} = \"1.12\" ]; then\n      go get -u golang.org/x/tools/cmd/goimports;\n      goimports -d -e ./ | grep '.*' && { echo; echo \"Aborting due to non-empty goimports output.\"; exit 1; } || :;\n    fi\n\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/CHANGELOG.md",
    "content": "# Changelog\n\n## v4.1.2 (2020-06-02)\n\n- fix that handles MethodNotAllowed with path variables, thank you @caseyhadden for your contribution\n- fix to replace nested wildcards correctly in RoutePattern, thank you @@unmultimedio for your contribution\n- History of changes: see https://github.com/go-chi/chi/compare/v4.1.1...v4.1.2\n\n\n## v4.1.1 (2020-04-16)\n\n- fix for issue https://github.com/go-chi/chi/issues/411 which allows for overlapping regexp\n  route to the correct handler through a recursive tree search, thanks to @Jahaja for the PR/fix!\n- new middleware.RouteHeaders as a simple router for request headers with wildcard support\n- History of changes: see https://github.com/go-chi/chi/compare/v4.1.0...v4.1.1\n\n\n## v4.1.0 (2020-04-1)\n\n- middleware.LogEntry: Write method on interface now passes the response header\n  and an extra interface type useful for custom logger implementations.\n- middleware.WrapResponseWriter: minor fix\n- middleware.Recoverer: a bit prettier\n- History of changes: see https://github.com/go-chi/chi/compare/v4.0.4...v4.1.0\n\n\n## v4.0.4 (2020-03-24)\n\n- middleware.Recoverer: new pretty stack trace printing (https://github.com/go-chi/chi/pull/496)\n- a few minor improvements and fixes\n- History of changes: see https://github.com/go-chi/chi/compare/v4.0.3...v4.0.4\n\n\n## v4.0.3 (2020-01-09)\n\n- core: fix regexp routing to include default value when param is not matched\n- middleware: rewrite of middleware.Compress\n- middleware: suppress http.ErrAbortHandler in middleware.Recoverer\n- History of changes: see https://github.com/go-chi/chi/compare/v4.0.2...v4.0.3\n\n\n## v4.0.2 (2019-02-26)\n\n- Minor fixes\n- History of changes: see https://github.com/go-chi/chi/compare/v4.0.1...v4.0.2\n\n\n## v4.0.1 (2019-01-21)\n\n- Fixes issue with compress middleware: #382 #385\n- History of changes: see https://github.com/go-chi/chi/compare/v4.0.0...v4.0.1\n\n\n## v4.0.0 (2019-01-10)\n\n- chi v4 requires Go 1.10.3+ (or Go 
1.9.7+) - we have deprecated support for Go 1.7 and 1.8\n- router: respond with 404 on router with no routes (#362)\n- router: additional check to ensure wildcard is at the end of a url pattern (#333)\n- middleware: deprecate use of http.CloseNotifier (#347)\n- middleware: fix RedirectSlashes to include query params on redirect (#334)\n- History of changes: see https://github.com/go-chi/chi/compare/v3.3.4...v4.0.0\n\n\n## v3.3.4 (2019-01-07)\n\n- Minor middleware improvements. No changes to core library/router. Moving v3 into its\n- own branch as a version of chi for Go 1.7, 1.8, 1.9, 1.10, 1.11\n- History of changes: see https://github.com/go-chi/chi/compare/v3.3.3...v3.3.4\n\n\n## v3.3.3 (2018-08-27)\n\n- Minor release\n- See https://github.com/go-chi/chi/compare/v3.3.2...v3.3.3\n\n\n## v3.3.2 (2017-12-22)\n\n- Support to route trailing slashes on mounted sub-routers (#281)\n- middleware: new `ContentCharset` to check matching charsets. Thank you\n  @csucu for your community contribution!\n\n\n## v3.3.1 (2017-11-20)\n\n- middleware: new `AllowContentType` handler for explicit whitelist of accepted request Content-Types\n- middleware: new `SetHeader` handler for short-hand middleware to set a response header key/value\n- Minor bug fixes\n\n\n## v3.3.0 (2017-10-10)\n\n- New chi.RegisterMethod(method) to add support for custom HTTP methods, see _examples/custom-method for usage\n- Deprecated LINK and UNLINK methods from the default list, please use `chi.RegisterMethod(\"LINK\")` and `chi.RegisterMethod(\"UNLINK\")` in an `init()` function\n\n\n## v3.2.1 (2017-08-31)\n\n- Add new `Match(rctx *Context, method, path string) bool` method to `Routes` interface\n  and `Mux`. 
Match searches the mux's routing tree for a handler that matches the method/path\n- Add new `RouteMethod` to `*Context`\n- Add new `Routes` pointer to `*Context`\n- Add new `middleware.GetHead` to route missing HEAD requests to GET handler\n- Updated benchmarks (see README)\n\n\n## v3.1.5 (2017-08-02)\n\n- Setup golint and go vet for the project\n- As per golint, we've redefined `func ServerBaseContext(h http.Handler, baseCtx context.Context) http.Handler`\n  to `func ServerBaseContext(baseCtx context.Context, h http.Handler) http.Handler`\n\n\n## v3.1.0 (2017-07-10)\n\n- Fix a few minor issues after v3 release\n- Move `docgen` sub-pkg to https://github.com/go-chi/docgen\n- Move `render` sub-pkg to https://github.com/go-chi/render\n- Add new `URLFormat` handler to chi/middleware sub-pkg to make working with url mime \n  suffixes easier, ie. parsing `/articles/1.json` and `/articles/1.xml`. See comments in\n  https://github.com/go-chi/chi/blob/master/middleware/url_format.go for example usage.\n\n\n## v3.0.0 (2017-06-21)\n\n- Major update to chi library with many exciting updates, but also some *breaking changes*\n- URL parameter syntax changed from `/:id` to `/{id}` for even more flexible routing, such as\n  `/articles/{month}-{day}-{year}-{slug}`, `/articles/{id}`, and `/articles/{id}.{ext}` on the\n  same router\n- Support for regexp for routing patterns, in the form of `/{paramKey:regExp}` for example:\n  `r.Get(\"/articles/{name:[a-z]+}\", h)` and `chi.URLParam(r, \"name\")`\n- Add `Method` and `MethodFunc` to `chi.Router` to allow routing definitions such as\n  `r.Method(\"GET\", \"/\", h)` which provides a cleaner interface for custom handlers like\n  in `_examples/custom-handler`\n- Deprecating `mux#FileServer` helper function. 
Instead, we encourage users to create their\n  own using file handler with the stdlib, see `_examples/fileserver` for an example\n- Add support for LINK/UNLINK http methods via `r.Method()` and `r.MethodFunc()`\n- Moved the chi project to its own organization, to allow chi-related community packages to\n  be easily discovered and supported, at: https://github.com/go-chi\n- *NOTE:* please update your import paths to `\"github.com/go-chi/chi\"`\n- *NOTE:* chi v2 is still available at https://github.com/go-chi/chi/tree/v2\n\n\n## v2.1.0 (2017-03-30)\n\n- Minor improvements and update to the chi core library\n- Introduced a brand new `chi/render` sub-package to complete the story of building\n  APIs to offer a pattern for managing well-defined request / response payloads. Please\n  check out the updated `_examples/rest` example for how it works.\n- Added `MethodNotAllowed(h http.HandlerFunc)` to chi.Router interface\n\n\n## v2.0.0 (2017-01-06)\n\n- After many months of v2 being in an RC state with many companies and users running it in\n  production, the inclusion of some improvements to the middlewares, we are very pleased to\n  announce v2.0.0 of chi.\n\n\n## v2.0.0-rc1 (2016-07-26)\n\n- Huge update! chi v2 is a large refactor targetting Go 1.7+. As of Go 1.7, the popular\n  community `\"net/context\"` package has been included in the standard library as `\"context\"` and\n  utilized by `\"net/http\"` and `http.Request` to managing deadlines, cancelation signals and other\n  request-scoped values. We're very excited about the new context addition and are proud to\n  introduce chi v2, a minimal and powerful routing package for building large HTTP services,\n  with zero external dependencies. 
Chi focuses on idiomatic design and encourages the use of \n  stdlib HTTP handlers and middlwares.\n- chi v2 deprecates its `chi.Handler` interface and requires `http.Handler` or `http.HandlerFunc`\n- chi v2 stores URL routing parameters and patterns in the standard request context: `r.Context()`\n- chi v2 lower-level routing context is accessible by `chi.RouteContext(r.Context()) *chi.Context`,\n  which provides direct access to URL routing parameters, the routing path and the matching\n  routing patterns.\n- Users upgrading from chi v1 to v2, need to:\n  1. Update the old chi.Handler signature, `func(ctx context.Context, w http.ResponseWriter, r *http.Request)` to\n     the standard http.Handler: `func(w http.ResponseWriter, r *http.Request)`\n  2. Use `chi.URLParam(r *http.Request, paramKey string) string`\n     or `URLParamFromCtx(ctx context.Context, paramKey string) string` to access a url parameter value\n\n\n## v1.0.0 (2016-07-01)\n\n- Released chi v1 stable https://github.com/go-chi/chi/tree/v1.0.0 for Go 1.6 and older.\n\n\n## v0.9.0 (2016-03-31)\n\n- Reuse context objects via sync.Pool for zero-allocation routing [#33](https://github.com/go-chi/chi/pull/33)\n- BREAKING NOTE: due to subtle API changes, previously `chi.URLParams(ctx)[\"id\"]` used to access url parameters\n  has changed to: `chi.URLParam(ctx, \"id\")`\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/CONTRIBUTING.md",
    "content": "# Contributing\n\n## Prerequisites\n\n1. [Install Go][go-install].\n2. Download the sources and switch the working directory:\n\n    ```bash\n    go get -u -d github.com/go-chi/chi\n    cd $GOPATH/src/github.com/go-chi/chi\n    ```\n\n## Submitting a Pull Request\n\nA typical workflow is:\n\n1. [Fork the repository.][fork] [This tip maybe also helpful.][go-fork-tip]\n2. [Create a topic branch.][branch]\n3. Add tests for your change.\n4. Run `go test`. If your tests pass, return to the step 3.\n5. Implement the change and ensure the steps from the previous step pass.\n6. Run `goimports -w .`, to ensure the new code conforms to Go formatting guideline.\n7. [Add, commit and push your changes.][git-help]\n8. [Submit a pull request.][pull-req]\n\n[go-install]: https://golang.org/doc/install\n[go-fork-tip]: http://blog.campoy.cat/2014/03/github-and-go-forking-pull-requests-and.html\n[fork]: https://help.github.com/articles/fork-a-repo\n[branch]: http://learn.github.com/p/branching.html\n[git-help]: https://guides.github.com\n[pull-req]: https://help.github.com/articles/using-pull-requests\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/LICENSE",
    "content": "Copyright (c) 2015-present Peter Kieltyka (https://github.com/pkieltyka), Google Inc.\n\nMIT License\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of\nthe Software, and to permit persons to whom the Software is furnished to do so,\nsubject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS\nFOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR\nCOPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER\nIN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN\nCONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/README.md",
    "content": "# <img alt=\"chi\" src=\"https://cdn.rawgit.com/go-chi/chi/master/_examples/chi.svg\" width=\"220\" />\n\n\n[![GoDoc Widget]][GoDoc] [![Travis Widget]][Travis]\n\n`chi` is a lightweight, idiomatic and composable router for building Go HTTP services. It's\nespecially good at helping you write large REST API services that are kept maintainable as your\nproject grows and changes. `chi` is built on the new `context` package introduced in Go 1.7 to\nhandle signaling, cancelation and request-scoped values across a handler chain.\n\nThe focus of the project has been to seek out an elegant and comfortable design for writing\nREST API servers, written during the development of the Pressly API service that powers our\npublic API service, which in turn powers all of our client-side applications.\n\nThe key considerations of chi's design are: project structure, maintainability, standard http\nhandlers (stdlib-only), developer productivity, and deconstructing a large system into many small\nparts. The core router `github.com/go-chi/chi` is quite small (less than 1000 LOC), but we've also\nincluded some useful/optional subpackages: [middleware](/middleware), [render](https://github.com/go-chi/render) and [docgen](https://github.com/go-chi/docgen). 
We hope you enjoy it too!\n\n## Install\n\n`go get -u github.com/go-chi/chi`\n\n\n## Features\n\n* **Lightweight** - cloc'd in ~1000 LOC for the chi router\n* **Fast** - yes, see [benchmarks](#benchmarks)\n* **100% compatible with net/http** - use any http or middleware pkg in the ecosystem that is also compatible with `net/http`\n* **Designed for modular/composable APIs** - middlewares, inline middlewares, route groups and subrouter mounting\n* **Context control** - built on new `context` package, providing value chaining, cancellations and timeouts\n* **Robust** - in production at Pressly, CloudFlare, Heroku, 99Designs, and many others (see [discussion](https://github.com/go-chi/chi/issues/91))\n* **Doc generation** - `docgen` auto-generates routing documentation from your source to JSON or Markdown\n* **No external dependencies** - plain ol' Go stdlib + net/http\n\n\n## Examples\n\nSee [_examples/](https://github.com/go-chi/chi/blob/master/_examples/) for a variety of examples.\n\n\n**As easy as:**\n\n```go\npackage main\n\nimport (\n\t\"net/http\"\n\n\t\"github.com/go-chi/chi\"\n\t\"github.com/go-chi/chi/middleware\"\n)\n\nfunc main() {\n\tr := chi.NewRouter()\n\tr.Use(middleware.Logger)\n\tr.Get(\"/\", func(w http.ResponseWriter, r *http.Request) {\n\t\tw.Write([]byte(\"welcome\"))\n\t})\n\thttp.ListenAndServe(\":3000\", r)\n}\n```\n\n**REST Preview:**\n\nHere is a little preview of what routing looks like with chi. 
Also take a look at the generated routing docs\nin JSON ([routes.json](https://github.com/go-chi/chi/blob/master/_examples/rest/routes.json)) and in\nMarkdown ([routes.md](https://github.com/go-chi/chi/blob/master/_examples/rest/routes.md)).\n\nI highly recommend reading the source of the [examples](https://github.com/go-chi/chi/blob/master/_examples/) listed\nabove, they will show you all the features of chi and serve as a good form of documentation.\n\n```go\nimport (\n  //...\n  \"context\"\n  \"github.com/go-chi/chi\"\n  \"github.com/go-chi/chi/middleware\"\n)\n\nfunc main() {\n  r := chi.NewRouter()\n\n  // A good base middleware stack\n  r.Use(middleware.RequestID)\n  r.Use(middleware.RealIP)\n  r.Use(middleware.Logger)\n  r.Use(middleware.Recoverer)\n\n  // Set a timeout value on the request context (ctx), that will signal\n  // through ctx.Done() that the request has timed out and further\n  // processing should be stopped.\n  r.Use(middleware.Timeout(60 * time.Second))\n\n  r.Get(\"/\", func(w http.ResponseWriter, r *http.Request) {\n    w.Write([]byte(\"hi\"))\n  })\n\n  // RESTy routes for \"articles\" resource\n  r.Route(\"/articles\", func(r chi.Router) {\n    r.With(paginate).Get(\"/\", listArticles)                           // GET /articles\n    r.With(paginate).Get(\"/{month}-{day}-{year}\", listArticlesByDate) // GET /articles/01-16-2017\n\n    r.Post(\"/\", createArticle)                                        // POST /articles\n    r.Get(\"/search\", searchArticles)                                  // GET /articles/search\n\n    // Regexp url parameters:\n    r.Get(\"/{articleSlug:[a-z-]+}\", getArticleBySlug)                // GET /articles/home-is-toronto\n\n    // Subrouters:\n    r.Route(\"/{articleID}\", func(r chi.Router) {\n      r.Use(ArticleCtx)\n      r.Get(\"/\", getArticle)                                          // GET /articles/123\n      r.Put(\"/\", updateArticle)                                       // PUT /articles/123\n      
r.Delete(\"/\", deleteArticle)                                    // DELETE /articles/123\n    })\n  })\n\n  // Mount the admin sub-router\n  r.Mount(\"/admin\", adminRouter())\n\n  http.ListenAndServe(\":3333\", r)\n}\n\nfunc ArticleCtx(next http.Handler) http.Handler {\n  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n    articleID := chi.URLParam(r, \"articleID\")\n    article, err := dbGetArticle(articleID)\n    if err != nil {\n      http.Error(w, http.StatusText(404), 404)\n      return\n    }\n    ctx := context.WithValue(r.Context(), \"article\", article)\n    next.ServeHTTP(w, r.WithContext(ctx))\n  })\n}\n\nfunc getArticle(w http.ResponseWriter, r *http.Request) {\n  ctx := r.Context()\n  article, ok := ctx.Value(\"article\").(*Article)\n  if !ok {\n    http.Error(w, http.StatusText(422), 422)\n    return\n  }\n  w.Write([]byte(fmt.Sprintf(\"title:%s\", article.Title)))\n}\n\n// A completely separate router for administrator routes\nfunc adminRouter() http.Handler {\n  r := chi.NewRouter()\n  r.Use(AdminOnly)\n  r.Get(\"/\", adminIndex)\n  r.Get(\"/accounts\", adminListAccounts)\n  return r\n}\n\nfunc AdminOnly(next http.Handler) http.Handler {\n  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n    ctx := r.Context()\n    perm, ok := ctx.Value(\"acl.permission\").(YourPermissionType)\n    if !ok || !perm.IsAdmin() {\n      http.Error(w, http.StatusText(403), 403)\n      return\n    }\n    next.ServeHTTP(w, r)\n  })\n}\n```\n\n\n## Router design\n\nchi's router is based on a kind of [Patricia Radix trie](https://en.wikipedia.org/wiki/Radix_tree).\nThe router is fully compatible with `net/http`.\n\nBuilt on top of the tree is the `Router` interface:\n\n```go\n// Router consisting of the core routing methods used by chi's Mux,\n// using only the standard net/http.\ntype Router interface {\n\thttp.Handler\n\tRoutes\n\n\t// Use appends one or more middlewares onto the Router stack.\n\tUse(middlewares 
...func(http.Handler) http.Handler)\n\n\t// With adds inline middlewares for an endpoint handler.\n\tWith(middlewares ...func(http.Handler) http.Handler) Router\n\n\t// Group adds a new inline-Router along the current routing\n\t// path, with a fresh middleware stack for the inline-Router.\n\tGroup(fn func(r Router)) Router\n\n\t// Route mounts a sub-Router along a `pattern` string.\n\tRoute(pattern string, fn func(r Router)) Router\n\n\t// Mount attaches another http.Handler along ./pattern/*\n\tMount(pattern string, h http.Handler)\n\n\t// Handle and HandleFunc add routes for `pattern` that match\n\t// all HTTP methods.\n\tHandle(pattern string, h http.Handler)\n\tHandleFunc(pattern string, h http.HandlerFunc)\n\n\t// Method and MethodFunc add routes for `pattern` that match\n\t// the `method` HTTP method.\n\tMethod(method, pattern string, h http.Handler)\n\tMethodFunc(method, pattern string, h http.HandlerFunc)\n\n\t// HTTP-method routing along `pattern`\n\tConnect(pattern string, h http.HandlerFunc)\n\tDelete(pattern string, h http.HandlerFunc)\n\tGet(pattern string, h http.HandlerFunc)\n\tHead(pattern string, h http.HandlerFunc)\n\tOptions(pattern string, h http.HandlerFunc)\n\tPatch(pattern string, h http.HandlerFunc)\n\tPost(pattern string, h http.HandlerFunc)\n\tPut(pattern string, h http.HandlerFunc)\n\tTrace(pattern string, h http.HandlerFunc)\n\n\t// NotFound defines a handler to respond whenever a route could\n\t// not be found.\n\tNotFound(h http.HandlerFunc)\n\n\t// MethodNotAllowed defines a handler to respond whenever a method is\n\t// not allowed.\n\tMethodNotAllowed(h http.HandlerFunc)\n}\n\n// Routes interface adds two methods for router traversal, which is also\n// used by the github.com/go-chi/docgen package to generate documentation for Routers.\ntype Routes interface {\n\t// Routes returns the routing tree in an easily traversable structure.\n\tRoutes() []Route\n\n\t// Middlewares returns the list of middlewares in use by the 
router.\n\tMiddlewares() Middlewares\n\n\t// Match searches the routing tree for a handler that matches\n\t// the method/path - similar to routing a http request, but without\n\t// executing the handler thereafter.\n\tMatch(rctx *Context, method, path string) bool\n}\n```\n\nEach routing method accepts a URL `pattern` and chain of `handlers`. The URL pattern\nsupports named params (ie. `/users/{userID}`) and wildcards (ie. `/admin/*`). URL parameters\ncan be fetched at runtime by calling `chi.URLParam(r, \"userID\")` for named parameters\nand `chi.URLParam(r, \"*\")` for a wildcard parameter.\n\n\n### Middleware handlers\n\nchi's middlewares are just stdlib net/http middleware handlers. There is nothing special\nabout them, which means the router and all the tooling is designed to be compatible and\nfriendly with any middleware in the community. This offers much better extensibility and reuse\nof packages and is at the heart of chi's purpose.\n\nHere is an example of a standard net/http middleware handler using the new request context\navailable in Go. This middleware sets a hypothetical user identifier on the request\ncontext and calls the next handler in the chain.\n\n```go\n// HTTP middleware setting a value on the request context\nfunc MyMiddleware(next http.Handler) http.Handler {\n  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n    ctx := context.WithValue(r.Context(), \"user\", \"123\")\n    next.ServeHTTP(w, r.WithContext(ctx))\n  })\n}\n```\n\n\n### Request handlers\n\nchi uses standard net/http request handlers. 
This little snippet is an example of a http.Handler\nfunc that reads a user identifier from the request context - hypothetically, identifying\nthe user sending an authenticated request, validated+set by a previous middleware handler.\n\n```go\n// HTTP handler accessing data from the request context.\nfunc MyRequestHandler(w http.ResponseWriter, r *http.Request) {\n  user := r.Context().Value(\"user\").(string)\n  w.Write([]byte(fmt.Sprintf(\"hi %s\", user)))\n}\n```\n\n\n### URL parameters\n\nchi's router parses and stores URL parameters right onto the request context. Here is\nan example of how to access URL params in your net/http handlers. And of course, middlewares\nare able to access the same information.\n\n```go\n// HTTP handler accessing the url routing parameters.\nfunc MyRequestHandler(w http.ResponseWriter, r *http.Request) {\n  userID := chi.URLParam(r, \"userID\") // from a route like /users/{userID}\n\n  ctx := r.Context()\n  key := ctx.Value(\"key\").(string)\n\n  w.Write([]byte(fmt.Sprintf(\"hi %v, %v\", userID, key)))\n}\n```\n\n\n## Middlewares\n\nchi comes equipped with an optional `middleware` package, providing a suite of standard\n`net/http` middlewares. 
Please note, any middleware in the ecosystem that is also compatible\nwith `net/http` can be used with chi's mux.\n\n### Core middlewares\n\n-----------------------------------------------------------------------------------------------------------\n| chi/middleware Handler | description                                                                    |\n|:----------------------|:---------------------------------------------------------------------------------\n| AllowContentType      | Explicit whitelist of accepted request Content-Types                            |\n| BasicAuth             | Basic HTTP authentication                                                       |\n| Compress              | Gzip compression for clients that accept compressed responses                   |\n| GetHead               | Automatically route undefined HEAD requests to GET handlers                     |\n| Heartbeat             | Monitoring endpoint to check the servers pulse                                  |\n| Logger                | Logs the start and end of each request with the elapsed processing time         |\n| NoCache               | Sets response headers to prevent clients from caching                           |\n| Profiler              | Easily attach net/http/pprof to your routers                                    |\n| RealIP                | Sets a http.Request's RemoteAddr to either X-Forwarded-For or X-Real-IP         |\n| Recoverer             | Gracefully absorb panics and prints the stack trace                             |\n| RequestID             | Injects a request ID into the context of each request                           |\n| RedirectSlashes       | Redirect slashes on routing paths                                               |\n| SetHeader             | Short-hand middleware to set a response header key/value                        |\n| StripSlashes          | Strip slashes on routing paths                                                  |\n| 
Throttle              | Puts a ceiling on the number of concurrent requests                             |\n| Timeout               | Signals to the request context when the timeout deadline is reached             |\n| URLFormat             | Parse extension from url and put it on request context                          |\n| WithValue             | Short-hand middleware to set a key/value on the request context                 |\n-----------------------------------------------------------------------------------------------------------\n\n### Extra middlewares & packages\n\nPlease see https://github.com/go-chi for additional packages.\n\n--------------------------------------------------------------------------------------------------------------------\n| package                                            | description                                                 |\n|:---------------------------------------------------|:-------------------------------------------------------------\n| [cors](https://github.com/go-chi/cors)             | Cross-origin resource sharing (CORS)                        |\n| [docgen](https://github.com/go-chi/docgen)         | Print chi.Router routes at runtime                          |\n| [jwtauth](https://github.com/go-chi/jwtauth)       | JWT authentication                                          |\n| [hostrouter](https://github.com/go-chi/hostrouter) | Domain/host based request routing                           |\n| [httplog](https://github.com/go-chi/httplog)       | Small but powerful structured HTTP request logging          |\n| [httprate](https://github.com/go-chi/httprate)     | HTTP request rate limiter                                   |\n| [httptracer](https://github.com/go-chi/httptracer) | HTTP request performance tracing library                    |\n| [httpvcr](https://github.com/go-chi/httpvcr)       | Write deterministic tests for external sources              |\n| [stampede](https://github.com/go-chi/stampede)     | 
HTTP request coalescer                                      |\n--------------------------------------------------------------------------------------------------------------------\n\nplease [submit a PR](./CONTRIBUTING.md) if you'd like to include a link to a chi-compatible middleware\n\n\n## context?\n\n`context` is a tiny pkg that provides simple interface to signal context across call stacks\nand goroutines. It was originally written by [Sameer Ajmani](https://github.com/Sajmani)\nand is available in stdlib since go1.7.\n\nLearn more at https://blog.golang.org/context\n\nand..\n* Docs: https://golang.org/pkg/context\n* Source: https://github.com/golang/go/tree/master/src/context\n\n\n## Benchmarks\n\nThe benchmark suite: https://github.com/pkieltyka/go-http-routing-benchmark\n\nResults as of Jan 9, 2019 with Go 1.11.4 on Linux X1 Carbon laptop\n\n```shell\nBenchmarkChi_Param            3000000         475 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_Param5           2000000         696 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_Param20          1000000        1275 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_ParamWrite       3000000         505 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_GithubStatic     3000000         508 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_GithubParam      2000000         669 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_GithubAll          10000      134627 ns/op     87699 B/op    609 allocs/op\nBenchmarkChi_GPlusStatic      3000000         402 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_GPlusParam       3000000         500 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_GPlus2Params     3000000         586 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_GPlusAll          200000        7237 ns/op      5616 B/op     39 allocs/op\nBenchmarkChi_ParseStatic      3000000         408 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_ParseParam       3000000         488 ns/op     
  432 B/op      3 allocs/op\nBenchmarkChi_Parse2Params     3000000         551 ns/op       432 B/op      3 allocs/op\nBenchmarkChi_ParseAll          100000       13508 ns/op     11232 B/op     78 allocs/op\nBenchmarkChi_StaticAll          20000       81933 ns/op     67826 B/op    471 allocs/op\n```\n\nComparison with other routers: https://gist.github.com/pkieltyka/123032f12052520aaccab752bd3e78cc\n\nNOTE: the allocs in the benchmark above are from the calls to http.Request's\n`WithContext(context.Context)` method that clones the http.Request, sets the `Context()`\non the duplicated (alloc'd) request and returns the new request object. This is just\nhow setting context on a request in Go works.\n\n\n## Credits\n\n* Carl Jackson for https://github.com/zenazn/goji\n  * Parts of chi's thinking come from goji, and chi's middleware package\n    sources from goji.\n* Armon Dadgar for https://github.com/armon/go-radix\n* Contributions: [@VojtechVitek](https://github.com/VojtechVitek)\n\nWe'll be more than happy to see [your contributions](./CONTRIBUTING.md)!\n\n\n## Beyond REST\n\nchi is just a http router that lets you decompose request handling into many smaller layers.\nMany companies use chi to write REST services for their public APIs. 
But, REST is just a convention\nfor managing state via HTTP, and there's a lot of other pieces required to write a complete client-server\nsystem or network of microservices.\n\nLooking beyond REST, I also recommend some newer works in the field:\n* [webrpc](https://github.com/webrpc/webrpc) - Web-focused RPC client+server framework with code-gen\n* [gRPC](https://github.com/grpc/grpc-go) - Google's RPC framework via protobufs\n* [graphql](https://github.com/99designs/gqlgen) - Declarative query language\n* [NATS](https://nats.io) - lightweight pub-sub\n\n\n## License\n\nCopyright (c) 2015-present [Peter Kieltyka](https://github.com/pkieltyka)\n\nLicensed under [MIT License](./LICENSE)\n\n[GoDoc]: https://godoc.org/github.com/go-chi/chi\n[GoDoc Widget]: https://godoc.org/github.com/go-chi/chi?status.svg\n[Travis]: https://travis-ci.org/go-chi/chi\n[Travis Widget]: https://travis-ci.org/go-chi/chi.svg?branch=master\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/chain.go",
    "content": "package chi\n\nimport \"net/http\"\n\n// Chain returns a Middlewares type from a slice of middleware handlers.\nfunc Chain(middlewares ...func(http.Handler) http.Handler) Middlewares {\n\treturn Middlewares(middlewares)\n}\n\n// Handler builds and returns a http.Handler from the chain of middlewares,\n// with `h http.Handler` as the final handler.\nfunc (mws Middlewares) Handler(h http.Handler) http.Handler {\n\treturn &ChainHandler{mws, h, chain(mws, h)}\n}\n\n// HandlerFunc builds and returns a http.Handler from the chain of middlewares,\n// with `h http.HandlerFunc` as the final handler.\nfunc (mws Middlewares) HandlerFunc(h http.HandlerFunc) http.Handler {\n\treturn &ChainHandler{mws, h, chain(mws, h)}\n}\n\n// ChainHandler is a http.Handler with support for handler composition and\n// execution.\ntype ChainHandler struct {\n\tMiddlewares Middlewares\n\tEndpoint    http.Handler\n\tchain       http.Handler\n}\n\nfunc (c *ChainHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {\n\tc.chain.ServeHTTP(w, r)\n}\n\n// chain builds a http.Handler composed of an inline middleware stack and endpoint\n// handler in the order they are passed.\nfunc chain(middlewares []func(http.Handler) http.Handler, endpoint http.Handler) http.Handler {\n\t// Return ahead of time if there aren't any middlewares for the chain\n\tif len(middlewares) == 0 {\n\t\treturn endpoint\n\t}\n\n\t// Wrap the end handler with the middleware chain\n\th := middlewares[len(middlewares)-1](endpoint)\n\tfor i := len(middlewares) - 2; i >= 0; i-- {\n\t\th = middlewares[i](h)\n\t}\n\n\treturn h\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/chi.go",
    "content": "//\n// Package chi is a small, idiomatic and composable router for building HTTP services.\n//\n// chi requires Go 1.10 or newer.\n//\n// Example:\n//  package main\n//\n//  import (\n//  \t\"net/http\"\n//\n//  \t\"github.com/go-chi/chi\"\n//  \t\"github.com/go-chi/chi/middleware\"\n//  )\n//\n//  func main() {\n//  \tr := chi.NewRouter()\n//  \tr.Use(middleware.Logger)\n//  \tr.Use(middleware.Recoverer)\n//\n//  \tr.Get(\"/\", func(w http.ResponseWriter, r *http.Request) {\n//  \t\tw.Write([]byte(\"root.\"))\n//  \t})\n//\n//  \thttp.ListenAndServe(\":3333\", r)\n//  }\n//\n// See github.com/go-chi/chi/_examples/ for more in-depth examples.\n//\n// URL patterns allow for easy matching of path components in HTTP\n// requests. The matching components can then be accessed using\n// chi.URLParam(). All patterns must begin with a slash.\n//\n// A simple named placeholder {name} matches any sequence of characters\n// up to the next / or the end of the URL. Trailing slashes on paths must\n// be handled explicitly.\n//\n// A placeholder with a name followed by a colon allows a regular\n// expression match, for example {number:\\\\d+}. The regular expression\n// syntax is Go's normal regexp RE2 syntax, except that regular expressions\n// including { or } are not supported, and / will never be\n// matched. An anonymous regexp pattern is allowed, using an empty string\n// before the colon in the placeholder, such as {:\\\\d+}\n//\n// The special placeholder of asterisk matches the rest of the requested\n// URL. Any trailing characters in the pattern are ignored. 
This is the only\n// placeholder which will match / characters.\n//\n// Examples:\n//  \"/user/{name}\" matches \"/user/jsmith\" but not \"/user/jsmith/info\" or \"/user/jsmith/\"\n//  \"/user/{name}/info\" matches \"/user/jsmith/info\"\n//  \"/page/*\" matches \"/page/intro/latest\"\n//  \"/page/*/index\" also matches \"/page/intro/latest\"\n//  \"/date/{yyyy:\\\\d\\\\d\\\\d\\\\d}/{mm:\\\\d\\\\d}/{dd:\\\\d\\\\d}\" matches \"/date/2017/04/01\"\n//\npackage chi\n\nimport \"net/http\"\n\n// NewRouter returns a new Mux object that implements the Router interface.\nfunc NewRouter() *Mux {\n\treturn NewMux()\n}\n\n// Router consisting of the core routing methods used by chi's Mux,\n// using only the standard net/http.\ntype Router interface {\n\thttp.Handler\n\tRoutes\n\n\t// Use appends one or more middlewares onto the Router stack.\n\tUse(middlewares ...func(http.Handler) http.Handler)\n\n\t// With adds inline middlewares for an endpoint handler.\n\tWith(middlewares ...func(http.Handler) http.Handler) Router\n\n\t// Group adds a new inline-Router along the current routing\n\t// path, with a fresh middleware stack for the inline-Router.\n\tGroup(fn func(r Router)) Router\n\n\t// Route mounts a sub-Router along a `pattern` string.\n\tRoute(pattern string, fn func(r Router)) Router\n\n\t// Mount attaches another http.Handler along ./pattern/*\n\tMount(pattern string, h http.Handler)\n\n\t// Handle and HandleFunc add routes for `pattern` that match\n\t// all HTTP methods.\n\tHandle(pattern string, h http.Handler)\n\tHandleFunc(pattern string, h http.HandlerFunc)\n\n\t// Method and MethodFunc add routes for `pattern` that match\n\t// the `method` HTTP method.\n\tMethod(method, pattern string, h http.Handler)\n\tMethodFunc(method, pattern string, h http.HandlerFunc)\n\n\t// HTTP-method routing along `pattern`\n\tConnect(pattern string, h http.HandlerFunc)\n\tDelete(pattern string, h http.HandlerFunc)\n\tGet(pattern string, h http.HandlerFunc)\n\tHead(pattern string, 
h http.HandlerFunc)\n\tOptions(pattern string, h http.HandlerFunc)\n\tPatch(pattern string, h http.HandlerFunc)\n\tPost(pattern string, h http.HandlerFunc)\n\tPut(pattern string, h http.HandlerFunc)\n\tTrace(pattern string, h http.HandlerFunc)\n\n\t// NotFound defines a handler to respond whenever a route could\n\t// not be found.\n\tNotFound(h http.HandlerFunc)\n\n\t// MethodNotAllowed defines a handler to respond whenever a method is\n\t// not allowed.\n\tMethodNotAllowed(h http.HandlerFunc)\n}\n\n// Routes interface adds two methods for router traversal, which is also\n// used by the `docgen` subpackage to generate documentation for Routers.\ntype Routes interface {\n\t// Routes returns the routing tree in an easily traversable structure.\n\tRoutes() []Route\n\n\t// Middlewares returns the list of middlewares in use by the router.\n\tMiddlewares() Middlewares\n\n\t// Match searches the routing tree for a handler that matches\n\t// the method/path - similar to routing a http request, but without\n\t// executing the handler thereafter.\n\tMatch(rctx *Context, method, path string) bool\n}\n\n// Middlewares type is a slice of standard middleware handlers with methods\n// to compose middleware chains and http.Handler's.\ntype Middlewares []func(http.Handler) http.Handler\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/context.go",
    "content": "package chi\n\nimport (\n\t\"context\"\n\t\"net\"\n\t\"net/http\"\n\t\"strings\"\n)\n\n// URLParam returns the url parameter from a http.Request object.\nfunc URLParam(r *http.Request, key string) string {\n\tif rctx := RouteContext(r.Context()); rctx != nil {\n\t\treturn rctx.URLParam(key)\n\t}\n\treturn \"\"\n}\n\n// URLParamFromCtx returns the url parameter from a http.Request Context.\nfunc URLParamFromCtx(ctx context.Context, key string) string {\n\tif rctx := RouteContext(ctx); rctx != nil {\n\t\treturn rctx.URLParam(key)\n\t}\n\treturn \"\"\n}\n\n// RouteContext returns chi's routing Context object from a\n// http.Request Context.\nfunc RouteContext(ctx context.Context) *Context {\n\tval, _ := ctx.Value(RouteCtxKey).(*Context)\n\treturn val\n}\n\n// ServerBaseContext wraps an http.Handler to set the request context to the\n// `baseCtx`.\nfunc ServerBaseContext(baseCtx context.Context, h http.Handler) http.Handler {\n\tfn := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tctx := r.Context()\n\t\tbaseCtx := baseCtx\n\n\t\t// Copy over default net/http server context keys\n\t\tif v, ok := ctx.Value(http.ServerContextKey).(*http.Server); ok {\n\t\t\tbaseCtx = context.WithValue(baseCtx, http.ServerContextKey, v)\n\t\t}\n\t\tif v, ok := ctx.Value(http.LocalAddrContextKey).(net.Addr); ok {\n\t\t\tbaseCtx = context.WithValue(baseCtx, http.LocalAddrContextKey, v)\n\t\t}\n\n\t\th.ServeHTTP(w, r.WithContext(baseCtx))\n\t})\n\treturn fn\n}\n\n// NewRouteContext returns a new routing Context object.\nfunc NewRouteContext() *Context {\n\treturn &Context{}\n}\n\nvar (\n\t// RouteCtxKey is the context.Context key to store the request context.\n\tRouteCtxKey = &contextKey{\"RouteContext\"}\n)\n\n// Context is the default routing context set on the root node of a\n// request context to track route patterns, URL parameters and\n// an optional routing path.\ntype Context struct {\n\tRoutes Routes\n\n\t// Routing path/method override used 
during the route search.\n\t// See Mux#routeHTTP method.\n\tRoutePath   string\n\tRouteMethod string\n\n\t// Routing pattern stack throughout the lifecycle of the request,\n\t// across all connected routers. It is a record of all matching\n\t// patterns across a stack of sub-routers.\n\tRoutePatterns []string\n\n\t// URLParams are the stack of routeParams captured during the\n\t// routing lifecycle across a stack of sub-routers.\n\tURLParams RouteParams\n\n\t// The endpoint routing pattern that matched the request URI path\n\t// or `RoutePath` of the current sub-router. This value will update\n\t// during the lifecycle of a request passing through a stack of\n\t// sub-routers.\n\troutePattern string\n\n\t// Route parameters matched for the current sub-router. It is\n\t// intentionally unexported so it can't be tampered with.\n\trouteParams RouteParams\n\n\t// methodNotAllowed hint\n\tmethodNotAllowed bool\n}\n\n// Reset a routing context to its initial state.\nfunc (x *Context) Reset() {\n\tx.Routes = nil\n\tx.RoutePath = \"\"\n\tx.RouteMethod = \"\"\n\tx.RoutePatterns = x.RoutePatterns[:0]\n\tx.URLParams.Keys = x.URLParams.Keys[:0]\n\tx.URLParams.Values = x.URLParams.Values[:0]\n\n\tx.routePattern = \"\"\n\tx.routeParams.Keys = x.routeParams.Keys[:0]\n\tx.routeParams.Values = x.routeParams.Values[:0]\n\tx.methodNotAllowed = false\n}\n\n// URLParam returns the corresponding URL parameter value from the request\n// routing context.\nfunc (x *Context) URLParam(key string) string {\n\tfor k := len(x.URLParams.Keys) - 1; k >= 0; k-- {\n\t\tif x.URLParams.Keys[k] == key {\n\t\t\treturn x.URLParams.Values[k]\n\t\t}\n\t}\n\treturn \"\"\n}\n\n// RoutePattern builds the routing pattern string for the particular\n// request, at the particular point during routing. This means, the value\n// will change throughout the execution of a request in a router. 
That is\n// why it's advised to only use this value after calling the next handler.\n//\n// For example,\n//\n//   func Instrument(next http.Handler) http.Handler {\n//     return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n//       next.ServeHTTP(w, r)\n//       routePattern := chi.RouteContext(r.Context()).RoutePattern()\n//       measure(w, r, routePattern)\n//   \t })\n//   }\nfunc (x *Context) RoutePattern() string {\n\troutePattern := strings.Join(x.RoutePatterns, \"\")\n\treturn replaceWildcards(routePattern)\n}\n\n// replaceWildcards takes a route pattern and recursively replaces all\n// occurrences of \"/*/\" with \"/\".\nfunc replaceWildcards(p string) string {\n\tif strings.Contains(p, \"/*/\") {\n\t\treturn replaceWildcards(strings.Replace(p, \"/*/\", \"/\", -1))\n\t}\n\n\treturn p\n}\n\n// RouteParams is a structure to track URL routing parameters efficiently.\ntype RouteParams struct {\n\tKeys, Values []string\n}\n\n// Add will append a URL parameter to the end of the route param\nfunc (s *RouteParams) Add(key, value string) {\n\ts.Keys = append(s.Keys, key)\n\ts.Values = append(s.Values, value)\n}\n\n// contextKey is a value for use with context.WithValue. It's used as\n// a pointer so it fits in an interface{} without allocation. This technique\n// for defining context keys was copied from Go 1.7's new use of context in net/http.\ntype contextKey struct {\n\tname string\n}\n\nfunc (k *contextKey) String() string {\n\treturn \"chi context value \" + k.name\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/basic_auth.go",
    "content": "package middleware\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n)\n\n// BasicAuth implements a simple middleware handler for adding basic http auth to a route.\nfunc BasicAuth(realm string, creds map[string]string) func(next http.Handler) http.Handler {\n\treturn func(next http.Handler) http.Handler {\n\t\treturn http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\tuser, pass, ok := r.BasicAuth()\n\t\t\tif !ok {\n\t\t\t\tbasicAuthFailed(w, realm)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tcredPass, credUserOk := creds[user]\n\t\t\tif !credUserOk || pass != credPass {\n\t\t\t\tbasicAuthFailed(w, realm)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tnext.ServeHTTP(w, r)\n\t\t})\n\t}\n}\n\nfunc basicAuthFailed(w http.ResponseWriter, realm string) {\n\tw.Header().Add(\"WWW-Authenticate\", fmt.Sprintf(`Basic realm=\"%s\"`, realm))\n\tw.WriteHeader(http.StatusUnauthorized)\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/compress.go",
    "content": "package middleware\n\nimport (\n\t\"bufio\"\n\t\"compress/flate\"\n\t\"compress/gzip\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"net\"\n\t\"net/http\"\n\t\"strings\"\n\t\"sync\"\n)\n\nvar defaultCompressibleContentTypes = []string{\n\t\"text/html\",\n\t\"text/css\",\n\t\"text/plain\",\n\t\"text/javascript\",\n\t\"application/javascript\",\n\t\"application/x-javascript\",\n\t\"application/json\",\n\t\"application/atom+xml\",\n\t\"application/rss+xml\",\n\t\"image/svg+xml\",\n}\n\n// Compress is a middleware that compresses the response\n// body of the given content types to a data format based\n// on the Accept-Encoding request header. It uses a given\n// compression level.\n//\n// NOTE: make sure to set the Content-Type header on your response\n// otherwise this middleware will not compress the response body. For ex, in\n// your handler you should set w.Header().Set(\"Content-Type\", http.DetectContentType(yourBody))\n// or set it manually.\n//\n// Passing a compression level of 5 is a sensible value\nfunc Compress(level int, types ...string) func(next http.Handler) http.Handler {\n\tcompressor := NewCompressor(level, types...)\n\treturn compressor.Handler\n}\n\n// Compressor represents a set of encoding configurations.\ntype Compressor struct {\n\tlevel int // The compression level.\n\t// The mapping of encoder names to encoder functions.\n\tencoders map[string]EncoderFunc\n\t// The mapping of pooled encoders to pools.\n\tpooledEncoders map[string]*sync.Pool\n\t// The set of content types allowed to be compressed.\n\tallowedTypes     map[string]struct{}\n\tallowedWildcards map[string]struct{}\n\t// The list of encoders in order of decreasing precedence.\n\tencodingPrecedence []string\n}\n\n// NewCompressor creates a new Compressor that will handle encoding responses.\n//\n// The level should be one of the ones defined in the flate package.\n// The types are the content types that are allowed to be compressed.\nfunc NewCompressor(level int, types 
...string) *Compressor {\n\t// If types are provided, set those as the allowed types. If none are\n\t// provided, use the default list.\n\tallowedTypes := make(map[string]struct{})\n\tallowedWildcards := make(map[string]struct{})\n\tif len(types) > 0 {\n\t\tfor _, t := range types {\n\t\t\tif strings.Contains(strings.TrimSuffix(t, \"/*\"), \"*\") {\n\t\t\t\tpanic(fmt.Sprintf(\"middleware/compress: Unsupported content-type wildcard pattern '%s'. Only '/*' supported\", t))\n\t\t\t}\n\t\t\tif strings.HasSuffix(t, \"/*\") {\n\t\t\t\tallowedWildcards[strings.TrimSuffix(t, \"/*\")] = struct{}{}\n\t\t\t} else {\n\t\t\t\tallowedTypes[t] = struct{}{}\n\t\t\t}\n\t\t}\n\t} else {\n\t\tfor _, t := range defaultCompressibleContentTypes {\n\t\t\tallowedTypes[t] = struct{}{}\n\t\t}\n\t}\n\n\tc := &Compressor{\n\t\tlevel:            level,\n\t\tencoders:         make(map[string]EncoderFunc),\n\t\tpooledEncoders:   make(map[string]*sync.Pool),\n\t\tallowedTypes:     allowedTypes,\n\t\tallowedWildcards: allowedWildcards,\n\t}\n\n\t// Set the default encoders.  The precedence order uses the reverse\n\t// ordering that the encoders were added. This means adding new encoders\n\t// will move them to the front of the order.\n\t//\n\t// TODO:\n\t// lzma: Opera.\n\t// sdch: Chrome, Android. Gzip output + dictionary header.\n\t// br:   Brotli, see https://github.com/go-chi/chi/pull/326\n\n\t// HTTP 1.1 \"deflate\" (RFC 2616) stands for DEFLATE data (RFC 1951)\n\t// wrapped with zlib (RFC 1950). The zlib wrapper uses Adler-32\n\t// checksum compared to CRC-32 used in \"gzip\" and thus is faster.\n\t//\n\t// But.. 
some old browsers (MSIE, Safari 5.1) incorrectly expect\n\t// raw DEFLATE data only, without the mentioned zlib wrapper.\n\t// Because of this major confusion, most modern browsers try it\n\t// both ways, first looking for zlib headers.\n\t// Quote by Mark Adler: http://stackoverflow.com/a/9186091/385548\n\t//\n\t// The list of browsers having problems is quite big, see:\n\t// http://zoompf.com/blog/2012/02/lose-the-wait-http-compression\n\t// https://web.archive.org/web/20120321182910/http://www.vervestudios.co/projects/compression-tests/results\n\t//\n\t// That's why we prefer gzip over deflate. It's just more reliable\n\t// and not significantly slower than deflate.\n\tc.SetEncoder(\"deflate\", encoderDeflate)\n\n\t// TODO: Exception for old MSIE browsers that can't handle non-HTML?\n\t// https://zoompf.com/blog/2012/02/lose-the-wait-http-compression\n\tc.SetEncoder(\"gzip\", encoderGzip)\n\n\t// NOTE: Not implemented, intentionally:\n\t// case \"compress\": // LZW. Deprecated.\n\t// case \"bzip2\":    // Too slow on-the-fly.\n\t// case \"zopfli\":   // Too slow on-the-fly.\n\t// case \"xz\":       // Too slow on-the-fly.\n\treturn c\n}\n\n// SetEncoder can be used to set the implementation of a compression algorithm.\n//\n// The encoding should be a standardised identifier. 
See:\n// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding\n//\n// For example, add the Brotli algorithm:\n//\n//  import brotli_enc \"gopkg.in/kothar/brotli-go.v0/enc\"\n//\n//  compressor := middleware.NewCompressor(5, \"text/html\")\n//  compressor.SetEncoder(\"br\", func(w http.ResponseWriter, level int) io.Writer {\n//    params := brotli_enc.NewBrotliParams()\n//    params.SetQuality(level)\n//    return brotli_enc.NewBrotliWriter(params, w)\n//  })\nfunc (c *Compressor) SetEncoder(encoding string, fn EncoderFunc) {\n\tencoding = strings.ToLower(encoding)\n\tif encoding == \"\" {\n\t\tpanic(\"the encoding cannot be empty\")\n\t}\n\tif fn == nil {\n\t\tpanic(\"attempted to set a nil encoder function\")\n\t}\n\n\t// If we are adding a new encoder that is already registered, we have to\n\t// clear that one out first.\n\tif _, ok := c.pooledEncoders[encoding]; ok {\n\t\tdelete(c.pooledEncoders, encoding)\n\t}\n\tif _, ok := c.encoders[encoding]; ok {\n\t\tdelete(c.encoders, encoding)\n\t}\n\n\t// If the encoder supports resetting (ioResetterWriter), then it can be pooled.\n\tencoder := fn(ioutil.Discard, c.level)\n\tif encoder != nil {\n\t\tif _, ok := encoder.(ioResetterWriter); ok {\n\t\t\tpool := &sync.Pool{\n\t\t\t\tNew: func() interface{} {\n\t\t\t\t\treturn fn(ioutil.Discard, c.level)\n\t\t\t\t},\n\t\t\t}\n\t\t\tc.pooledEncoders[encoding] = pool\n\t\t}\n\t}\n\t// If the encoder is not in the pooledEncoders, add it to the normal encoders.\n\tif _, ok := c.pooledEncoders[encoding]; !ok {\n\t\tc.encoders[encoding] = fn\n\t}\n\n\tfor i, v := range c.encodingPrecedence {\n\t\tif v == encoding {\n\t\t\tc.encodingPrecedence = append(c.encodingPrecedence[:i], c.encodingPrecedence[i+1:]...)\n\t\t}\n\t}\n\n\tc.encodingPrecedence = append([]string{encoding}, c.encodingPrecedence...)\n}\n\n// Handler returns a new middleware that will compress the response based on the\n// current Compressor.\nfunc (c *Compressor) Handler(next http.Handler) 
http.Handler {\n\treturn http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tencoder, encoding, cleanup := c.selectEncoder(r.Header, w)\n\n\t\tcw := &compressResponseWriter{\n\t\t\tResponseWriter:   w,\n\t\t\tw:                w,\n\t\t\tcontentTypes:     c.allowedTypes,\n\t\t\tcontentWildcards: c.allowedWildcards,\n\t\t\tencoding:         encoding,\n\t\t\tcompressable:     false, // determined in post-handler\n\t\t}\n\t\tif encoder != nil {\n\t\t\tcw.w = encoder\n\t\t}\n\t\t// Re-add the encoder to the pool if applicable.\n\t\tdefer cleanup()\n\t\tdefer cw.Close()\n\n\t\tnext.ServeHTTP(cw, r)\n\t})\n}\n\n// selectEncoder returns the encoder, the name of the encoder, and a closer function.\nfunc (c *Compressor) selectEncoder(h http.Header, w io.Writer) (io.Writer, string, func()) {\n\theader := h.Get(\"Accept-Encoding\")\n\n\t// Parse the names of all accepted algorithms from the header.\n\taccepted := strings.Split(strings.ToLower(header), \",\")\n\n\t// Find supported encoder by accepted list by precedence\n\tfor _, name := range c.encodingPrecedence {\n\t\tif matchAcceptEncoding(accepted, name) {\n\t\t\tif pool, ok := c.pooledEncoders[name]; ok {\n\t\t\t\tencoder := pool.Get().(ioResetterWriter)\n\t\t\t\tcleanup := func() {\n\t\t\t\t\tpool.Put(encoder)\n\t\t\t\t}\n\t\t\t\tencoder.Reset(w)\n\t\t\t\treturn encoder, name, cleanup\n\n\t\t\t}\n\t\t\tif fn, ok := c.encoders[name]; ok {\n\t\t\t\treturn fn(w, c.level), name, func() {}\n\t\t\t}\n\t\t}\n\n\t}\n\n\t// No encoder found to match the accepted encoding\n\treturn nil, \"\", func() {}\n}\n\nfunc matchAcceptEncoding(accepted []string, encoding string) bool {\n\tfor _, v := range accepted {\n\t\tif strings.Contains(v, encoding) {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// An EncoderFunc is a function that wraps the provided io.Writer with a\n// streaming compression algorithm and returns it.\n//\n// In case of failure, the function should return nil.\ntype EncoderFunc func(w io.Writer, 
level int) io.Writer\n\n// Interface for types that allow resetting io.Writers.\ntype ioResetterWriter interface {\n\tio.Writer\n\tReset(w io.Writer)\n}\n\ntype compressResponseWriter struct {\n\thttp.ResponseWriter\n\n\t// The streaming encoder writer to be used if there is one. Otherwise,\n\t// this is just the normal writer.\n\tw                io.Writer\n\tencoding         string\n\tcontentTypes     map[string]struct{}\n\tcontentWildcards map[string]struct{}\n\twroteHeader      bool\n\tcompressable     bool\n}\n\nfunc (cw *compressResponseWriter) isCompressable() bool {\n\t// Parse the first part of the Content-Type response header.\n\tcontentType := cw.Header().Get(\"Content-Type\")\n\tif idx := strings.Index(contentType, \";\"); idx >= 0 {\n\t\tcontentType = contentType[0:idx]\n\t}\n\n\t// Is the content type compressable?\n\tif _, ok := cw.contentTypes[contentType]; ok {\n\t\treturn true\n\t}\n\tif idx := strings.Index(contentType, \"/\"); idx > 0 {\n\t\tcontentType = contentType[0:idx]\n\t\t_, ok := cw.contentWildcards[contentType]\n\t\treturn ok\n\t}\n\treturn false\n}\n\nfunc (cw *compressResponseWriter) WriteHeader(code int) {\n\tif cw.wroteHeader {\n\t\tcw.ResponseWriter.WriteHeader(code) // Allow multiple calls to propagate.\n\t\treturn\n\t}\n\tcw.wroteHeader = true\n\tdefer cw.ResponseWriter.WriteHeader(code)\n\n\t// Already compressed data?\n\tif cw.Header().Get(\"Content-Encoding\") != \"\" {\n\t\treturn\n\t}\n\n\tif !cw.isCompressable() {\n\t\tcw.compressable = false\n\t\treturn\n\t}\n\n\tif cw.encoding != \"\" {\n\t\tcw.compressable = true\n\t\tcw.Header().Set(\"Content-Encoding\", cw.encoding)\n\t\tcw.Header().Set(\"Vary\", \"Accept-Encoding\")\n\n\t\t// The content-length after compression is unknown\n\t\tcw.Header().Del(\"Content-Length\")\n\t}\n}\n\nfunc (cw *compressResponseWriter) Write(p []byte) (int, error) {\n\tif !cw.wroteHeader {\n\t\tcw.WriteHeader(http.StatusOK)\n\t}\n\n\treturn cw.writer().Write(p)\n}\n\nfunc (cw 
*compressResponseWriter) writer() io.Writer {\n\tif cw.compressable {\n\t\treturn cw.w\n\t} else {\n\t\treturn cw.ResponseWriter\n\t}\n}\n\ntype compressFlusher interface {\n\tFlush() error\n}\n\nfunc (cw *compressResponseWriter) Flush() {\n\tif f, ok := cw.writer().(http.Flusher); ok {\n\t\tf.Flush()\n\t}\n\t// If the underlying writer has a compression flush signature,\n\t// call this Flush() method instead\n\tif f, ok := cw.writer().(compressFlusher); ok {\n\t\tf.Flush()\n\n\t\t// Also flush the underlying response writer\n\t\tif f, ok := cw.ResponseWriter.(http.Flusher); ok {\n\t\t\tf.Flush()\n\t\t}\n\t}\n}\n\nfunc (cw *compressResponseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {\n\tif hj, ok := cw.writer().(http.Hijacker); ok {\n\t\treturn hj.Hijack()\n\t}\n\treturn nil, nil, errors.New(\"chi/middleware: http.Hijacker is unavailable on the writer\")\n}\n\nfunc (cw *compressResponseWriter) Push(target string, opts *http.PushOptions) error {\n\tif ps, ok := cw.writer().(http.Pusher); ok {\n\t\treturn ps.Push(target, opts)\n\t}\n\treturn errors.New(\"chi/middleware: http.Pusher is unavailable on the writer\")\n}\n\nfunc (cw *compressResponseWriter) Close() error {\n\tif c, ok := cw.writer().(io.WriteCloser); ok {\n\t\treturn c.Close()\n\t}\n\treturn errors.New(\"chi/middleware: io.WriteCloser is unavailable on the writer\")\n}\n\nfunc encoderGzip(w io.Writer, level int) io.Writer {\n\tgw, err := gzip.NewWriterLevel(w, level)\n\tif err != nil {\n\t\treturn nil\n\t}\n\treturn gw\n}\n\nfunc encoderDeflate(w io.Writer, level int) io.Writer {\n\tdw, err := flate.NewWriter(w, level)\n\tif err != nil {\n\t\treturn nil\n\t}\n\treturn dw\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/content_charset.go",
    "content": "package middleware\n\nimport (\n\t\"net/http\"\n\t\"strings\"\n)\n\n// ContentCharset generates a handler that writes a 415 Unsupported Media Type response if none of the charsets match.\n// An empty charset will allow requests with no Content-Type header or no specified charset.\nfunc ContentCharset(charsets ...string) func(next http.Handler) http.Handler {\n\tfor i, c := range charsets {\n\t\tcharsets[i] = strings.ToLower(c)\n\t}\n\n\treturn func(next http.Handler) http.Handler {\n\t\treturn http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\tif !contentEncoding(r.Header.Get(\"Content-Type\"), charsets...) {\n\t\t\t\tw.WriteHeader(http.StatusUnsupportedMediaType)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tnext.ServeHTTP(w, r)\n\t\t})\n\t}\n}\n\n// Check the content encoding against a list of acceptable values.\nfunc contentEncoding(ce string, charsets ...string) bool {\n\t_, ce = split(strings.ToLower(ce), \";\")\n\t_, ce = split(ce, \"charset=\")\n\tce, _ = split(ce, \";\")\n\tfor _, c := range charsets {\n\t\tif ce == c {\n\t\t\treturn true\n\t\t}\n\t}\n\n\treturn false\n}\n\n// Split a string in two parts, cleaning any whitespace.\nfunc split(str, sep string) (string, string) {\n\tvar a, b string\n\tvar parts = strings.SplitN(str, sep, 2)\n\ta = strings.TrimSpace(parts[0])\n\tif len(parts) == 2 {\n\t\tb = strings.TrimSpace(parts[1])\n\t}\n\n\treturn a, b\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/content_encoding.go",
    "content": "package middleware\n\nimport (\n\t\"net/http\"\n\t\"strings\"\n)\n\n// AllowContentEncoding enforces a whitelist of request Content-Encoding otherwise responds\n// with a 415 Unsupported Media Type status.\nfunc AllowContentEncoding(contentEncoding ...string) func(next http.Handler) http.Handler {\n\tallowedEncodings := make(map[string]struct{}, len(contentEncoding))\n\tfor _, encoding := range contentEncoding {\n\t\tallowedEncodings[strings.TrimSpace(strings.ToLower(encoding))] = struct{}{}\n\t}\n\treturn func(next http.Handler) http.Handler {\n\t\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\t\trequestEncodings := r.Header[\"Content-Encoding\"]\n\t\t\t// skip check for empty content body or no Content-Encoding\n\t\t\tif r.ContentLength == 0 {\n\t\t\t\tnext.ServeHTTP(w, r)\n\t\t\t\treturn\n\t\t\t}\n\t\t\t// All encodings in the request must be allowed\n\t\t\tfor _, encoding := range requestEncodings {\n\t\t\t\tif _, ok := allowedEncodings[strings.TrimSpace(strings.ToLower(encoding))]; !ok {\n\t\t\t\t\tw.WriteHeader(http.StatusUnsupportedMediaType)\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t\tnext.ServeHTTP(w, r)\n\t\t}\n\t\treturn http.HandlerFunc(fn)\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/content_type.go",
    "content": "package middleware\n\nimport (\n\t\"net/http\"\n\t\"strings\"\n)\n\n// SetHeader is a convenience handler to set a response header key/value\nfunc SetHeader(key, value string) func(next http.Handler) http.Handler {\n\treturn func(next http.Handler) http.Handler {\n\t\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\t\tw.Header().Set(key, value)\n\t\t\tnext.ServeHTTP(w, r)\n\t\t}\n\t\treturn http.HandlerFunc(fn)\n\t}\n}\n\n// AllowContentType enforces a whitelist of request Content-Types otherwise responds\n// with a 415 Unsupported Media Type status.\nfunc AllowContentType(contentTypes ...string) func(next http.Handler) http.Handler {\n\tcT := []string{}\n\tfor _, t := range contentTypes {\n\t\tcT = append(cT, strings.ToLower(t))\n\t}\n\n\treturn func(next http.Handler) http.Handler {\n\t\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\t\tif r.ContentLength == 0 {\n\t\t\t\t// skip check for empty content body\n\t\t\t\tnext.ServeHTTP(w, r)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\ts := strings.ToLower(strings.TrimSpace(r.Header.Get(\"Content-Type\")))\n\t\t\tif i := strings.Index(s, \";\"); i > -1 {\n\t\t\t\ts = s[0:i]\n\t\t\t}\n\n\t\t\tfor _, t := range cT {\n\t\t\t\tif t == s {\n\t\t\t\t\tnext.ServeHTTP(w, r)\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tw.WriteHeader(http.StatusUnsupportedMediaType)\n\t\t}\n\t\treturn http.HandlerFunc(fn)\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/get_head.go",
    "content": "package middleware\n\nimport (\n\t\"net/http\"\n\n\t\"github.com/go-chi/chi\"\n)\n\n// GetHead automatically routes undefined HEAD requests to GET handlers.\nfunc GetHead(next http.Handler) http.Handler {\n\treturn http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tif r.Method == \"HEAD\" {\n\t\t\trctx := chi.RouteContext(r.Context())\n\t\t\troutePath := rctx.RoutePath\n\t\t\tif routePath == \"\" {\n\t\t\t\tif r.URL.RawPath != \"\" {\n\t\t\t\t\troutePath = r.URL.RawPath\n\t\t\t\t} else {\n\t\t\t\t\troutePath = r.URL.Path\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Temporary routing context to look-ahead before routing the request\n\t\t\ttctx := chi.NewRouteContext()\n\n\t\t\t// Attempt to find a HEAD handler for the routing path; if not found, traverse\n\t\t\t// the router as though it's a GET route, but proceed with the request\n\t\t\t// with the HEAD method.\n\t\t\tif !rctx.Routes.Match(tctx, \"HEAD\", routePath) {\n\t\t\t\trctx.RouteMethod = \"GET\"\n\t\t\t\trctx.RoutePath = routePath\n\t\t\t\tnext.ServeHTTP(w, r)\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\n\t\tnext.ServeHTTP(w, r)\n\t})\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/heartbeat.go",
    "content": "package middleware\n\nimport (\n\t\"net/http\"\n\t\"strings\"\n)\n\n// Heartbeat is an endpoint middleware useful for setting up a path like\n// `/ping` that load balancers or external uptime-testing services\n// can request before hitting any routes. It's also convenient\n// to place this above ACL middlewares.\nfunc Heartbeat(endpoint string) func(http.Handler) http.Handler {\n\tf := func(h http.Handler) http.Handler {\n\t\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\t\tif r.Method == \"GET\" && strings.EqualFold(r.URL.Path, endpoint) {\n\t\t\t\tw.Header().Set(\"Content-Type\", \"text/plain\")\n\t\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\t\tw.Write([]byte(\".\"))\n\t\t\t\treturn\n\t\t\t}\n\t\t\th.ServeHTTP(w, r)\n\t\t}\n\t\treturn http.HandlerFunc(fn)\n\t}\n\treturn f\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/logger.go",
    "content": "package middleware\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"log\"\n\t\"net/http\"\n\t\"os\"\n\t\"time\"\n)\n\nvar (\n\t// LogEntryCtxKey is the context.Context key to store the request log entry.\n\tLogEntryCtxKey = &contextKey{\"LogEntry\"}\n\n\t// DefaultLogger is called by the Logger middleware handler to log each request.\n\t// It's made a package-level variable so that it can be reconfigured for custom\n\t// logging configurations.\n\tDefaultLogger = RequestLogger(&DefaultLogFormatter{Logger: log.New(os.Stdout, \"\", log.LstdFlags), NoColor: false})\n)\n\n// Logger is a middleware that logs the start and end of each request, along\n// with some useful data about what was requested, what the response status was,\n// and how long it took to return. When standard output is a TTY, Logger will\n// print in color, otherwise it will print in black and white. Logger prints a\n// request ID if one is provided.\n//\n// Alternatively, look at https://github.com/goware/httplog for a more in-depth\n// http logger with structured logging support.\nfunc Logger(next http.Handler) http.Handler {\n\treturn DefaultLogger(next)\n}\n\n// RequestLogger returns a logger handler using a custom LogFormatter.\nfunc RequestLogger(f LogFormatter) func(next http.Handler) http.Handler {\n\treturn func(next http.Handler) http.Handler {\n\t\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\t\tentry := f.NewLogEntry(r)\n\t\t\tww := NewWrapResponseWriter(w, r.ProtoMajor)\n\n\t\t\tt1 := time.Now()\n\t\t\tdefer func() {\n\t\t\t\tentry.Write(ww.Status(), ww.BytesWritten(), ww.Header(), time.Since(t1), nil)\n\t\t\t}()\n\n\t\t\tnext.ServeHTTP(ww, WithLogEntry(r, entry))\n\t\t}\n\t\treturn http.HandlerFunc(fn)\n\t}\n}\n\n// LogFormatter initiates the beginning of a new LogEntry per request.\n// See DefaultLogFormatter for an example implementation.\ntype LogFormatter interface {\n\tNewLogEntry(r *http.Request) LogEntry\n}\n\n// LogEntry records the final log when a request 
completes.\n// See defaultLogEntry for an example implementation.\ntype LogEntry interface {\n\tWrite(status, bytes int, header http.Header, elapsed time.Duration, extra interface{})\n\tPanic(v interface{}, stack []byte)\n}\n\n// GetLogEntry returns the in-context LogEntry for a request.\nfunc GetLogEntry(r *http.Request) LogEntry {\n\tentry, _ := r.Context().Value(LogEntryCtxKey).(LogEntry)\n\treturn entry\n}\n\n// WithLogEntry sets the in-context LogEntry for a request.\nfunc WithLogEntry(r *http.Request, entry LogEntry) *http.Request {\n\tr = r.WithContext(context.WithValue(r.Context(), LogEntryCtxKey, entry))\n\treturn r\n}\n\n// LoggerInterface accepts printing to stdlib logger or compatible logger.\ntype LoggerInterface interface {\n\tPrint(v ...interface{})\n}\n\n// DefaultLogFormatter is a simple logger that implements a LogFormatter.\ntype DefaultLogFormatter struct {\n\tLogger  LoggerInterface\n\tNoColor bool\n}\n\n// NewLogEntry creates a new LogEntry for the request.\nfunc (l *DefaultLogFormatter) NewLogEntry(r *http.Request) LogEntry {\n\tuseColor := !l.NoColor\n\tentry := &defaultLogEntry{\n\t\tDefaultLogFormatter: l,\n\t\trequest:             r,\n\t\tbuf:                 &bytes.Buffer{},\n\t\tuseColor:            useColor,\n\t}\n\n\treqID := GetReqID(r.Context())\n\tif reqID != \"\" {\n\t\tcW(entry.buf, useColor, nYellow, \"[%s] \", reqID)\n\t}\n\tcW(entry.buf, useColor, nCyan, \"\\\"\")\n\tcW(entry.buf, useColor, bMagenta, \"%s \", r.Method)\n\n\tscheme := \"http\"\n\tif r.TLS != nil {\n\t\tscheme = \"https\"\n\t}\n\tcW(entry.buf, useColor, nCyan, \"%s://%s%s %s\\\" \", scheme, r.Host, r.RequestURI, r.Proto)\n\n\tentry.buf.WriteString(\"from \")\n\tentry.buf.WriteString(r.RemoteAddr)\n\tentry.buf.WriteString(\" - \")\n\n\treturn entry\n}\n\ntype defaultLogEntry struct {\n\t*DefaultLogFormatter\n\trequest  *http.Request\n\tbuf      *bytes.Buffer\n\tuseColor bool\n}\n\nfunc (l *defaultLogEntry) Write(status, bytes int, header http.Header, elapsed 
time.Duration, extra interface{}) {\n\tswitch {\n\tcase status < 200:\n\t\tcW(l.buf, l.useColor, bBlue, \"%03d\", status)\n\tcase status < 300:\n\t\tcW(l.buf, l.useColor, bGreen, \"%03d\", status)\n\tcase status < 400:\n\t\tcW(l.buf, l.useColor, bCyan, \"%03d\", status)\n\tcase status < 500:\n\t\tcW(l.buf, l.useColor, bYellow, \"%03d\", status)\n\tdefault:\n\t\tcW(l.buf, l.useColor, bRed, \"%03d\", status)\n\t}\n\n\tcW(l.buf, l.useColor, bBlue, \" %dB\", bytes)\n\n\tl.buf.WriteString(\" in \")\n\tif elapsed < 500*time.Millisecond {\n\t\tcW(l.buf, l.useColor, nGreen, \"%s\", elapsed)\n\t} else if elapsed < 5*time.Second {\n\t\tcW(l.buf, l.useColor, nYellow, \"%s\", elapsed)\n\t} else {\n\t\tcW(l.buf, l.useColor, nRed, \"%s\", elapsed)\n\t}\n\n\tl.Logger.Print(l.buf.String())\n}\n\nfunc (l *defaultLogEntry) Panic(v interface{}, stack []byte) {\n\tPrintPrettyStack(v)\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/middleware.go",
    "content": "package middleware\n\nimport \"net/http\"\n\n// New will create a new middleware handler from a http.Handler.\nfunc New(h http.Handler) func(next http.Handler) http.Handler {\n\treturn func(next http.Handler) http.Handler {\n\t\treturn http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\t\th.ServeHTTP(w, r)\n\t\t})\n\t}\n}\n\n// contextKey is a value for use with context.WithValue. It's used as\n// a pointer so it fits in an interface{} without allocation. This technique\n// for defining context keys was copied from Go 1.7's new use of context in net/http.\ntype contextKey struct {\n\tname string\n}\n\nfunc (k *contextKey) String() string {\n\treturn \"chi/middleware context value \" + k.name\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/nocache.go",
    "content": "package middleware\n\n// Ported from Goji's middleware, source:\n// https://github.com/zenazn/goji/tree/master/web/middleware\n\nimport (\n\t\"net/http\"\n\t\"time\"\n)\n\n// Unix epoch time\nvar epoch = time.Unix(0, 0).Format(time.RFC1123)\n\n// Taken from https://github.com/mytrile/nocache\nvar noCacheHeaders = map[string]string{\n\t\"Expires\":         epoch,\n\t\"Cache-Control\":   \"no-cache, no-store, no-transform, must-revalidate, private, max-age=0\",\n\t\"Pragma\":          \"no-cache\",\n\t\"X-Accel-Expires\": \"0\",\n}\n\nvar etagHeaders = []string{\n\t\"ETag\",\n\t\"If-Modified-Since\",\n\t\"If-Match\",\n\t\"If-None-Match\",\n\t\"If-Range\",\n\t\"If-Unmodified-Since\",\n}\n\n// NoCache is a simple piece of middleware that sets a number of HTTP headers to prevent\n// a router (or subrouter) from being cached by an upstream proxy and/or client.\n//\n// As per http://wiki.nginx.org/HttpProxyModule - NoCache sets:\n//      Expires: Thu, 01 Jan 1970 00:00:00 UTC\n//      Cache-Control: no-cache, private, max-age=0\n//      X-Accel-Expires: 0\n//      Pragma: no-cache (for HTTP/1.0 proxies/clients)\nfunc NoCache(h http.Handler) http.Handler {\n\tfn := func(w http.ResponseWriter, r *http.Request) {\n\n\t\t// Delete any ETag headers that may have been set\n\t\tfor _, v := range etagHeaders {\n\t\t\tif r.Header.Get(v) != \"\" {\n\t\t\t\tr.Header.Del(v)\n\t\t\t}\n\t\t}\n\n\t\t// Set our NoCache headers\n\t\tfor k, v := range noCacheHeaders {\n\t\t\tw.Header().Set(k, v)\n\t\t}\n\n\t\th.ServeHTTP(w, r)\n\t}\n\n\treturn http.HandlerFunc(fn)\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/profiler.go",
    "content": "package middleware\n\nimport (\n\t\"expvar\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/http/pprof\"\n\n\t\"github.com/go-chi/chi\"\n)\n\n// Profiler is a convenient subrouter used for mounting net/http/pprof. ie.\n//\n//  func MyService() http.Handler {\n//    r := chi.NewRouter()\n//    // ..middlewares\n//    r.Mount(\"/debug\", middleware.Profiler())\n//    // ..routes\n//    return r\n//  }\nfunc Profiler() http.Handler {\n\tr := chi.NewRouter()\n\tr.Use(NoCache)\n\n\tr.Get(\"/\", func(w http.ResponseWriter, r *http.Request) {\n\t\thttp.Redirect(w, r, r.RequestURI+\"/pprof/\", 301)\n\t})\n\tr.HandleFunc(\"/pprof\", func(w http.ResponseWriter, r *http.Request) {\n\t\thttp.Redirect(w, r, r.RequestURI+\"/\", 301)\n\t})\n\n\tr.HandleFunc(\"/pprof/*\", pprof.Index)\n\tr.HandleFunc(\"/pprof/cmdline\", pprof.Cmdline)\n\tr.HandleFunc(\"/pprof/profile\", pprof.Profile)\n\tr.HandleFunc(\"/pprof/symbol\", pprof.Symbol)\n\tr.HandleFunc(\"/pprof/trace\", pprof.Trace)\n\tr.HandleFunc(\"/vars\", expVars)\n\n\treturn r\n}\n\n// Replicated from expvar.go as not public.\nfunc expVars(w http.ResponseWriter, r *http.Request) {\n\tfirst := true\n\tw.Header().Set(\"Content-Type\", \"application/json\")\n\tfmt.Fprintf(w, \"{\\n\")\n\texpvar.Do(func(kv expvar.KeyValue) {\n\t\tif !first {\n\t\t\tfmt.Fprintf(w, \",\\n\")\n\t\t}\n\t\tfirst = false\n\t\tfmt.Fprintf(w, \"%q: %s\", kv.Key, kv.Value)\n\t})\n\tfmt.Fprintf(w, \"\\n}\\n\")\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/realip.go",
    "content": "package middleware\n\n// Ported from Goji's middleware, source:\n// https://github.com/zenazn/goji/tree/master/web/middleware\n\nimport (\n\t\"net/http\"\n\t\"strings\"\n)\n\nvar xForwardedFor = http.CanonicalHeaderKey(\"X-Forwarded-For\")\nvar xRealIP = http.CanonicalHeaderKey(\"X-Real-IP\")\n\n// RealIP is a middleware that sets a http.Request's RemoteAddr to the results\n// of parsing either the X-Forwarded-For header or the X-Real-IP header (in that\n// order).\n//\n// This middleware should be inserted fairly early in the middleware stack to\n// ensure that subsequent layers (e.g., request loggers) which examine the\n// RemoteAddr will see the intended value.\n//\n// You should only use this middleware if you can trust the headers passed to\n// you (in particular, the two headers this middleware uses), for example\n// because you have placed a reverse proxy like HAProxy or nginx in front of\n// chi. If your reverse proxies are configured to pass along arbitrary header\n// values from the client, or if you use this middleware without a reverse\n// proxy, malicious clients will be able to make you very sad (or, depending on\n// how you're using RemoteAddr, vulnerable to an attack of some sort).\nfunc RealIP(h http.Handler) http.Handler {\n\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\tif rip := realIP(r); rip != \"\" {\n\t\t\tr.RemoteAddr = rip\n\t\t}\n\t\th.ServeHTTP(w, r)\n\t}\n\n\treturn http.HandlerFunc(fn)\n}\n\nfunc realIP(r *http.Request) string {\n\tvar ip string\n\n\tif xrip := r.Header.Get(xRealIP); xrip != \"\" {\n\t\tip = xrip\n\t} else if xff := r.Header.Get(xForwardedFor); xff != \"\" {\n\t\ti := strings.Index(xff, \", \")\n\t\tif i == -1 {\n\t\t\ti = len(xff)\n\t\t}\n\t\tip = xff[:i]\n\t}\n\n\treturn ip\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/recoverer.go",
    "content": "package middleware\n\n// The original work was derived from Goji's middleware, source:\n// https://github.com/zenazn/goji/tree/master/web/middleware\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"runtime/debug\"\n\t\"strings\"\n)\n\n// Recoverer is a middleware that recovers from panics, logs the panic (and a\n// backtrace), and returns an HTTP 500 (Internal Server Error) status if\n// possible. Recoverer prints a request ID if one is provided.\n//\n// Alternatively, look at the https://github.com/pressly/lg middleware packages.\nfunc Recoverer(next http.Handler) http.Handler {\n\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\tdefer func() {\n\t\t\tif rvr := recover(); rvr != nil && rvr != http.ErrAbortHandler {\n\n\t\t\t\tlogEntry := GetLogEntry(r)\n\t\t\t\tif logEntry != nil {\n\t\t\t\t\tlogEntry.Panic(rvr, debug.Stack())\n\t\t\t\t} else {\n\t\t\t\t\tPrintPrettyStack(rvr)\n\t\t\t\t}\n\n\t\t\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\t\t}\n\t\t}()\n\n\t\tnext.ServeHTTP(w, r)\n\t}\n\n\treturn http.HandlerFunc(fn)\n}\n\nfunc PrintPrettyStack(rvr interface{}) {\n\tdebugStack := debug.Stack()\n\ts := prettyStack{}\n\tout, err := s.parse(debugStack, rvr)\n\tif err == nil {\n\t\tos.Stderr.Write(out)\n\t} else {\n\t\t// print stdlib output as a fallback\n\t\tos.Stderr.Write(debugStack)\n\t}\n}\n\ntype prettyStack struct {\n}\n\nfunc (s prettyStack) parse(debugStack []byte, rvr interface{}) ([]byte, error) {\n\tvar err error\n\tuseColor := true\n\tbuf := &bytes.Buffer{}\n\n\tcW(buf, false, bRed, \"\\n\")\n\tcW(buf, useColor, bCyan, \" panic: \")\n\tcW(buf, useColor, bBlue, \"%v\", rvr)\n\tcW(buf, false, bWhite, \"\\n \\n\")\n\n\t// process debug stack info\n\tstack := strings.Split(string(debugStack), \"\\n\")\n\tlines := []string{}\n\n\t// locate panic line, as we may have nested panics\n\tfor i := len(stack) - 1; i > 0; i-- {\n\t\tlines = append(lines, stack[i])\n\t\tif strings.HasPrefix(stack[i], 
\"panic(0x\") {\n\t\t\tlines = lines[0 : len(lines)-2] // remove boilerplate\n\t\t\tbreak\n\t\t}\n\t}\n\n\t// reverse\n\tfor i := len(lines)/2 - 1; i >= 0; i-- {\n\t\topp := len(lines) - 1 - i\n\t\tlines[i], lines[opp] = lines[opp], lines[i]\n\t}\n\n\t// decorate\n\tfor i, line := range lines {\n\t\tlines[i], err = s.decorateLine(line, useColor, i)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tfor _, l := range lines {\n\t\tfmt.Fprintf(buf, \"%s\", l)\n\t}\n\treturn buf.Bytes(), nil\n}\n\nfunc (s prettyStack) decorateLine(line string, useColor bool, num int) (string, error) {\n\tline = strings.TrimSpace(line)\n\tif strings.HasPrefix(line, \"\\t\") || strings.Contains(line, \".go:\") {\n\t\treturn s.decorateSourceLine(line, useColor, num)\n\t} else if strings.HasSuffix(line, \")\") {\n\t\treturn s.decorateFuncCallLine(line, useColor, num)\n\t} else {\n\t\tif strings.HasPrefix(line, \"\\t\") {\n\t\t\treturn strings.Replace(line, \"\\t\", \"      \", 1), nil\n\t\t} else {\n\t\t\treturn fmt.Sprintf(\"    %s\\n\", line), nil\n\t\t}\n\t}\n}\n\nfunc (s prettyStack) decorateFuncCallLine(line string, useColor bool, num int) (string, error) {\n\tidx := strings.LastIndex(line, \"(\")\n\tif idx < 0 {\n\t\treturn \"\", errors.New(\"not a func call line\")\n\t}\n\n\tbuf := &bytes.Buffer{}\n\tpkg := line[0:idx]\n\t// addr := line[idx:]\n\tmethod := \"\"\n\n\tidx = strings.LastIndex(pkg, string(os.PathSeparator))\n\tif idx < 0 {\n\t\tidx = strings.Index(pkg, \".\")\n\t\tmethod = pkg[idx:]\n\t\tpkg = pkg[0:idx]\n\t} else {\n\t\tmethod = pkg[idx+1:]\n\t\tpkg = pkg[0 : idx+1]\n\t\tidx = strings.Index(method, \".\")\n\t\tpkg += method[0:idx]\n\t\tmethod = method[idx:]\n\t}\n\tpkgColor := nYellow\n\tmethodColor := bGreen\n\n\tif num == 0 {\n\t\tcW(buf, useColor, bRed, \" -> \")\n\t\tpkgColor = bMagenta\n\t\tmethodColor = bRed\n\t} else {\n\t\tcW(buf, useColor, bWhite, \"    \")\n\t}\n\tcW(buf, useColor, pkgColor, \"%s\", pkg)\n\tcW(buf, useColor, methodColor, \"%s\\n\", 
method)\n\t// cW(buf, useColor, nBlack, \"%s\", addr)\n\treturn buf.String(), nil\n}\n\nfunc (s prettyStack) decorateSourceLine(line string, useColor bool, num int) (string, error) {\n\tidx := strings.LastIndex(line, \".go:\")\n\tif idx < 0 {\n\t\treturn \"\", errors.New(\"not a source line\")\n\t}\n\n\tbuf := &bytes.Buffer{}\n\tpath := line[0 : idx+3]\n\tlineno := line[idx+3:]\n\n\tidx = strings.LastIndex(path, string(os.PathSeparator))\n\tdir := path[0 : idx+1]\n\tfile := path[idx+1:]\n\n\tidx = strings.Index(lineno, \" \")\n\tif idx > 0 {\n\t\tlineno = lineno[0:idx]\n\t}\n\tfileColor := bCyan\n\tlineColor := bGreen\n\n\tif num == 1 {\n\t\tcW(buf, useColor, bRed, \" ->   \")\n\t\tfileColor = bRed\n\t\tlineColor = bMagenta\n\t} else {\n\t\tcW(buf, false, bWhite, \"      \")\n\t}\n\tcW(buf, useColor, bWhite, \"%s\", dir)\n\tcW(buf, useColor, fileColor, \"%s\", file)\n\tcW(buf, useColor, lineColor, \"%s\", lineno)\n\tif num == 1 {\n\t\tcW(buf, false, bWhite, \"\\n\")\n\t}\n\tcW(buf, false, bWhite, \"\\n\")\n\n\treturn buf.String(), nil\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/request_id.go",
    "content": "package middleware\n\n// Ported from Goji's middleware, source:\n// https://github.com/zenazn/goji/tree/master/web/middleware\n\nimport (\n\t\"context\"\n\t\"crypto/rand\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"os\"\n\t\"strings\"\n\t\"sync/atomic\"\n)\n\n// Key to use when setting the request ID.\ntype ctxKeyRequestID int\n\n// RequestIDKey is the key that holds the unique request ID in a request context.\nconst RequestIDKey ctxKeyRequestID = 0\n\n// RequestIDHeader is the name of the HTTP Header which contains the request id.\n// Exported so that it can be changed by developers\nvar RequestIDHeader = \"X-Request-Id\"\n\nvar prefix string\nvar reqid uint64\n\n// A quick note on the statistics here: we're trying to calculate the chance that\n// two randomly generated base62 prefixes will collide. We use the formula from\n// http://en.wikipedia.org/wiki/Birthday_problem\n//\n// P[m, n] \\approx 1 - e^{-m^2/2n}\n//\n// We ballpark an upper bound for $m$ by imagining (for whatever reason) a server\n// that restarts every second over 10 years, for $m = 86400 * 365 * 10 = 315360000$\n//\n// For a $k$ character base-62 identifier, we have $n(k) = 62^k$\n//\n// Plugging this in, we find $P[m, n(10)] \\approx 5.75%$, which is good enough for\n// our purposes, and is surely more than anyone would ever need in practice -- a\n// process that is rebooted a handful of times a day for a hundred years has less\n// than a millionth of a percent chance of generating two colliding IDs.\n\nfunc init() {\n\thostname, err := os.Hostname()\n\tif hostname == \"\" || err != nil {\n\t\thostname = \"localhost\"\n\t}\n\tvar buf [12]byte\n\tvar b64 string\n\tfor len(b64) < 10 {\n\t\trand.Read(buf[:])\n\t\tb64 = base64.StdEncoding.EncodeToString(buf[:])\n\t\tb64 = strings.NewReplacer(\"+\", \"\", \"/\", \"\").Replace(b64)\n\t}\n\n\tprefix = fmt.Sprintf(\"%s/%s\", hostname, b64[0:10])\n}\n\n// RequestID is a middleware that injects a request ID into the context 
of each\n// request. A request ID is a string of the form \"host.example.com/random-0001\",\n// where \"random\" is a base62 random string that uniquely identifies this go\n// process, and where the last number is an atomically incremented request\n// counter.\nfunc RequestID(next http.Handler) http.Handler {\n\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\tctx := r.Context()\n\t\trequestID := r.Header.Get(RequestIDHeader)\n\t\tif requestID == \"\" {\n\t\t\tmyid := atomic.AddUint64(&reqid, 1)\n\t\t\trequestID = fmt.Sprintf(\"%s-%06d\", prefix, myid)\n\t\t}\n\t\tctx = context.WithValue(ctx, RequestIDKey, requestID)\n\t\tnext.ServeHTTP(w, r.WithContext(ctx))\n\t}\n\treturn http.HandlerFunc(fn)\n}\n\n// GetReqID returns a request ID from the given context if one is present.\n// Returns the empty string if a request ID cannot be found.\nfunc GetReqID(ctx context.Context) string {\n\tif ctx == nil {\n\t\treturn \"\"\n\t}\n\tif reqID, ok := ctx.Value(RequestIDKey).(string); ok {\n\t\treturn reqID\n\t}\n\treturn \"\"\n}\n\n// NextRequestID generates the next request ID in the sequence.\nfunc NextRequestID() uint64 {\n\treturn atomic.AddUint64(&reqid, 1)\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/route_headers.go",
    "content": "package middleware\n\nimport (\n\t\"net/http\"\n\t\"strings\"\n)\n\n// RouteHeaders is a neat little header-based router that allows you to direct\n// the flow of a request through a middleware stack based on a request header.\n//\n// For example, lets say you'd like to setup multiple routers depending on the\n// request Host header, you could then do something as so:\n//\n// r := chi.NewRouter()\n// rSubdomain := chi.NewRouter()\n//\n// r.Use(middleware.RouteHeaders().\n//   Route(\"Host\", \"example.com\", middleware.New(r)).\n//   Route(\"Host\", \"*.example.com\", middleware.New(rSubdomain)).\n//   Handler)\n//\n// r.Get(\"/\", h)\n// rSubdomain.Get(\"/\", h2)\n//\n//\n// Another example, imagine you want to setup multiple CORS handlers, where for\n// your origin servers you allow authorized requests, but for third-party public\n// requests, authorization is disabled.\n//\n// r := chi.NewRouter()\n//\n// r.Use(middleware.RouteHeaders().\n//   Route(\"Origin\", \"https://app.skyweaver.net\", cors.Handler(cors.Options{\n// \t   AllowedOrigins:   []string{\"https://api.skyweaver.net\"},\n// \t   AllowedMethods:   []string{\"GET\", \"POST\", \"PUT\", \"DELETE\", \"OPTIONS\"},\n// \t   AllowedHeaders:   []string{\"Accept\", \"Authorization\", \"Content-Type\"},\n// \t   AllowCredentials: true, // <----------<<< allow credentials\n//   })).\n//   Route(\"Origin\", \"*\", cors.Handler(cors.Options{\n// \t   AllowedOrigins:   []string{\"*\"},\n// \t   AllowedMethods:   []string{\"GET\", \"POST\", \"PUT\", \"DELETE\", \"OPTIONS\"},\n// \t   AllowedHeaders:   []string{\"Accept\", \"Content-Type\"},\n// \t   AllowCredentials: false, // <----------<<< do not allow credentials\n//   })).\n//   Handler)\n//\nfunc RouteHeaders() HeaderRouter {\n\treturn HeaderRouter{}\n}\n\ntype HeaderRouter map[string][]HeaderRoute\n\nfunc (hr HeaderRouter) Route(header string, match string, middlewareHandler func(next http.Handler) http.Handler) HeaderRouter {\n\theader = 
strings.ToLower(header)\n\tk := hr[header]\n\tif k == nil {\n\t\thr[header] = []HeaderRoute{}\n\t}\n\thr[header] = append(hr[header], HeaderRoute{MatchOne: NewPattern(match), Middleware: middlewareHandler})\n\treturn hr\n}\n\nfunc (hr HeaderRouter) RouteAny(header string, match []string, middlewareHandler func(next http.Handler) http.Handler) HeaderRouter {\n\theader = strings.ToLower(header)\n\tk := hr[header]\n\tif k == nil {\n\t\thr[header] = []HeaderRoute{}\n\t}\n\tpatterns := []Pattern{}\n\tfor _, m := range match {\n\t\tpatterns = append(patterns, NewPattern(m))\n\t}\n\thr[header] = append(hr[header], HeaderRoute{MatchAny: patterns, Middleware: middlewareHandler})\n\treturn hr\n}\n\nfunc (hr HeaderRouter) RouteDefault(handler func(next http.Handler) http.Handler) HeaderRouter {\n\thr[\"*\"] = []HeaderRoute{{Middleware: handler}}\n\treturn hr\n}\n\nfunc (hr HeaderRouter) Handler(next http.Handler) http.Handler {\n\treturn http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tif len(hr) == 0 {\n\t\t\t// skip if no routes set; return so the handler is not invoked twice\n\t\t\tnext.ServeHTTP(w, r)\n\t\t\treturn\n\t\t}\n\n\t\t// find first matching header route, and continue\n\t\tfor header, matchers := range hr {\n\t\t\theaderValue := r.Header.Get(header)\n\t\t\tif headerValue == \"\" {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\theaderValue = strings.ToLower(headerValue)\n\t\t\tfor _, matcher := range matchers {\n\t\t\t\tif matcher.IsMatch(headerValue) {\n\t\t\t\t\tmatcher.Middleware(next).ServeHTTP(w, r)\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// if no match, check for \"*\" default route\n\t\tmatcher, ok := hr[\"*\"]\n\t\tif !ok || matcher[0].Middleware == nil {\n\t\t\tnext.ServeHTTP(w, r)\n\t\t\treturn\n\t\t}\n\t\tmatcher[0].Middleware(next).ServeHTTP(w, r)\n\t})\n}\n\ntype HeaderRoute struct {\n\tMatchAny   []Pattern\n\tMatchOne   Pattern\n\tMiddleware func(next http.Handler) http.Handler\n}\n\nfunc (r HeaderRoute) IsMatch(value string) bool {\n\tif len(r.MatchAny) > 0 {\n\t\tfor _, m := range r.MatchAny {\n\t\t\tif m.Match(value) {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t} else if r.MatchOne.Match(value) {\n\t\treturn true\n\t}\n\treturn false\n}\n\ntype Pattern struct {\n\tprefix   string\n\tsuffix   string\n\twildcard bool\n}\n\nfunc NewPattern(value string) Pattern {\n\tp := Pattern{}\n\tif i := strings.IndexByte(value, '*'); i >= 0 {\n\t\tp.wildcard = true\n\t\tp.prefix = value[0:i]\n\t\tp.suffix = value[i+1:]\n\t} else {\n\t\tp.prefix = value\n\t}\n\treturn p\n}\n\nfunc (p Pattern) Match(v string) bool {\n\tif !p.wildcard {\n\t\tif p.prefix == v {\n\t\t\treturn true\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn len(v) >= len(p.prefix+p.suffix) && strings.HasPrefix(v, p.prefix) && strings.HasSuffix(v, p.suffix)\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/strip.go",
    "content": "package middleware\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\n\t\"github.com/go-chi/chi\"\n)\n\n// StripSlashes is a middleware that will match request paths with a trailing\n// slash, strip it from the path and continue routing through the mux, if a route\n// matches, then it will serve the handler.\nfunc StripSlashes(next http.Handler) http.Handler {\n\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\tvar path string\n\t\trctx := chi.RouteContext(r.Context())\n\t\tif rctx.RoutePath != \"\" {\n\t\t\tpath = rctx.RoutePath\n\t\t} else {\n\t\t\tpath = r.URL.Path\n\t\t}\n\t\tif len(path) > 1 && path[len(path)-1] == '/' {\n\t\t\trctx.RoutePath = path[:len(path)-1]\n\t\t}\n\t\tnext.ServeHTTP(w, r)\n\t}\n\treturn http.HandlerFunc(fn)\n}\n\n// RedirectSlashes is a middleware that will match request paths with a trailing\n// slash and redirect to the same path, less the trailing slash.\n//\n// NOTE: RedirectSlashes middleware is *incompatible* with http.FileServer,\n// see https://github.com/go-chi/chi/issues/343\nfunc RedirectSlashes(next http.Handler) http.Handler {\n\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\tvar path string\n\t\trctx := chi.RouteContext(r.Context())\n\t\tif rctx.RoutePath != \"\" {\n\t\t\tpath = rctx.RoutePath\n\t\t} else {\n\t\t\tpath = r.URL.Path\n\t\t}\n\t\tif len(path) > 1 && path[len(path)-1] == '/' {\n\t\t\tif r.URL.RawQuery != \"\" {\n\t\t\t\tpath = fmt.Sprintf(\"%s?%s\", path[:len(path)-1], r.URL.RawQuery)\n\t\t\t} else {\n\t\t\t\tpath = path[:len(path)-1]\n\t\t\t}\n\t\t\thttp.Redirect(w, r, path, 301)\n\t\t\treturn\n\t\t}\n\t\tnext.ServeHTTP(w, r)\n\t}\n\treturn http.HandlerFunc(fn)\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/terminal.go",
    "content": "package middleware\n\n// Ported from Goji's middleware, source:\n// https://github.com/zenazn/goji/tree/master/web/middleware\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n)\n\nvar (\n\t// Normal colors\n\tnBlack   = []byte{'\\033', '[', '3', '0', 'm'}\n\tnRed     = []byte{'\\033', '[', '3', '1', 'm'}\n\tnGreen   = []byte{'\\033', '[', '3', '2', 'm'}\n\tnYellow  = []byte{'\\033', '[', '3', '3', 'm'}\n\tnBlue    = []byte{'\\033', '[', '3', '4', 'm'}\n\tnMagenta = []byte{'\\033', '[', '3', '5', 'm'}\n\tnCyan    = []byte{'\\033', '[', '3', '6', 'm'}\n\tnWhite   = []byte{'\\033', '[', '3', '7', 'm'}\n\t// Bright colors\n\tbBlack   = []byte{'\\033', '[', '3', '0', ';', '1', 'm'}\n\tbRed     = []byte{'\\033', '[', '3', '1', ';', '1', 'm'}\n\tbGreen   = []byte{'\\033', '[', '3', '2', ';', '1', 'm'}\n\tbYellow  = []byte{'\\033', '[', '3', '3', ';', '1', 'm'}\n\tbBlue    = []byte{'\\033', '[', '3', '4', ';', '1', 'm'}\n\tbMagenta = []byte{'\\033', '[', '3', '5', ';', '1', 'm'}\n\tbCyan    = []byte{'\\033', '[', '3', '6', ';', '1', 'm'}\n\tbWhite   = []byte{'\\033', '[', '3', '7', ';', '1', 'm'}\n\n\treset = []byte{'\\033', '[', '0', 'm'}\n)\n\nvar IsTTY bool\n\nfunc init() {\n\t// This is sort of cheating: if stdout is a character device, we assume\n\t// that means it's a TTY. 
Unfortunately, there are many non-TTY\n\t// character devices, but fortunately stdout is rarely set to any of\n\t// them.\n\t//\n\t// We could solve this properly by pulling in a dependency on\n\t// code.google.com/p/go.crypto/ssh/terminal, for instance, but as a\n\t// heuristic for whether to print in color or in black-and-white, I'd\n\t// really rather not.\n\tfi, err := os.Stdout.Stat()\n\tif err == nil {\n\t\tm := os.ModeDevice | os.ModeCharDevice\n\t\tIsTTY = fi.Mode()&m == m\n\t}\n}\n\n// colorWrite\nfunc cW(w io.Writer, useColor bool, color []byte, s string, args ...interface{}) {\n\tif IsTTY && useColor {\n\t\tw.Write(color)\n\t}\n\tfmt.Fprintf(w, s, args...)\n\tif IsTTY && useColor {\n\t\tw.Write(reset)\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/throttle.go",
    "content": "package middleware\n\nimport (\n\t\"net/http\"\n\t\"strconv\"\n\t\"time\"\n)\n\nconst (\n\terrCapacityExceeded = \"Server capacity exceeded.\"\n\terrTimedOut         = \"Timed out while waiting for a pending request to complete.\"\n\terrContextCanceled  = \"Context was canceled.\"\n)\n\nvar (\n\tdefaultBacklogTimeout = time.Second * 60\n)\n\n// ThrottleOpts represents a set of throttling options.\ntype ThrottleOpts struct {\n\tLimit          int\n\tBacklogLimit   int\n\tBacklogTimeout time.Duration\n\tRetryAfterFn   func(ctxDone bool) time.Duration\n}\n\n// Throttle is a middleware that limits the number of currently processed\n// requests at a time across all users. Note: Throttle is not a rate-limiter\n// per user; instead it just puts a ceiling on the number of currently\n// in-flight requests being processed from the point where the Throttle\n// middleware is mounted.\nfunc Throttle(limit int) func(http.Handler) http.Handler {\n\treturn ThrottleWithOpts(ThrottleOpts{Limit: limit, BacklogTimeout: defaultBacklogTimeout})\n}\n\n// ThrottleBacklog is a middleware that limits the number of currently processed\n// requests at a time and provides a backlog for holding a finite number of\n// pending requests.\nfunc ThrottleBacklog(limit int, backlogLimit int, backlogTimeout time.Duration) func(http.Handler) http.Handler {\n\treturn ThrottleWithOpts(ThrottleOpts{Limit: limit, BacklogLimit: backlogLimit, BacklogTimeout: backlogTimeout})\n}\n\n// ThrottleWithOpts is a middleware that limits the number of currently processed requests using the passed ThrottleOpts.\nfunc ThrottleWithOpts(opts ThrottleOpts) func(http.Handler) http.Handler {\n\tif opts.Limit < 1 {\n\t\tpanic(\"chi/middleware: Throttle expects limit > 0\")\n\t}\n\n\tif opts.BacklogLimit < 0 {\n\t\tpanic(\"chi/middleware: Throttle expects backlogLimit to be positive\")\n\t}\n\n\tt := throttler{\n\t\ttokens:         make(chan token, opts.Limit),\n\t\tbacklogTokens:  make(chan token, opts.Limit+opts.BacklogLimit),\n\t\tbacklogTimeout: opts.BacklogTimeout,\n\t\tretryAfterFn:   opts.RetryAfterFn,\n\t}\n\n\t// Filling tokens.\n\tfor i := 0; i < opts.Limit+opts.BacklogLimit; i++ {\n\t\tif i < opts.Limit {\n\t\t\tt.tokens <- token{}\n\t\t}\n\t\tt.backlogTokens <- token{}\n\t}\n\n\treturn func(next http.Handler) http.Handler {\n\t\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\t\tctx := r.Context()\n\n\t\t\tselect {\n\n\t\t\tcase <-ctx.Done():\n\t\t\t\tt.setRetryAfterHeaderIfNeeded(w, true)\n\t\t\t\thttp.Error(w, errContextCanceled, http.StatusServiceUnavailable)\n\t\t\t\treturn\n\n\t\t\tcase btok := <-t.backlogTokens:\n\t\t\t\ttimer := time.NewTimer(t.backlogTimeout)\n\n\t\t\t\tdefer func() {\n\t\t\t\t\tt.backlogTokens <- btok\n\t\t\t\t}()\n\n\t\t\t\tselect {\n\t\t\t\tcase <-timer.C:\n\t\t\t\t\tt.setRetryAfterHeaderIfNeeded(w, false)\n\t\t\t\t\thttp.Error(w, errTimedOut, http.StatusServiceUnavailable)\n\t\t\t\t\treturn\n\t\t\t\tcase <-ctx.Done():\n\t\t\t\t\ttimer.Stop()\n\t\t\t\t\tt.setRetryAfterHeaderIfNeeded(w, true)\n\t\t\t\t\thttp.Error(w, errContextCanceled, http.StatusServiceUnavailable)\n\t\t\t\t\treturn\n\t\t\t\tcase tok := <-t.tokens:\n\t\t\t\t\tdefer func() {\n\t\t\t\t\t\ttimer.Stop()\n\t\t\t\t\t\tt.tokens <- tok\n\t\t\t\t\t}()\n\t\t\t\t\tnext.ServeHTTP(w, r)\n\t\t\t\t}\n\t\t\t\treturn\n\n\t\t\tdefault:\n\t\t\t\tt.setRetryAfterHeaderIfNeeded(w, false)\n\t\t\t\thttp.Error(w, errCapacityExceeded, http.StatusServiceUnavailable)\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\n\t\treturn http.HandlerFunc(fn)\n\t}\n}\n\n// token represents a request that is being processed.\ntype token struct{}\n\n// throttler limits the number of currently processed requests at a time.\ntype throttler struct {\n\ttokens         chan token\n\tbacklogTokens  chan token\n\tbacklogTimeout time.Duration\n\tretryAfterFn   func(ctxDone bool) time.Duration\n}\n\n// setRetryAfterHeaderIfNeeded sets the Retry-After HTTP header if the corresponding retryAfterFn option of the throttler is initialized.\nfunc (t throttler) setRetryAfterHeaderIfNeeded(w http.ResponseWriter, ctxDone bool) {\n\tif t.retryAfterFn == nil {\n\t\treturn\n\t}\n\tw.Header().Set(\"Retry-After\", strconv.Itoa(int(t.retryAfterFn(ctxDone).Seconds())))\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/timeout.go",
    "content": "package middleware\n\nimport (\n\t\"context\"\n\t\"net/http\"\n\t\"time\"\n)\n\n// Timeout is a middleware that cancels ctx after a given timeout and returns\n// a 504 Gateway Timeout error to the client.\n//\n// It's required that you select on the ctx.Done() channel to check whether\n// the context has reached its deadline and return; otherwise, the timeout\n// signal will simply be ignored.\n//\n// i.e. a route/handler may look like:\n//\n//  r.Get(\"/long\", func(w http.ResponseWriter, r *http.Request) {\n// \t ctx := r.Context()\n// \t processTime := time.Duration(rand.Intn(4)+1) * time.Second\n//\n// \t select {\n// \t case <-ctx.Done():\n// \t \treturn\n//\n// \t case <-time.After(processTime):\n// \t \t // The above channel simulates some hard work.\n// \t }\n//\n// \t w.Write([]byte(\"done\"))\n//  })\n//\nfunc Timeout(timeout time.Duration) func(next http.Handler) http.Handler {\n\treturn func(next http.Handler) http.Handler {\n\t\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\t\tctx, cancel := context.WithTimeout(r.Context(), timeout)\n\t\t\tdefer func() {\n\t\t\t\tcancel()\n\t\t\t\tif ctx.Err() == context.DeadlineExceeded {\n\t\t\t\t\tw.WriteHeader(http.StatusGatewayTimeout)\n\t\t\t\t}\n\t\t\t}()\n\n\t\t\tr = r.WithContext(ctx)\n\t\t\tnext.ServeHTTP(w, r)\n\t\t}\n\t\treturn http.HandlerFunc(fn)\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/url_format.go",
    "content": "package middleware\n\nimport (\n\t\"context\"\n\t\"net/http\"\n\t\"strings\"\n\n\t\"github.com/go-chi/chi\"\n)\n\nvar (\n\t// URLFormatCtxKey is the context.Context key to store the URL format data\n\t// for a request.\n\tURLFormatCtxKey = &contextKey{\"URLFormat\"}\n)\n\n// URLFormat is a middleware that parses the url extension from a request path and stores it\n// on the context as a string under the key `middleware.URLFormatCtxKey`. The middleware will\n// trim the suffix from the routing path and continue routing.\n//\n// Routers should not include a url parameter for the suffix when using this middleware.\n//\n// Sample usage.. for url paths: `/articles/1`, `/articles/1.json` and `/articles/1.xml`\n//\n//  func routes() http.Handler {\n//    r := chi.NewRouter()\n//    r.Use(middleware.URLFormat)\n//\n//    r.Get(\"/articles/{id}\", ListArticles)\n//\n//    return r\n//  }\n//\n//  func ListArticles(w http.ResponseWriter, r *http.Request) {\n// \t  urlFormat, _ := r.Context().Value(middleware.URLFormatCtxKey).(string)\n//\n// \t  switch urlFormat {\n// \t  case \"json\":\n// \t  \trender.JSON(w, r, articles)\n// \t  case \"xml\":\n// \t  \trender.XML(w, r, articles)\n// \t  default:\n// \t  \trender.JSON(w, r, articles)\n// \t  }\n// }\n//\nfunc URLFormat(next http.Handler) http.Handler {\n\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\tctx := r.Context()\n\n\t\tvar format string\n\t\tpath := r.URL.Path\n\n\t\tif strings.Index(path, \".\") > 0 {\n\t\t\tbase := strings.LastIndex(path, \"/\")\n\t\t\tidx := strings.Index(path[base:], \".\")\n\n\t\t\tif idx > 0 {\n\t\t\t\tidx += base\n\t\t\t\tformat = path[idx+1:]\n\n\t\t\t\trctx := chi.RouteContext(r.Context())\n\t\t\t\trctx.RoutePath = path[:idx]\n\t\t\t}\n\t\t}\n\n\t\tr = r.WithContext(context.WithValue(ctx, URLFormatCtxKey, format))\n\n\t\tnext.ServeHTTP(w, r)\n\t}\n\treturn http.HandlerFunc(fn)\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/value.go",
    "content": "package middleware\n\nimport (\n\t\"context\"\n\t\"net/http\"\n)\n\n// WithValue is a middleware that sets a given key/value in a context chain.\nfunc WithValue(key interface{}, val interface{}) func(next http.Handler) http.Handler {\n\treturn func(next http.Handler) http.Handler {\n\t\tfn := func(w http.ResponseWriter, r *http.Request) {\n\t\t\tr = r.WithContext(context.WithValue(r.Context(), key, val))\n\t\t\tnext.ServeHTTP(w, r)\n\t\t}\n\t\treturn http.HandlerFunc(fn)\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/middleware/wrap_writer.go",
    "content": "package middleware\n\n// The original work was derived from Goji's middleware, source:\n// https://github.com/zenazn/goji/tree/master/web/middleware\n\nimport (\n\t\"bufio\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n)\n\n// NewWrapResponseWriter wraps an http.ResponseWriter, returning a proxy that allows you to\n// hook into various parts of the response process.\nfunc NewWrapResponseWriter(w http.ResponseWriter, protoMajor int) WrapResponseWriter {\n\t_, fl := w.(http.Flusher)\n\n\tbw := basicWriter{ResponseWriter: w}\n\n\tif protoMajor == 2 {\n\t\t_, ps := w.(http.Pusher)\n\t\tif fl && ps {\n\t\t\treturn &http2FancyWriter{bw}\n\t\t}\n\t} else {\n\t\t_, hj := w.(http.Hijacker)\n\t\t_, rf := w.(io.ReaderFrom)\n\t\tif fl && hj && rf {\n\t\t\treturn &httpFancyWriter{bw}\n\t\t}\n\t}\n\tif fl {\n\t\treturn &flushWriter{bw}\n\t}\n\n\treturn &bw\n}\n\n// WrapResponseWriter is a proxy around an http.ResponseWriter that allows you to hook\n// into various parts of the response process.\ntype WrapResponseWriter interface {\n\thttp.ResponseWriter\n\t// Status returns the HTTP status of the request, or 0 if one has not\n\t// yet been sent.\n\tStatus() int\n\t// BytesWritten returns the total number of bytes sent to the client.\n\tBytesWritten() int\n\t// Tee causes the response body to be written to the given io.Writer in\n\t// addition to proxying the writes through. Only one io.Writer can be\n\t// tee'd to at once: setting a second one will overwrite the first.\n\t// Writes will be sent to the proxy before being written to this\n\t// io.Writer. 
It is illegal for the tee'd writer to be modified\n\t// concurrently with writes.\n\tTee(io.Writer)\n\t// Unwrap returns the original proxied target.\n\tUnwrap() http.ResponseWriter\n}\n\n// basicWriter wraps a http.ResponseWriter that implements the minimal\n// http.ResponseWriter interface.\ntype basicWriter struct {\n\thttp.ResponseWriter\n\twroteHeader bool\n\tcode        int\n\tbytes       int\n\ttee         io.Writer\n}\n\nfunc (b *basicWriter) WriteHeader(code int) {\n\tif !b.wroteHeader {\n\t\tb.code = code\n\t\tb.wroteHeader = true\n\t\tb.ResponseWriter.WriteHeader(code)\n\t}\n}\n\nfunc (b *basicWriter) Write(buf []byte) (int, error) {\n\tb.maybeWriteHeader()\n\tn, err := b.ResponseWriter.Write(buf)\n\tif b.tee != nil {\n\t\t_, err2 := b.tee.Write(buf[:n])\n\t\t// Prefer errors generated by the proxied writer.\n\t\tif err == nil {\n\t\t\terr = err2\n\t\t}\n\t}\n\tb.bytes += n\n\treturn n, err\n}\n\nfunc (b *basicWriter) maybeWriteHeader() {\n\tif !b.wroteHeader {\n\t\tb.WriteHeader(http.StatusOK)\n\t}\n}\n\nfunc (b *basicWriter) Status() int {\n\treturn b.code\n}\n\nfunc (b *basicWriter) BytesWritten() int {\n\treturn b.bytes\n}\n\nfunc (b *basicWriter) Tee(w io.Writer) {\n\tb.tee = w\n}\n\nfunc (b *basicWriter) Unwrap() http.ResponseWriter {\n\treturn b.ResponseWriter\n}\n\ntype flushWriter struct {\n\tbasicWriter\n}\n\nfunc (f *flushWriter) Flush() {\n\tf.wroteHeader = true\n\tfl := f.basicWriter.ResponseWriter.(http.Flusher)\n\tfl.Flush()\n}\n\nvar _ http.Flusher = &flushWriter{}\n\n// httpFancyWriter is a HTTP writer that additionally satisfies\n// http.Flusher, http.Hijacker, and io.ReaderFrom. 
It exists for the common case\n// of wrapping the http.ResponseWriter that package http gives you, in order to\n// make the proxied object support the full method set of the proxied object.\ntype httpFancyWriter struct {\n\tbasicWriter\n}\n\nfunc (f *httpFancyWriter) Flush() {\n\tf.wroteHeader = true\n\tfl := f.basicWriter.ResponseWriter.(http.Flusher)\n\tfl.Flush()\n}\n\nfunc (f *httpFancyWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {\n\thj := f.basicWriter.ResponseWriter.(http.Hijacker)\n\treturn hj.Hijack()\n}\n\nfunc (f *http2FancyWriter) Push(target string, opts *http.PushOptions) error {\n\treturn f.basicWriter.ResponseWriter.(http.Pusher).Push(target, opts)\n}\n\nfunc (f *httpFancyWriter) ReadFrom(r io.Reader) (int64, error) {\n\tif f.basicWriter.tee != nil {\n\t\tn, err := io.Copy(&f.basicWriter, r)\n\t\tf.basicWriter.bytes += int(n)\n\t\treturn n, err\n\t}\n\trf := f.basicWriter.ResponseWriter.(io.ReaderFrom)\n\tf.basicWriter.maybeWriteHeader()\n\tn, err := rf.ReadFrom(r)\n\tf.basicWriter.bytes += int(n)\n\treturn n, err\n}\n\nvar _ http.Flusher = &httpFancyWriter{}\nvar _ http.Hijacker = &httpFancyWriter{}\nvar _ http.Pusher = &http2FancyWriter{}\nvar _ io.ReaderFrom = &httpFancyWriter{}\n\n// http2FancyWriter is a HTTP2 writer that additionally satisfies\n// http.Flusher, and io.ReaderFrom. It exists for the common case\n// of wrapping the http.ResponseWriter that package http gives you, in order to\n// make the proxied object support the full method set of the proxied object.\ntype http2FancyWriter struct {\n\tbasicWriter\n}\n\nfunc (f *http2FancyWriter) Flush() {\n\tf.wroteHeader = true\n\tfl := f.basicWriter.ResponseWriter.(http.Flusher)\n\tfl.Flush()\n}\n\nvar _ http.Flusher = &http2FancyWriter{}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/mux.go",
    "content": "package chi\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"strings\"\n\t\"sync\"\n)\n\nvar _ Router = &Mux{}\n\n// Mux is a simple HTTP route multiplexer that parses a request path,\n// records any URL params, and executes an end handler. It implements\n// the http.Handler interface and is friendly with the standard library.\n//\n// Mux is designed to be fast, minimal and offer a powerful API for building\n// modular and composable HTTP services with a large set of handlers. It's\n// particularly useful for writing large REST API services that break a handler\n// into many smaller parts composed of middlewares and end handlers.\ntype Mux struct {\n\t// The radix trie router\n\ttree *node\n\n\t// The middleware stack\n\tmiddlewares []func(http.Handler) http.Handler\n\n\t// Controls the behaviour of middleware chain generation when a mux\n\t// is registered as an inline group inside another mux.\n\tinline bool\n\tparent *Mux\n\n\t// The computed mux handler made of the chained middleware stack and\n\t// the tree router\n\thandler http.Handler\n\n\t// Routing context pool\n\tpool *sync.Pool\n\n\t// Custom route not found handler\n\tnotFoundHandler http.HandlerFunc\n\n\t// Custom method not allowed handler\n\tmethodNotAllowedHandler http.HandlerFunc\n}\n\n// NewMux returns a newly initialized Mux object that implements the Router\n// interface.\nfunc NewMux() *Mux {\n\tmux := &Mux{tree: &node{}, pool: &sync.Pool{}}\n\tmux.pool.New = func() interface{} {\n\t\treturn NewRouteContext()\n\t}\n\treturn mux\n}\n\n// ServeHTTP is the single method of the http.Handler interface that makes\n// Mux interoperable with the standard library. 
It uses a sync.Pool to get and\n// reuse routing contexts for each request.\nfunc (mx *Mux) ServeHTTP(w http.ResponseWriter, r *http.Request) {\n\t// Ensure the mux has some routes defined on the mux\n\tif mx.handler == nil {\n\t\tmx.NotFoundHandler().ServeHTTP(w, r)\n\t\treturn\n\t}\n\n\t// Check if a routing context already exists from a parent router.\n\trctx, _ := r.Context().Value(RouteCtxKey).(*Context)\n\tif rctx != nil {\n\t\tmx.handler.ServeHTTP(w, r)\n\t\treturn\n\t}\n\n\t// Fetch a RouteContext object from the sync pool, and call the computed\n\t// mx.handler that is comprised of mx.middlewares + mx.routeHTTP.\n\t// Once the request is finished, reset the routing context and put it back\n\t// into the pool for reuse from another request.\n\trctx = mx.pool.Get().(*Context)\n\trctx.Reset()\n\trctx.Routes = mx\n\n\t// NOTE: r.WithContext() causes 2 allocations and context.WithValue() causes 1 allocation\n\tr = r.WithContext(context.WithValue(r.Context(), RouteCtxKey, rctx))\n\n\t// Serve the request and once its done, put the request context back in the sync pool\n\tmx.handler.ServeHTTP(w, r)\n\tmx.pool.Put(rctx)\n}\n\n// Use appends a middleware handler to the Mux middleware stack.\n//\n// The middleware stack for any Mux will execute before searching for a matching\n// route to a specific handler, which provides opportunity to respond early,\n// change the course of the request execution, or set request-scoped values for\n// the next http.Handler.\nfunc (mx *Mux) Use(middlewares ...func(http.Handler) http.Handler) {\n\tif mx.handler != nil {\n\t\tpanic(\"chi: all middlewares must be defined before routes on a mux\")\n\t}\n\tmx.middlewares = append(mx.middlewares, middlewares...)\n}\n\n// Handle adds the route `pattern` that matches any http method to\n// execute the `handler` http.Handler.\nfunc (mx *Mux) Handle(pattern string, handler http.Handler) {\n\tmx.handle(mALL, pattern, handler)\n}\n\n// HandleFunc adds the route `pattern` that matches any http 
method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) HandleFunc(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mALL, pattern, handlerFn)\n}\n\n// Method adds the route `pattern` that matches `method` http method to\n// execute the `handler` http.Handler.\nfunc (mx *Mux) Method(method, pattern string, handler http.Handler) {\n\tm, ok := methodMap[strings.ToUpper(method)]\n\tif !ok {\n\t\tpanic(fmt.Sprintf(\"chi: '%s' http method is not supported.\", method))\n\t}\n\tmx.handle(m, pattern, handler)\n}\n\n// MethodFunc adds the route `pattern` that matches `method` http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) MethodFunc(method, pattern string, handlerFn http.HandlerFunc) {\n\tmx.Method(method, pattern, handlerFn)\n}\n\n// Connect adds the route `pattern` that matches a CONNECT http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) Connect(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mCONNECT, pattern, handlerFn)\n}\n\n// Delete adds the route `pattern` that matches a DELETE http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) Delete(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mDELETE, pattern, handlerFn)\n}\n\n// Get adds the route `pattern` that matches a GET http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) Get(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mGET, pattern, handlerFn)\n}\n\n// Head adds the route `pattern` that matches a HEAD http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) Head(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mHEAD, pattern, handlerFn)\n}\n\n// Options adds the route `pattern` that matches a OPTIONS http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) Options(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mOPTIONS, pattern, handlerFn)\n}\n\n// Patch adds the route `pattern` that matches a 
PATCH http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) Patch(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mPATCH, pattern, handlerFn)\n}\n\n// Post adds the route `pattern` that matches a POST http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) Post(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mPOST, pattern, handlerFn)\n}\n\n// Put adds the route `pattern` that matches a PUT http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) Put(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mPUT, pattern, handlerFn)\n}\n\n// Trace adds the route `pattern` that matches a TRACE http method to\n// execute the `handlerFn` http.HandlerFunc.\nfunc (mx *Mux) Trace(pattern string, handlerFn http.HandlerFunc) {\n\tmx.handle(mTRACE, pattern, handlerFn)\n}\n\n// NotFound sets a custom http.HandlerFunc for routing paths that could\n// not be found. The default 404 handler is `http.NotFound`.\nfunc (mx *Mux) NotFound(handlerFn http.HandlerFunc) {\n\t// Build NotFound handler chain\n\tm := mx\n\thFn := handlerFn\n\tif mx.inline && mx.parent != nil {\n\t\tm = mx.parent\n\t\thFn = Chain(mx.middlewares...).HandlerFunc(hFn).ServeHTTP\n\t}\n\n\t// Update the notFoundHandler from this point forward\n\tm.notFoundHandler = hFn\n\tm.updateSubRoutes(func(subMux *Mux) {\n\t\tif subMux.notFoundHandler == nil {\n\t\t\tsubMux.NotFound(hFn)\n\t\t}\n\t})\n}\n\n// MethodNotAllowed sets a custom http.HandlerFunc for routing paths where the\n// method is unresolved. 
The default handler returns a 405 with an empty body.\nfunc (mx *Mux) MethodNotAllowed(handlerFn http.HandlerFunc) {\n\t// Build MethodNotAllowed handler chain\n\tm := mx\n\thFn := handlerFn\n\tif mx.inline && mx.parent != nil {\n\t\tm = mx.parent\n\t\thFn = Chain(mx.middlewares...).HandlerFunc(hFn).ServeHTTP\n\t}\n\n\t// Update the methodNotAllowedHandler from this point forward\n\tm.methodNotAllowedHandler = hFn\n\tm.updateSubRoutes(func(subMux *Mux) {\n\t\tif subMux.methodNotAllowedHandler == nil {\n\t\t\tsubMux.MethodNotAllowed(hFn)\n\t\t}\n\t})\n}\n\n// With adds inline middlewares for an endpoint handler.\nfunc (mx *Mux) With(middlewares ...func(http.Handler) http.Handler) Router {\n\t// Similarly as in handle(), we must build the mux handler once additional\n\t// middleware registration isn't allowed for this stack, like now.\n\tif !mx.inline && mx.handler == nil {\n\t\tmx.buildRouteHandler()\n\t}\n\n\t// Copy middlewares from parent inline muxs\n\tvar mws Middlewares\n\tif mx.inline {\n\t\tmws = make(Middlewares, len(mx.middlewares))\n\t\tcopy(mws, mx.middlewares)\n\t}\n\tmws = append(mws, middlewares...)\n\n\tim := &Mux{\n\t\tpool: mx.pool, inline: true, parent: mx, tree: mx.tree, middlewares: mws,\n\t\tnotFoundHandler: mx.notFoundHandler, methodNotAllowedHandler: mx.methodNotAllowedHandler,\n\t}\n\n\treturn im\n}\n\n// Group creates a new inline-Mux with a fresh middleware stack. It's useful\n// for a group of handlers along the same routing path that use an additional\n// set of middlewares. See _examples/.\nfunc (mx *Mux) Group(fn func(r Router)) Router {\n\tim := mx.With().(*Mux)\n\tif fn != nil {\n\t\tfn(im)\n\t}\n\treturn im\n}\n\n// Route creates a new Mux with a fresh middleware stack and mounts it\n// along the `pattern` as a subrouter. Effectively, this is a short-hand\n// call to Mount. 
See _examples/.\nfunc (mx *Mux) Route(pattern string, fn func(r Router)) Router {\n\tsubRouter := NewRouter()\n\tif fn != nil {\n\t\tfn(subRouter)\n\t}\n\tmx.Mount(pattern, subRouter)\n\treturn subRouter\n}\n\n// Mount attaches another http.Handler or chi Router as a subrouter along a routing\n// path. It's very useful to split up a large API as many independent routers and\n// compose them as a single service using Mount. See _examples/.\n//\n// Note that Mount() simply sets a wildcard along the `pattern` that will continue\n// routing at the `handler`, which in most cases is another chi.Router. As a result,\n// if you define two Mount() routes on the exact same pattern the mount will panic.\nfunc (mx *Mux) Mount(pattern string, handler http.Handler) {\n\t// Provide runtime safety for ensuring a pattern isn't mounted on an existing\n\t// routing pattern.\n\tif mx.tree.findPattern(pattern+\"*\") || mx.tree.findPattern(pattern+\"/*\") {\n\t\tpanic(fmt.Sprintf(\"chi: attempting to Mount() a handler on an existing path, '%s'\", pattern))\n\t}\n\n\t// Assign sub-Router's with the parent not found & method not allowed handler if not specified.\n\tsubr, ok := handler.(*Mux)\n\tif ok && subr.notFoundHandler == nil && mx.notFoundHandler != nil {\n\t\tsubr.NotFound(mx.notFoundHandler)\n\t}\n\tif ok && subr.methodNotAllowedHandler == nil && mx.methodNotAllowedHandler != nil {\n\t\tsubr.MethodNotAllowed(mx.methodNotAllowedHandler)\n\t}\n\n\tmountHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\trctx := RouteContext(r.Context())\n\t\trctx.RoutePath = mx.nextRoutePath(rctx)\n\t\thandler.ServeHTTP(w, r)\n\t})\n\n\tif pattern == \"\" || pattern[len(pattern)-1] != '/' {\n\t\tmx.handle(mALL|mSTUB, pattern, mountHandler)\n\t\tmx.handle(mALL|mSTUB, pattern+\"/\", mountHandler)\n\t\tpattern += \"/\"\n\t}\n\n\tmethod := mALL\n\tsubroutes, _ := handler.(Routes)\n\tif subroutes != nil {\n\t\tmethod |= mSTUB\n\t}\n\tn := mx.handle(method, pattern+\"*\", 
mountHandler)\n\n\tif subroutes != nil {\n\t\tn.subroutes = subroutes\n\t}\n}\n\n// Routes returns a slice of routing information from the tree,\n// useful for traversing available routes of a router.\nfunc (mx *Mux) Routes() []Route {\n\treturn mx.tree.routes()\n}\n\n// Middlewares returns a slice of middleware handler functions.\nfunc (mx *Mux) Middlewares() Middlewares {\n\treturn mx.middlewares\n}\n\n// Match searches the routing tree for a handler that matches the method/path.\n// It's similar to routing a http request, but without executing the handler\n// thereafter.\n//\n// Note: the *Context state is updated during execution, so manage\n// the state carefully or make a NewRouteContext().\nfunc (mx *Mux) Match(rctx *Context, method, path string) bool {\n\tm, ok := methodMap[method]\n\tif !ok {\n\t\treturn false\n\t}\n\n\tnode, _, h := mx.tree.FindRoute(rctx, m, path)\n\n\tif node != nil && node.subroutes != nil {\n\t\trctx.RoutePath = mx.nextRoutePath(rctx)\n\t\treturn node.subroutes.Match(rctx, method, rctx.RoutePath)\n\t}\n\n\treturn h != nil\n}\n\n// NotFoundHandler returns the default Mux 404 responder whenever a route\n// cannot be found.\nfunc (mx *Mux) NotFoundHandler() http.HandlerFunc {\n\tif mx.notFoundHandler != nil {\n\t\treturn mx.notFoundHandler\n\t}\n\treturn http.NotFound\n}\n\n// MethodNotAllowedHandler returns the default Mux 405 responder whenever\n// a method cannot be resolved for a route.\nfunc (mx *Mux) MethodNotAllowedHandler() http.HandlerFunc {\n\tif mx.methodNotAllowedHandler != nil {\n\t\treturn mx.methodNotAllowedHandler\n\t}\n\treturn methodNotAllowedHandler\n}\n\n// buildRouteHandler builds the single mux handler that is a chain of the middleware\n// stack, as defined by calls to Use(), and the tree router (Mux) itself. After this\n// point, no other middlewares can be registered on this Mux's stack. 
But you can still\n// compose additional middlewares via Group()'s or using a chained middleware handler.\nfunc (mx *Mux) buildRouteHandler() {\n\tmx.handler = chain(mx.middlewares, http.HandlerFunc(mx.routeHTTP))\n}\n\n// handle registers a http.Handler in the routing tree for a particular http method\n// and routing pattern.\nfunc (mx *Mux) handle(method methodTyp, pattern string, handler http.Handler) *node {\n\tif len(pattern) == 0 || pattern[0] != '/' {\n\t\tpanic(fmt.Sprintf(\"chi: routing pattern must begin with '/' in '%s'\", pattern))\n\t}\n\n\t// Build the computed routing handler for this routing pattern.\n\tif !mx.inline && mx.handler == nil {\n\t\tmx.buildRouteHandler()\n\t}\n\n\t// Build endpoint handler with inline middlewares for the route\n\tvar h http.Handler\n\tif mx.inline {\n\t\tmx.handler = http.HandlerFunc(mx.routeHTTP)\n\t\th = Chain(mx.middlewares...).Handler(handler)\n\t} else {\n\t\th = handler\n\t}\n\n\t// Add the endpoint to the tree and return the node\n\treturn mx.tree.InsertRoute(method, pattern, h)\n}\n\n// routeHTTP routes a http.Request through the Mux routing tree to serve\n// the matching handler for a particular http method.\nfunc (mx *Mux) routeHTTP(w http.ResponseWriter, r *http.Request) {\n\t// Grab the route context object\n\trctx := r.Context().Value(RouteCtxKey).(*Context)\n\n\t// The request routing path\n\troutePath := rctx.RoutePath\n\tif routePath == \"\" {\n\t\tif r.URL.RawPath != \"\" {\n\t\t\troutePath = r.URL.RawPath\n\t\t} else {\n\t\t\troutePath = r.URL.Path\n\t\t}\n\t}\n\n\t// Check if method is supported by chi\n\tif rctx.RouteMethod == \"\" {\n\t\trctx.RouteMethod = r.Method\n\t}\n\tmethod, ok := methodMap[rctx.RouteMethod]\n\tif !ok {\n\t\tmx.MethodNotAllowedHandler().ServeHTTP(w, r)\n\t\treturn\n\t}\n\n\t// Find the route\n\tif _, _, h := mx.tree.FindRoute(rctx, method, routePath); h != nil {\n\t\th.ServeHTTP(w, r)\n\t\treturn\n\t}\n\tif rctx.methodNotAllowed {\n\t\tmx.MethodNotAllowedHandler().ServeHTTP(w, 
r)\n\t} else {\n\t\tmx.NotFoundHandler().ServeHTTP(w, r)\n\t}\n}\n\nfunc (mx *Mux) nextRoutePath(rctx *Context) string {\n\troutePath := \"/\"\n\tnx := len(rctx.routeParams.Keys) - 1 // index of last param in list\n\tif nx >= 0 && rctx.routeParams.Keys[nx] == \"*\" && len(rctx.routeParams.Values) > nx {\n\t\troutePath = \"/\" + rctx.routeParams.Values[nx]\n\t}\n\treturn routePath\n}\n\n// Recursively update data on child routers.\nfunc (mx *Mux) updateSubRoutes(fn func(subMux *Mux)) {\n\tfor _, r := range mx.tree.routes() {\n\t\tsubMux, ok := r.SubRoutes.(*Mux)\n\t\tif !ok {\n\t\t\tcontinue\n\t\t}\n\t\tfn(subMux)\n\t}\n}\n\n// methodNotAllowedHandler is a helper function to respond with a 405,\n// method not allowed.\nfunc methodNotAllowedHandler(w http.ResponseWriter, r *http.Request) {\n\tw.WriteHeader(405)\n\tw.Write(nil)\n}\n"
  },
  {
    "path": "vendor/github.com/go-chi/chi/tree.go",
    "content": "package chi\n\n// Radix tree implementation below is a based on the original work by\n// Armon Dadgar in https://github.com/armon/go-radix/blob/master/radix.go\n// (MIT licensed). It's been heavily modified for use as a HTTP routing tree.\n\nimport (\n\t\"fmt\"\n\t\"math\"\n\t\"net/http\"\n\t\"regexp\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n)\n\ntype methodTyp int\n\nconst (\n\tmSTUB methodTyp = 1 << iota\n\tmCONNECT\n\tmDELETE\n\tmGET\n\tmHEAD\n\tmOPTIONS\n\tmPATCH\n\tmPOST\n\tmPUT\n\tmTRACE\n)\n\nvar mALL = mCONNECT | mDELETE | mGET | mHEAD |\n\tmOPTIONS | mPATCH | mPOST | mPUT | mTRACE\n\nvar methodMap = map[string]methodTyp{\n\thttp.MethodConnect: mCONNECT,\n\thttp.MethodDelete:  mDELETE,\n\thttp.MethodGet:     mGET,\n\thttp.MethodHead:    mHEAD,\n\thttp.MethodOptions: mOPTIONS,\n\thttp.MethodPatch:   mPATCH,\n\thttp.MethodPost:    mPOST,\n\thttp.MethodPut:     mPUT,\n\thttp.MethodTrace:   mTRACE,\n}\n\n// RegisterMethod adds support for custom HTTP method handlers, available\n// via Router#Method and Router#MethodFunc\nfunc RegisterMethod(method string) {\n\tif method == \"\" {\n\t\treturn\n\t}\n\tmethod = strings.ToUpper(method)\n\tif _, ok := methodMap[method]; ok {\n\t\treturn\n\t}\n\tn := len(methodMap)\n\tif n > strconv.IntSize {\n\t\tpanic(fmt.Sprintf(\"chi: max number of methods reached (%d)\", strconv.IntSize))\n\t}\n\tmt := methodTyp(math.Exp2(float64(n)))\n\tmethodMap[method] = mt\n\tmALL |= mt\n}\n\ntype nodeTyp uint8\n\nconst (\n\tntStatic   nodeTyp = iota // /home\n\tntRegexp                  // /{id:[0-9]+}\n\tntParam                   // /{user}\n\tntCatchAll                // /api/v1/*\n)\n\ntype node struct {\n\t// node type: static, regexp, param, catchAll\n\ttyp nodeTyp\n\n\t// first byte of the prefix\n\tlabel byte\n\n\t// first byte of the child prefix\n\ttail byte\n\n\t// prefix is the common prefix we ignore\n\tprefix string\n\n\t// regexp matcher for regexp nodes\n\trex *regexp.Regexp\n\n\t// HTTP handler endpoints 
on the leaf node\n\tendpoints endpoints\n\n\t// subroutes on the leaf node\n\tsubroutes Routes\n\n\t// child nodes should be stored in-order for iteration,\n\t// in groups of the node type.\n\tchildren [ntCatchAll + 1]nodes\n}\n\n// endpoints is a mapping of http method constants to handlers\n// for a given route.\ntype endpoints map[methodTyp]*endpoint\n\ntype endpoint struct {\n\t// endpoint handler\n\thandler http.Handler\n\n\t// pattern is the routing pattern for handler nodes\n\tpattern string\n\n\t// parameter keys recorded on handler nodes\n\tparamKeys []string\n}\n\nfunc (s endpoints) Value(method methodTyp) *endpoint {\n\tmh, ok := s[method]\n\tif !ok {\n\t\tmh = &endpoint{}\n\t\ts[method] = mh\n\t}\n\treturn mh\n}\n\nfunc (n *node) InsertRoute(method methodTyp, pattern string, handler http.Handler) *node {\n\tvar parent *node\n\tsearch := pattern\n\n\tfor {\n\t\t// Handle key exhaustion\n\t\tif len(search) == 0 {\n\t\t\t// Insert or update the node's leaf handler\n\t\t\tn.setEndpoint(method, handler, pattern)\n\t\t\treturn n\n\t\t}\n\n\t\t// We're going to be searching for a wild node next,\n\t\t// in this case, we need to get the tail\n\t\tvar label = search[0]\n\t\tvar segTail byte\n\t\tvar segEndIdx int\n\t\tvar segTyp nodeTyp\n\t\tvar segRexpat string\n\t\tif label == '{' || label == '*' {\n\t\t\tsegTyp, _, segRexpat, segTail, _, segEndIdx = patNextSegment(search)\n\t\t}\n\n\t\tvar prefix string\n\t\tif segTyp == ntRegexp {\n\t\t\tprefix = segRexpat\n\t\t}\n\n\t\t// Look for the edge to attach to\n\t\tparent = n\n\t\tn = n.getEdge(segTyp, label, segTail, prefix)\n\n\t\t// No edge, create one\n\t\tif n == nil {\n\t\t\tchild := &node{label: label, tail: segTail, prefix: search}\n\t\t\thn := parent.addChild(child, search)\n\t\t\thn.setEndpoint(method, handler, pattern)\n\n\t\t\treturn hn\n\t\t}\n\n\t\t// Found an edge to match the pattern\n\n\t\tif n.typ > ntStatic {\n\t\t\t// We found a param node, trim the param from the search path and 
continue.\n\t\t\t// This param/wild pattern segment would already be on the tree from a previous\n\t\t\t// call to addChild when creating a new node.\n\t\t\tsearch = search[segEndIdx:]\n\t\t\tcontinue\n\t\t}\n\n\t\t// Static nodes fall below here.\n\t\t// Determine longest prefix of the search key on match.\n\t\tcommonPrefix := longestPrefix(search, n.prefix)\n\t\tif commonPrefix == len(n.prefix) {\n\t\t\t// the common prefix is as long as the current node's prefix we're attempting to insert.\n\t\t\t// keep the search going.\n\t\t\tsearch = search[commonPrefix:]\n\t\t\tcontinue\n\t\t}\n\n\t\t// Split the node\n\t\tchild := &node{\n\t\t\ttyp:    ntStatic,\n\t\t\tprefix: search[:commonPrefix],\n\t\t}\n\t\tparent.replaceChild(search[0], segTail, child)\n\n\t\t// Restore the existing node\n\t\tn.label = n.prefix[commonPrefix]\n\t\tn.prefix = n.prefix[commonPrefix:]\n\t\tchild.addChild(n, n.prefix)\n\n\t\t// If the new key is a subset, set the method/handler on this node and finish.\n\t\tsearch = search[commonPrefix:]\n\t\tif len(search) == 0 {\n\t\t\tchild.setEndpoint(method, handler, pattern)\n\t\t\treturn child\n\t\t}\n\n\t\t// Create a new edge for the node\n\t\tsubchild := &node{\n\t\t\ttyp:    ntStatic,\n\t\t\tlabel:  search[0],\n\t\t\tprefix: search,\n\t\t}\n\t\thn := child.addChild(subchild, search)\n\t\thn.setEndpoint(method, handler, pattern)\n\t\treturn hn\n\t}\n}\n\n// addChild appends the new `child` node to the tree using the `pattern` as the trie key.\n// For a URL router like chi's, we split the static, param, regexp and wildcard segments\n// into different nodes. 
In addition, addChild will recursively call itself until every\n// pattern segment is added to the url pattern tree as individual nodes, depending on type.\nfunc (n *node) addChild(child *node, prefix string) *node {\n\tsearch := prefix\n\n\t// handler leaf node added to the tree is the child.\n\t// this may be overridden later down the flow\n\thn := child\n\n\t// Parse next segment\n\tsegTyp, _, segRexpat, segTail, segStartIdx, segEndIdx := patNextSegment(search)\n\n\t// Add child depending on next up segment\n\tswitch segTyp {\n\n\tcase ntStatic:\n\t\t// Search prefix is all static (that is, has no params in path)\n\t\t// noop\n\n\tdefault:\n\t\t// Search prefix contains a param, regexp or wildcard\n\n\t\tif segTyp == ntRegexp {\n\t\t\trex, err := regexp.Compile(segRexpat)\n\t\t\tif err != nil {\n\t\t\t\tpanic(fmt.Sprintf(\"chi: invalid regexp pattern '%s' in route param\", segRexpat))\n\t\t\t}\n\t\t\tchild.prefix = segRexpat\n\t\t\tchild.rex = rex\n\t\t}\n\n\t\tif segStartIdx == 0 {\n\t\t\t// Route starts with a param\n\t\t\tchild.typ = segTyp\n\n\t\t\tif segTyp == ntCatchAll {\n\t\t\t\tsegStartIdx = -1\n\t\t\t} else {\n\t\t\t\tsegStartIdx = segEndIdx\n\t\t\t}\n\t\t\tif segStartIdx < 0 {\n\t\t\t\tsegStartIdx = len(search)\n\t\t\t}\n\t\t\tchild.tail = segTail // for params, we set the tail\n\n\t\t\tif segStartIdx != len(search) {\n\t\t\t\t// add static edge for the remaining part, split the end.\n\t\t\t\t// its not possible to have adjacent param nodes, so its certainly\n\t\t\t\t// going to be a static node next.\n\n\t\t\t\tsearch = search[segStartIdx:] // advance search position\n\n\t\t\t\tnn := &node{\n\t\t\t\t\ttyp:    ntStatic,\n\t\t\t\t\tlabel:  search[0],\n\t\t\t\t\tprefix: search,\n\t\t\t\t}\n\t\t\t\thn = child.addChild(nn, search)\n\t\t\t}\n\n\t\t} else if segStartIdx > 0 {\n\t\t\t// Route has some param\n\n\t\t\t// starts with a static segment\n\t\t\tchild.typ = ntStatic\n\t\t\tchild.prefix = search[:segStartIdx]\n\t\t\tchild.rex = nil\n\n\t\t\t// add 
the param edge node\n\t\t\tsearch = search[segStartIdx:]\n\n\t\t\tnn := &node{\n\t\t\t\ttyp:   segTyp,\n\t\t\t\tlabel: search[0],\n\t\t\t\ttail:  segTail,\n\t\t\t}\n\t\t\thn = child.addChild(nn, search)\n\n\t\t}\n\t}\n\n\tn.children[child.typ] = append(n.children[child.typ], child)\n\tn.children[child.typ].Sort()\n\treturn hn\n}\n\nfunc (n *node) replaceChild(label, tail byte, child *node) {\n\tfor i := 0; i < len(n.children[child.typ]); i++ {\n\t\tif n.children[child.typ][i].label == label && n.children[child.typ][i].tail == tail {\n\t\t\tn.children[child.typ][i] = child\n\t\t\tn.children[child.typ][i].label = label\n\t\t\tn.children[child.typ][i].tail = tail\n\t\t\treturn\n\t\t}\n\t}\n\tpanic(\"chi: replacing missing child\")\n}\n\nfunc (n *node) getEdge(ntyp nodeTyp, label, tail byte, prefix string) *node {\n\tnds := n.children[ntyp]\n\tfor i := 0; i < len(nds); i++ {\n\t\tif nds[i].label == label && nds[i].tail == tail {\n\t\t\tif ntyp == ntRegexp && nds[i].prefix != prefix {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\treturn nds[i]\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (n *node) setEndpoint(method methodTyp, handler http.Handler, pattern string) {\n\t// Set the handler for the method type on the node\n\tif n.endpoints == nil {\n\t\tn.endpoints = make(endpoints)\n\t}\n\n\tparamKeys := patParamKeys(pattern)\n\n\tif method&mSTUB == mSTUB {\n\t\tn.endpoints.Value(mSTUB).handler = handler\n\t}\n\tif method&mALL == mALL {\n\t\th := n.endpoints.Value(mALL)\n\t\th.handler = handler\n\t\th.pattern = pattern\n\t\th.paramKeys = paramKeys\n\t\tfor _, m := range methodMap {\n\t\t\th := n.endpoints.Value(m)\n\t\t\th.handler = handler\n\t\t\th.pattern = pattern\n\t\t\th.paramKeys = paramKeys\n\t\t}\n\t} else {\n\t\th := n.endpoints.Value(method)\n\t\th.handler = handler\n\t\th.pattern = pattern\n\t\th.paramKeys = paramKeys\n\t}\n}\n\nfunc (n *node) FindRoute(rctx *Context, method methodTyp, path string) (*node, endpoints, http.Handler) {\n\t// Reset the context routing pattern and 
params\n\trctx.routePattern = \"\"\n\trctx.routeParams.Keys = rctx.routeParams.Keys[:0]\n\trctx.routeParams.Values = rctx.routeParams.Values[:0]\n\n\t// Find the routing handlers for the path\n\trn := n.findRoute(rctx, method, path)\n\tif rn == nil {\n\t\treturn nil, nil, nil\n\t}\n\n\t// Record the routing params in the request lifecycle\n\trctx.URLParams.Keys = append(rctx.URLParams.Keys, rctx.routeParams.Keys...)\n\trctx.URLParams.Values = append(rctx.URLParams.Values, rctx.routeParams.Values...)\n\n\t// Record the routing pattern in the request lifecycle\n\tif rn.endpoints[method].pattern != \"\" {\n\t\trctx.routePattern = rn.endpoints[method].pattern\n\t\trctx.RoutePatterns = append(rctx.RoutePatterns, rctx.routePattern)\n\t}\n\n\treturn rn, rn.endpoints, rn.endpoints[method].handler\n}\n\n// Recursive edge traversal by checking all nodeTyp groups along the way.\n// It's like searching through a multi-dimensional radix trie.\nfunc (n *node) findRoute(rctx *Context, method methodTyp, path string) *node {\n\tnn := n\n\tsearch := path\n\n\tfor t, nds := range nn.children {\n\t\tntyp := nodeTyp(t)\n\t\tif len(nds) == 0 {\n\t\t\tcontinue\n\t\t}\n\n\t\tvar xn *node\n\t\txsearch := search\n\n\t\tvar label byte\n\t\tif search != \"\" {\n\t\t\tlabel = search[0]\n\t\t}\n\n\t\tswitch ntyp {\n\t\tcase ntStatic:\n\t\t\txn = nds.findEdge(label)\n\t\t\tif xn == nil || !strings.HasPrefix(xsearch, xn.prefix) {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\txsearch = xsearch[len(xn.prefix):]\n\n\t\tcase ntParam, ntRegexp:\n\t\t\t// short-circuit and return no matching route for empty param values\n\t\t\tif xsearch == \"\" {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// serially loop through each node grouped by the tail delimiter\n\t\t\tfor idx := 0; idx < len(nds); idx++ {\n\t\t\t\txn = nds[idx]\n\n\t\t\t\t// label for param nodes is the delimiter byte\n\t\t\t\tp := strings.IndexByte(xsearch, xn.tail)\n\n\t\t\t\tif p < 0 {\n\t\t\t\t\tif xn.tail == '/' {\n\t\t\t\t\t\tp = len(xsearch)\n\t\t\t\t\t} 
else {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif ntyp == ntRegexp && xn.rex != nil {\n\t\t\t\t\tif !xn.rex.Match([]byte(xsearch[:p])) {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t} else if strings.IndexByte(xsearch[:p], '/') != -1 {\n\t\t\t\t\t// avoid a match across path segments\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\tprevlen := len(rctx.routeParams.Values)\n\t\t\t\trctx.routeParams.Values = append(rctx.routeParams.Values, xsearch[:p])\n\t\t\t\txsearch = xsearch[p:]\n\n\t\t\t\tif len(xsearch) == 0 {\n\t\t\t\t\tif xn.isLeaf() {\n\t\t\t\t\t\th := xn.endpoints[method]\n\t\t\t\t\t\tif h != nil && h.handler != nil {\n\t\t\t\t\t\t\trctx.routeParams.Keys = append(rctx.routeParams.Keys, h.paramKeys...)\n\t\t\t\t\t\t\treturn xn\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t// flag that the routing context found a route, but not a corresponding\n\t\t\t\t\t\t// supported method\n\t\t\t\t\t\trctx.methodNotAllowed = true\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// recursively find the next node on this branch\n\t\t\t\tfin := xn.findRoute(rctx, method, xsearch)\n\t\t\t\tif fin != nil {\n\t\t\t\t\treturn fin\n\t\t\t\t}\n\n\t\t\t\t// not found on this branch, reset vars\n\t\t\t\trctx.routeParams.Values = rctx.routeParams.Values[:prevlen]\n\t\t\t\txsearch = search\n\t\t\t}\n\n\t\t\trctx.routeParams.Values = append(rctx.routeParams.Values, \"\")\n\n\t\tdefault:\n\t\t\t// catch-all nodes\n\t\t\trctx.routeParams.Values = append(rctx.routeParams.Values, search)\n\t\t\txn = nds[0]\n\t\t\txsearch = \"\"\n\t\t}\n\n\t\tif xn == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\t// did we find it yet?\n\t\tif len(xsearch) == 0 {\n\t\t\tif xn.isLeaf() {\n\t\t\t\th := xn.endpoints[method]\n\t\t\t\tif h != nil && h.handler != nil {\n\t\t\t\t\trctx.routeParams.Keys = append(rctx.routeParams.Keys, h.paramKeys...)\n\t\t\t\t\treturn xn\n\t\t\t\t}\n\n\t\t\t\t// flag that the routing context found a route, but not a corresponding\n\t\t\t\t// supported method\n\t\t\t\trctx.methodNotAllowed = 
true\n\t\t\t}\n\t\t}\n\n\t\t// recursively find the next node..\n\t\tfin := xn.findRoute(rctx, method, xsearch)\n\t\tif fin != nil {\n\t\t\treturn fin\n\t\t}\n\n\t\t// Did not find final handler, let's remove the param here if it was set\n\t\tif xn.typ > ntStatic {\n\t\t\tif len(rctx.routeParams.Values) > 0 {\n\t\t\t\trctx.routeParams.Values = rctx.routeParams.Values[:len(rctx.routeParams.Values)-1]\n\t\t\t}\n\t\t}\n\n\t}\n\n\treturn nil\n}\n\nfunc (n *node) findEdge(ntyp nodeTyp, label byte) *node {\n\tnds := n.children[ntyp]\n\tnum := len(nds)\n\tidx := 0\n\n\tswitch ntyp {\n\tcase ntStatic, ntParam, ntRegexp:\n\t\ti, j := 0, num-1\n\t\tfor i <= j {\n\t\t\tidx = i + (j-i)/2\n\t\t\tif label > nds[idx].label {\n\t\t\t\ti = idx + 1\n\t\t\t} else if label < nds[idx].label {\n\t\t\t\tj = idx - 1\n\t\t\t} else {\n\t\t\t\ti = num // breaks cond\n\t\t\t}\n\t\t}\n\t\tif nds[idx].label != label {\n\t\t\treturn nil\n\t\t}\n\t\treturn nds[idx]\n\n\tdefault: // catch all\n\t\treturn nds[idx]\n\t}\n}\n\nfunc (n *node) isLeaf() bool {\n\treturn n.endpoints != nil\n}\n\nfunc (n *node) findPattern(pattern string) bool {\n\tnn := n\n\tfor _, nds := range nn.children {\n\t\tif len(nds) == 0 {\n\t\t\tcontinue\n\t\t}\n\n\t\tn = nn.findEdge(nds[0].typ, pattern[0])\n\t\tif n == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tvar idx int\n\t\tvar xpattern string\n\n\t\tswitch n.typ {\n\t\tcase ntStatic:\n\t\t\tidx = longestPrefix(pattern, n.prefix)\n\t\t\tif idx < len(n.prefix) {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\tcase ntParam, ntRegexp:\n\t\t\tidx = strings.IndexByte(pattern, '}') + 1\n\n\t\tcase ntCatchAll:\n\t\t\tidx = longestPrefix(pattern, \"*\")\n\n\t\tdefault:\n\t\t\tpanic(\"chi: unknown node type\")\n\t\t}\n\n\t\txpattern = pattern[idx:]\n\t\tif len(xpattern) == 0 {\n\t\t\treturn true\n\t\t}\n\n\t\treturn n.findPattern(xpattern)\n\t}\n\treturn false\n}\n\nfunc (n *node) routes() []Route {\n\trts := []Route{}\n\n\tn.walk(func(eps endpoints, subroutes Routes) bool {\n\t\tif eps[mSTUB] != nil 
&& eps[mSTUB].handler != nil && subroutes == nil {\n\t\t\treturn false\n\t\t}\n\n\t\t// Group methodHandlers by unique patterns\n\t\tpats := make(map[string]endpoints)\n\n\t\tfor mt, h := range eps {\n\t\t\tif h.pattern == \"\" {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tp, ok := pats[h.pattern]\n\t\t\tif !ok {\n\t\t\t\tp = endpoints{}\n\t\t\t\tpats[h.pattern] = p\n\t\t\t}\n\t\t\tp[mt] = h\n\t\t}\n\n\t\tfor p, mh := range pats {\n\t\t\ths := make(map[string]http.Handler)\n\t\t\tif mh[mALL] != nil && mh[mALL].handler != nil {\n\t\t\t\ths[\"*\"] = mh[mALL].handler\n\t\t\t}\n\n\t\t\tfor mt, h := range mh {\n\t\t\t\tif h.handler == nil {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tm := methodTypString(mt)\n\t\t\t\tif m == \"\" {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\ths[m] = h.handler\n\t\t\t}\n\n\t\t\trt := Route{p, hs, subroutes}\n\t\t\trts = append(rts, rt)\n\t\t}\n\n\t\treturn false\n\t})\n\n\treturn rts\n}\n\nfunc (n *node) walk(fn func(eps endpoints, subroutes Routes) bool) bool {\n\t// Visit the leaf values if any\n\tif (n.endpoints != nil || n.subroutes != nil) && fn(n.endpoints, n.subroutes) {\n\t\treturn true\n\t}\n\n\t// Recurse on the children\n\tfor _, ns := range n.children {\n\t\tfor _, cn := range ns {\n\t\t\tif cn.walk(fn) {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\treturn false\n}\n\n// patNextSegment returns the next segment details from a pattern:\n// node type, param key, regexp string, param tail byte, param starting index, param ending index\nfunc patNextSegment(pattern string) (nodeTyp, string, string, byte, int, int) {\n\tps := strings.Index(pattern, \"{\")\n\tws := strings.Index(pattern, \"*\")\n\n\tif ps < 0 && ws < 0 {\n\t\treturn ntStatic, \"\", \"\", 0, 0, len(pattern) // we return the entire thing\n\t}\n\n\t// Sanity check\n\tif ps >= 0 && ws >= 0 && ws < ps {\n\t\tpanic(\"chi: wildcard '*' must be the last pattern in a route, otherwise use a '{param}'\")\n\t}\n\n\tvar tail byte = '/' // Default endpoint tail to / byte\n\n\tif ps >= 0 {\n\t\t// 
Param/Regexp pattern is next\n\t\tnt := ntParam\n\n\t\t// Read to closing } taking into account opens and closes in curl count (cc)\n\t\tcc := 0\n\t\tpe := ps\n\t\tfor i, c := range pattern[ps:] {\n\t\t\tif c == '{' {\n\t\t\t\tcc++\n\t\t\t} else if c == '}' {\n\t\t\t\tcc--\n\t\t\t\tif cc == 0 {\n\t\t\t\t\tpe = ps + i\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif pe == ps {\n\t\t\tpanic(\"chi: route param closing delimiter '}' is missing\")\n\t\t}\n\n\t\tkey := pattern[ps+1 : pe]\n\t\tpe++ // set end to next position\n\n\t\tif pe < len(pattern) {\n\t\t\ttail = pattern[pe]\n\t\t}\n\n\t\tvar rexpat string\n\t\tif idx := strings.Index(key, \":\"); idx >= 0 {\n\t\t\tnt = ntRegexp\n\t\t\trexpat = key[idx+1:]\n\t\t\tkey = key[:idx]\n\t\t}\n\n\t\tif len(rexpat) > 0 {\n\t\t\tif rexpat[0] != '^' {\n\t\t\t\trexpat = \"^\" + rexpat\n\t\t\t}\n\t\t\tif rexpat[len(rexpat)-1] != '$' {\n\t\t\t\trexpat += \"$\"\n\t\t\t}\n\t\t}\n\n\t\treturn nt, key, rexpat, tail, ps, pe\n\t}\n\n\t// Wildcard pattern as finale\n\tif ws < len(pattern)-1 {\n\t\tpanic(\"chi: wildcard '*' must be the last value in a route. 
trim trailing text or use a '{param}' instead\")\n\t}\n\treturn ntCatchAll, \"*\", \"\", 0, ws, len(pattern)\n}\n\nfunc patParamKeys(pattern string) []string {\n\tpat := pattern\n\tparamKeys := []string{}\n\tfor {\n\t\tptyp, paramKey, _, _, _, e := patNextSegment(pat)\n\t\tif ptyp == ntStatic {\n\t\t\treturn paramKeys\n\t\t}\n\t\tfor i := 0; i < len(paramKeys); i++ {\n\t\t\tif paramKeys[i] == paramKey {\n\t\t\t\tpanic(fmt.Sprintf(\"chi: routing pattern '%s' contains duplicate param key, '%s'\", pattern, paramKey))\n\t\t\t}\n\t\t}\n\t\tparamKeys = append(paramKeys, paramKey)\n\t\tpat = pat[e:]\n\t}\n}\n\n// longestPrefix finds the length of the shared prefix\n// of two strings\nfunc longestPrefix(k1, k2 string) int {\n\tmax := len(k1)\n\tif l := len(k2); l < max {\n\t\tmax = l\n\t}\n\tvar i int\n\tfor i = 0; i < max; i++ {\n\t\tif k1[i] != k2[i] {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn i\n}\n\nfunc methodTypString(method methodTyp) string {\n\tfor s, t := range methodMap {\n\t\tif method == t {\n\t\t\treturn s\n\t\t}\n\t}\n\treturn \"\"\n}\n\ntype nodes []*node\n\n// Sort the list of nodes by label\nfunc (ns nodes) Sort()              { sort.Sort(ns); ns.tailSort() }\nfunc (ns nodes) Len() int           { return len(ns) }\nfunc (ns nodes) Swap(i, j int)      { ns[i], ns[j] = ns[j], ns[i] }\nfunc (ns nodes) Less(i, j int) bool { return ns[i].label < ns[j].label }\n\n// tailSort pushes nodes with '/' as the tail to the end of the list for param nodes.\n// The list order determines the traversal order.\nfunc (ns nodes) tailSort() {\n\tfor i := len(ns) - 1; i >= 0; i-- {\n\t\tif ns[i].typ > ntStatic && ns[i].tail == '/' {\n\t\t\tns.Swap(i, len(ns)-1)\n\t\t\treturn\n\t\t}\n\t}\n}\n\nfunc (ns nodes) findEdge(label byte) *node {\n\tnum := len(ns)\n\tidx := 0\n\ti, j := 0, num-1\n\tfor i <= j {\n\t\tidx = i + (j-i)/2\n\t\tif label > ns[idx].label {\n\t\t\ti = idx + 1\n\t\t} else if label < ns[idx].label {\n\t\t\tj = idx - 1\n\t\t} else {\n\t\t\ti = num // breaks 
cond\n\t\t}\n\t}\n\tif ns[idx].label != label {\n\t\treturn nil\n\t}\n\treturn ns[idx]\n}\n\n// Route describes the details of a routing handler.\n// Handlers map key is an HTTP method\ntype Route struct {\n\tPattern   string\n\tHandlers  map[string]http.Handler\n\tSubRoutes Routes\n}\n\n// WalkFunc is the type of the function called for each method and route visited by Walk.\ntype WalkFunc func(method string, route string, handler http.Handler, middlewares ...func(http.Handler) http.Handler) error\n\n// Walk walks any router tree that implements Routes interface.\nfunc Walk(r Routes, walkFn WalkFunc) error {\n\treturn walk(r, walkFn, \"\")\n}\n\nfunc walk(r Routes, walkFn WalkFunc, parentRoute string, parentMw ...func(http.Handler) http.Handler) error {\n\tfor _, route := range r.Routes() {\n\t\tmws := make([]func(http.Handler) http.Handler, len(parentMw))\n\t\tcopy(mws, parentMw)\n\t\tmws = append(mws, r.Middlewares()...)\n\n\t\tif route.SubRoutes != nil {\n\t\t\tif err := walk(route.SubRoutes, walkFn, parentRoute+route.Pattern, mws...); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\tfor method, handler := range route.Handlers {\n\t\t\tif method == \"*\" {\n\t\t\t\t// Ignore a \"catchAll\" method, since we pass down all the specific methods for each route.\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tfullRoute := parentRoute + route.Pattern\n\t\t\tfullRoute = strings.Replace(fullRoute, \"/*/\", \"/\", -1)\n\n\t\t\tif chain, ok := handler.(*ChainHandler); ok {\n\t\t\t\tif err := walkFn(method, fullRoute, chain.Endpoint, append(mws, chain.Middlewares...)...); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif err := walkFn(method, fullRoute, handler, mws...); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/goware/cors/LICENSE",
    "content": "Copyright (c) 2014 Olivier Poitrey <rs@dailymotion.com>\nCopyright (c) 2016-Present https://github.com/go-chi authors\n\nMIT License\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of\nthe Software, and to permit persons to whom the Software is furnished to do so,\nsubject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS\nFOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR\nCOPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER\nIN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN\nCONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "vendor/github.com/goware/cors/README.md",
    "content": "# CORS net/http middleware\n\n[go-chi/cors](https://github.com/go-chi/cors) is a fork of [github.com/rs/cors](https://github.com/rs/cors) that\nprovides a `net/http` compatible middleware for performing preflight CORS checks on the server side. These headers\nare required for using the browser native [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API).\n\nThis middleware is designed to be used as a top-level middleware on the [chi](https://github.com/go-chi/chi) router.\nApplying with within a `r.Group()` or using `With()` will not work without routes matching `OPTIONS` added.\n\n## Usage\n\n```go\nfunc main() {\n  r := chi.NewRouter()\n\n  // Basic CORS\n  // for more ideas, see: https://developer.github.com/v3/#cross-origin-resource-sharing\n  r.Use(cors.Handler(cors.Options{\n    // AllowedOrigins: []string{\"https://foo.com\"}, // Use this to allow specific origin hosts\n    AllowedOrigins:   []string{\"*\"},\n    // AllowOriginFunc:  func(r *http.Request, origin string) bool { return true },\n    AllowedMethods:   []string{\"GET\", \"POST\", \"PUT\", \"DELETE\", \"OPTIONS\"},\n    AllowedHeaders:   []string{\"Accept\", \"Authorization\", \"Content-Type\", \"X-CSRF-Token\"},\n    ExposedHeaders:   []string{\"Link\"},\n    AllowCredentials: false,\n    MaxAge:           300, // Maximum value not ignored by any of major browsers\n  }))\n\n  r.Get(\"/\", func(w http.ResponseWriter, r *http.Request) {\n    w.Write([]byte(\"welcome\"))\n  })\n\n  http.ListenAndServe(\":3000\", r)\n}\n```\n\n## Credits\n\nAll credit for the original work of this middleware goes out to [github.com/rs](github.com/rs).\n"
  },
  {
    "path": "vendor/github.com/goware/cors/cors.go",
    "content": "// cors package is net/http handler to handle CORS related requests\n// as defined by http://www.w3.org/TR/cors/\n//\n// You can configure it by passing an option struct to cors.New:\n//\n//     c := cors.New(cors.Options{\n//         AllowedOrigins: []string{\"foo.com\"},\n//         AllowedMethods: []string{\"GET\", \"POST\", \"DELETE\"},\n//         AllowCredentials: true,\n//     })\n//\n// Then insert the handler in the chain:\n//\n//     handler = c.Handler(handler)\n//\n// See Options documentation for more options.\n//\n// The resulting handler is a standard net/http handler.\npackage cors\n\nimport (\n\t\"log\"\n\t\"net/http\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n)\n\n// Options is a configuration container to setup the CORS middleware.\ntype Options struct {\n\t// AllowedOrigins is a list of origins a cross-domain request can be executed from.\n\t// If the special \"*\" value is present in the list, all origins will be allowed.\n\t// An origin may contain a wildcard (*) to replace 0 or more characters\n\t// (i.e.: http://*.domain.com). Usage of wildcards implies a small performance penalty.\n\t// Only one wildcard can be used per origin.\n\t// Default value is [\"*\"]\n\tAllowedOrigins []string\n\n\t// AllowOriginFunc is a custom function to validate the origin. It takes the origin\n\t// as argument and returns true if allowed or false otherwise. If this option is\n\t// set, the content of AllowedOrigins is ignored.\n\tAllowOriginFunc func(r *http.Request, origin string) bool\n\n\t// AllowedMethods is a list of methods the client is allowed to use with\n\t// cross-domain requests. 
Default value is simple methods (HEAD, GET and POST).\n\tAllowedMethods []string\n\n\t// AllowedHeaders is list of non simple headers the client is allowed to use with\n\t// cross-domain requests.\n\t// If the special \"*\" value is present in the list, all headers will be allowed.\n\t// Default value is [] but \"Origin\" is always appended to the list.\n\tAllowedHeaders []string\n\n\t// ExposedHeaders indicates which headers are safe to expose to the API of a CORS\n\t// API specification\n\tExposedHeaders []string\n\n\t// AllowCredentials indicates whether the request can include user credentials like\n\t// cookies, HTTP authentication or client side SSL certificates.\n\tAllowCredentials bool\n\n\t// MaxAge indicates how long (in seconds) the results of a preflight request\n\t// can be cached\n\tMaxAge int\n\n\t// OptionsPassthrough instructs preflight to let other potential next handlers to\n\t// process the OPTIONS method. Turn this on if your application handles OPTIONS.\n\tOptionsPassthrough bool\n\n\t// Debugging flag adds additional output to debug server side CORS issues\n\tDebug bool\n}\n\n// Logger generic interface for logger\ntype Logger interface {\n\tPrintf(string, ...interface{})\n}\n\n// Cors http handler\ntype Cors struct {\n\t// Debug logger\n\tLog Logger\n\n\t// Normalized list of plain allowed origins\n\tallowedOrigins []string\n\n\t// List of allowed origins containing wildcards\n\tallowedWOrigins []wildcard\n\n\t// Optional origin validator function\n\tallowOriginFunc func(r *http.Request, origin string) bool\n\n\t// Normalized list of allowed headers\n\tallowedHeaders []string\n\n\t// Normalized list of allowed methods\n\tallowedMethods []string\n\n\t// Normalized list of exposed headers\n\texposedHeaders []string\n\tmaxAge         int\n\n\t// Set to true when allowed origins contains a \"*\"\n\tallowedOriginsAll bool\n\n\t// Set to true when allowed headers contains a \"*\"\n\tallowedHeadersAll bool\n\n\tallowCredentials  
bool\n\toptionPassthrough bool\n}\n\n// New creates a new Cors handler with the provided options.\nfunc New(options Options) *Cors {\n\tc := &Cors{\n\t\texposedHeaders:    convert(options.ExposedHeaders, http.CanonicalHeaderKey),\n\t\tallowOriginFunc:   options.AllowOriginFunc,\n\t\tallowCredentials:  options.AllowCredentials,\n\t\tmaxAge:            options.MaxAge,\n\t\toptionPassthrough: options.OptionsPassthrough,\n\t}\n\tif options.Debug && c.Log == nil {\n\t\tc.Log = log.New(os.Stdout, \"[cors] \", log.LstdFlags)\n\t}\n\n\t// Normalize options\n\t// Note: for origins and methods matching, the spec requires a case-sensitive matching.\n\t// As it may error prone, we chose to ignore the spec here.\n\n\t// Allowed Origins\n\tif len(options.AllowedOrigins) == 0 {\n\t\tif options.AllowOriginFunc == nil {\n\t\t\t// Default is all origins\n\t\t\tc.allowedOriginsAll = true\n\t\t}\n\t} else {\n\t\tc.allowedOrigins = []string{}\n\t\tc.allowedWOrigins = []wildcard{}\n\t\tfor _, origin := range options.AllowedOrigins {\n\t\t\t// Normalize\n\t\t\torigin = strings.ToLower(origin)\n\t\t\tif origin == \"*\" {\n\t\t\t\t// If \"*\" is present in the list, turn the whole list into a match all\n\t\t\t\tc.allowedOriginsAll = true\n\t\t\t\tc.allowedOrigins = nil\n\t\t\t\tc.allowedWOrigins = nil\n\t\t\t\tbreak\n\t\t\t} else if i := strings.IndexByte(origin, '*'); i >= 0 {\n\t\t\t\t// Split the origin in two: start and end string without the *\n\t\t\t\tw := wildcard{origin[0:i], origin[i+1:]}\n\t\t\t\tc.allowedWOrigins = append(c.allowedWOrigins, w)\n\t\t\t} else {\n\t\t\t\tc.allowedOrigins = append(c.allowedOrigins, origin)\n\t\t\t}\n\t\t}\n\t}\n\n\t// Allowed Headers\n\tif len(options.AllowedHeaders) == 0 {\n\t\t// Use sensible defaults\n\t\tc.allowedHeaders = []string{\"Origin\", \"Accept\", \"Content-Type\"}\n\t} else {\n\t\t// Origin is always appended as some browsers will always request for this header at preflight\n\t\tc.allowedHeaders = convert(append(options.AllowedHeaders, 
\"Origin\"), http.CanonicalHeaderKey)\n\t\tfor _, h := range options.AllowedHeaders {\n\t\t\tif h == \"*\" {\n\t\t\t\tc.allowedHeadersAll = true\n\t\t\t\tc.allowedHeaders = nil\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\n\t// Allowed Methods\n\tif len(options.AllowedMethods) == 0 {\n\t\t// Default is spec's \"simple\" methods\n\t\tc.allowedMethods = []string{http.MethodGet, http.MethodPost, http.MethodHead}\n\t} else {\n\t\tc.allowedMethods = convert(options.AllowedMethods, strings.ToUpper)\n\t}\n\n\treturn c\n}\n\n// Handler creates a new Cors handler with passed options.\nfunc Handler(options Options) func(next http.Handler) http.Handler {\n\tc := New(options)\n\treturn c.Handler\n}\n\n// AllowAll create a new Cors handler with permissive configuration allowing all\n// origins with all standard methods with any header and credentials.\nfunc AllowAll() *Cors {\n\treturn New(Options{\n\t\tAllowedOrigins: []string{\"*\"},\n\t\tAllowedMethods: []string{\n\t\t\thttp.MethodHead,\n\t\t\thttp.MethodGet,\n\t\t\thttp.MethodPost,\n\t\t\thttp.MethodPut,\n\t\t\thttp.MethodPatch,\n\t\t\thttp.MethodDelete,\n\t\t},\n\t\tAllowedHeaders:   []string{\"*\"},\n\t\tAllowCredentials: false,\n\t})\n}\n\n// Handler apply the CORS specification on the request, and add relevant CORS headers\n// as necessary.\nfunc (c *Cors) Handler(next http.Handler) http.Handler {\n\treturn http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tif r.Method == http.MethodOptions && r.Header.Get(\"Access-Control-Request-Method\") != \"\" {\n\t\t\tc.logf(\"Handler: Preflight request\")\n\t\t\tc.handlePreflight(w, r)\n\t\t\t// Preflight requests are standalone and should stop the chain as some other\n\t\t\t// middleware may not handle OPTIONS requests correctly. 
One typical example\n\t\t\t// is authentication middleware ; OPTIONS requests won't carry authentication\n\t\t\t// headers (see #1)\n\t\t\tif c.optionPassthrough {\n\t\t\t\tnext.ServeHTTP(w, r)\n\t\t\t} else {\n\t\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\t}\n\t\t} else {\n\t\t\tc.logf(\"Handler: Actual request\")\n\t\t\tc.handleActualRequest(w, r)\n\t\t\tnext.ServeHTTP(w, r)\n\t\t}\n\t})\n}\n\n// handlePreflight handles pre-flight CORS requests\nfunc (c *Cors) handlePreflight(w http.ResponseWriter, r *http.Request) {\n\theaders := w.Header()\n\torigin := r.Header.Get(\"Origin\")\n\n\tif r.Method != http.MethodOptions {\n\t\tc.logf(\"Preflight aborted: %s!=OPTIONS\", r.Method)\n\t\treturn\n\t}\n\t// Always set Vary headers\n\t// see https://github.com/rs/cors/issues/10,\n\t//     https://github.com/rs/cors/commit/dbdca4d95feaa7511a46e6f1efb3b3aa505bc43f#commitcomment-12352001\n\theaders.Add(\"Vary\", \"Origin\")\n\theaders.Add(\"Vary\", \"Access-Control-Request-Method\")\n\theaders.Add(\"Vary\", \"Access-Control-Request-Headers\")\n\n\tif origin == \"\" {\n\t\tc.logf(\"Preflight aborted: empty origin\")\n\t\treturn\n\t}\n\tif !c.isOriginAllowed(r, origin) {\n\t\tc.logf(\"Preflight aborted: origin '%s' not allowed\", origin)\n\t\treturn\n\t}\n\n\treqMethod := r.Header.Get(\"Access-Control-Request-Method\")\n\tif !c.isMethodAllowed(reqMethod) {\n\t\tc.logf(\"Preflight aborted: method '%s' not allowed\", reqMethod)\n\t\treturn\n\t}\n\treqHeaders := parseHeaderList(r.Header.Get(\"Access-Control-Request-Headers\"))\n\tif !c.areHeadersAllowed(reqHeaders) {\n\t\tc.logf(\"Preflight aborted: headers '%v' not allowed\", reqHeaders)\n\t\treturn\n\t}\n\tif c.allowedOriginsAll {\n\t\theaders.Set(\"Access-Control-Allow-Origin\", \"*\")\n\t} else {\n\t\theaders.Set(\"Access-Control-Allow-Origin\", origin)\n\t}\n\t// Spec says: Since the list of methods can be unbounded, simply returning the method indicated\n\t// by Access-Control-Request-Method (if supported) can be 
enough\n\theaders.Set(\"Access-Control-Allow-Methods\", strings.ToUpper(reqMethod))\n\tif len(reqHeaders) > 0 {\n\n\t\t// Spec says: Since the list of headers can be unbounded, simply returning supported headers\n\t\t// from Access-Control-Request-Headers can be enough\n\t\theaders.Set(\"Access-Control-Allow-Headers\", strings.Join(reqHeaders, \", \"))\n\t}\n\tif c.allowCredentials {\n\t\theaders.Set(\"Access-Control-Allow-Credentials\", \"true\")\n\t}\n\tif c.maxAge > 0 {\n\t\theaders.Set(\"Access-Control-Max-Age\", strconv.Itoa(c.maxAge))\n\t}\n\tc.logf(\"Preflight response headers: %v\", headers)\n}\n\n// handleActualRequest handles simple cross-origin requests, actual request or redirects\nfunc (c *Cors) handleActualRequest(w http.ResponseWriter, r *http.Request) {\n\theaders := w.Header()\n\torigin := r.Header.Get(\"Origin\")\n\n\t// Always set Vary, see https://github.com/rs/cors/issues/10\n\theaders.Add(\"Vary\", \"Origin\")\n\tif origin == \"\" {\n\t\tc.logf(\"Actual request no headers added: missing origin\")\n\t\treturn\n\t}\n\tif !c.isOriginAllowed(r, origin) {\n\t\tc.logf(\"Actual request no headers added: origin '%s' not allowed\", origin)\n\t\treturn\n\t}\n\n\t// Note that spec does define a way to specifically disallow a simple method like GET or\n\t// POST. 
Access-Control-Allow-Methods is only used for pre-flight requests and the\n\t// spec doesn't instruct to check the allowed methods for simple cross-origin requests.\n\t// We think it's a nice feature to be able to have control on those methods though.\n\tif !c.isMethodAllowed(r.Method) {\n\t\tc.logf(\"Actual request no headers added: method '%s' not allowed\", r.Method)\n\n\t\treturn\n\t}\n\tif c.allowedOriginsAll {\n\t\theaders.Set(\"Access-Control-Allow-Origin\", \"*\")\n\t} else {\n\t\theaders.Set(\"Access-Control-Allow-Origin\", origin)\n\t}\n\tif len(c.exposedHeaders) > 0 {\n\t\theaders.Set(\"Access-Control-Expose-Headers\", strings.Join(c.exposedHeaders, \", \"))\n\t}\n\tif c.allowCredentials {\n\t\theaders.Set(\"Access-Control-Allow-Credentials\", \"true\")\n\t}\n\tc.logf(\"Actual response added headers: %v\", headers)\n}\n\n// convenience method. checks if a logger is set.\nfunc (c *Cors) logf(format string, a ...interface{}) {\n\tif c.Log != nil {\n\t\tc.Log.Printf(format, a...)\n\t}\n}\n\n// isOriginAllowed checks if a given origin is allowed to perform cross-domain requests\n// on the endpoint\nfunc (c *Cors) isOriginAllowed(r *http.Request, origin string) bool {\n\tif c.allowOriginFunc != nil {\n\t\treturn c.allowOriginFunc(r, origin)\n\t}\n\tif c.allowedOriginsAll {\n\t\treturn true\n\t}\n\torigin = strings.ToLower(origin)\n\tfor _, o := range c.allowedOrigins {\n\t\tif o == origin {\n\t\t\treturn true\n\t\t}\n\t}\n\tfor _, w := range c.allowedWOrigins {\n\t\tif w.match(origin) {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// isMethodAllowed checks if a given method can be used as part of a cross-domain request\n// on the endpoint\nfunc (c *Cors) isMethodAllowed(method string) bool {\n\tif len(c.allowedMethods) == 0 {\n\t\t// If no method allowed, always return false, even for preflight request\n\t\treturn false\n\t}\n\tmethod = strings.ToUpper(method)\n\tif method == http.MethodOptions {\n\t\t// Always allow preflight requests\n\t\treturn 
true\n\t}\n\tfor _, m := range c.allowedMethods {\n\t\tif m == method {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// areHeadersAllowed checks if a given list of headers are allowed to used within\n// a cross-domain request.\nfunc (c *Cors) areHeadersAllowed(requestedHeaders []string) bool {\n\tif c.allowedHeadersAll || len(requestedHeaders) == 0 {\n\t\treturn true\n\t}\n\tfor _, header := range requestedHeaders {\n\t\theader = http.CanonicalHeaderKey(header)\n\t\tfound := false\n\t\tfor _, h := range c.allowedHeaders {\n\t\t\tif h == header {\n\t\t\t\tfound = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !found {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n"
  },
  {
    "path": "vendor/github.com/goware/cors/utils.go",
    "content": "package cors\n\nimport \"strings\"\n\nconst toLower = 'a' - 'A'\n\ntype converter func(string) string\n\ntype wildcard struct {\n\tprefix string\n\tsuffix string\n}\n\nfunc (w wildcard) match(s string) bool {\n\treturn len(s) >= len(w.prefix+w.suffix) && strings.HasPrefix(s, w.prefix) && strings.HasSuffix(s, w.suffix)\n}\n\n// convert converts a list of string using the passed converter function\nfunc convert(s []string, c converter) []string {\n\tout := []string{}\n\tfor _, i := range s {\n\t\tout = append(out, c(i))\n\t}\n\treturn out\n}\n\n// parseHeaderList tokenize + normalize a string containing a list of headers\nfunc parseHeaderList(headerList string) []string {\n\tl := len(headerList)\n\th := make([]byte, 0, l)\n\tupper := true\n\t// Estimate the number headers in order to allocate the right splice size\n\tt := 0\n\tfor i := 0; i < l; i++ {\n\t\tif headerList[i] == ',' {\n\t\t\tt++\n\t\t}\n\t}\n\theaders := make([]string, 0, t)\n\tfor i := 0; i < l; i++ {\n\t\tb := headerList[i]\n\t\tif b >= 'a' && b <= 'z' {\n\t\t\tif upper {\n\t\t\t\th = append(h, b-toLower)\n\t\t\t} else {\n\t\t\t\th = append(h, b)\n\t\t\t}\n\t\t} else if b >= 'A' && b <= 'Z' {\n\t\t\tif !upper {\n\t\t\t\th = append(h, b+toLower)\n\t\t\t} else {\n\t\t\t\th = append(h, b)\n\t\t\t}\n\t\t} else if b == '-' || (b >= '0' && b <= '9') {\n\t\t\th = append(h, b)\n\t\t}\n\n\t\tif b == ' ' || b == ',' || i == l-1 {\n\t\t\tif len(h) > 0 {\n\t\t\t\t// Flush the found header\n\t\t\t\theaders = append(headers, string(h))\n\t\t\t\th = h[:0]\n\t\t\t\tupper = true\n\t\t\t}\n\t\t} else {\n\t\t\tupper = b == '-'\n\t\t}\n\t}\n\treturn headers\n}\n"
  },
  {
    "path": "vendor/github.com/jinzhu/inflection/LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2015 - Jinzhu\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "vendor/github.com/jinzhu/inflection/README.md",
    "content": "# Inflection\n\nInflection pluralizes and singularizes English nouns\n\n[![wercker status](https://app.wercker.com/status/f8c7432b097d1f4ce636879670be0930/s/master \"wercker status\")](https://app.wercker.com/project/byKey/f8c7432b097d1f4ce636879670be0930)\n\n## Basic Usage\n\n```go\ninflection.Plural(\"person\") => \"people\"\ninflection.Plural(\"Person\") => \"People\"\ninflection.Plural(\"PERSON\") => \"PEOPLE\"\ninflection.Plural(\"bus\")    => \"buses\"\ninflection.Plural(\"BUS\")    => \"BUSES\"\ninflection.Plural(\"Bus\")    => \"Buses\"\n\ninflection.Singular(\"people\") => \"person\"\ninflection.Singular(\"People\") => \"Person\"\ninflection.Singular(\"PEOPLE\") => \"PERSON\"\ninflection.Singular(\"buses\")  => \"bus\"\ninflection.Singular(\"BUSES\")  => \"BUS\"\ninflection.Singular(\"Buses\")  => \"Bus\"\n\ninflection.Plural(\"FancyPerson\") => \"FancyPeople\"\ninflection.Singular(\"FancyPeople\") => \"FancyPerson\"\n```\n\n## Register Rules\n\nStandard rules are from Rails's ActiveSupport (https://github.com/rails/rails/blob/master/activesupport/lib/active_support/inflections.rb)\n\nIf you want to register more rules, follow:\n\n```\ninflection.AddUncountable(\"fish\")\ninflection.AddIrregular(\"person\", \"people\")\ninflection.AddPlural(\"(bu)s$\", \"${1}ses\") # \"bus\" => \"buses\" / \"BUS\" => \"BUSES\" / \"Bus\" => \"Buses\"\ninflection.AddSingular(\"(bus)(es)?$\", \"${1}\") # \"buses\" => \"bus\" / \"Buses\" => \"Bus\" / \"BUSES\" => \"BUS\"\n```\n\n## Contributing\n\nYou can help to make the project better, check out [http://gorm.io/contribute.html](http://gorm.io/contribute.html) for things you can do.\n\n## Author\n\n**jinzhu**\n\n* <http://github.com/jinzhu>\n* <wosmvp@gmail.com>\n* <http://twitter.com/zhangjinzhu>\n\n## License\n\nReleased under the [MIT License](http://www.opensource.org/licenses/MIT).\n"
  },
  {
    "path": "vendor/github.com/jinzhu/inflection/inflections.go",
    "content": "/*\nPackage inflection pluralizes and singularizes English nouns.\n\n\t\tinflection.Plural(\"person\") => \"people\"\n\t\tinflection.Plural(\"Person\") => \"People\"\n\t\tinflection.Plural(\"PERSON\") => \"PEOPLE\"\n\n\t\tinflection.Singular(\"people\") => \"person\"\n\t\tinflection.Singular(\"People\") => \"Person\"\n\t\tinflection.Singular(\"PEOPLE\") => \"PERSON\"\n\n\t\tinflection.Plural(\"FancyPerson\") => \"FancydPeople\"\n\t\tinflection.Singular(\"FancyPeople\") => \"FancydPerson\"\n\nStandard rules are from Rails's ActiveSupport (https://github.com/rails/rails/blob/master/activesupport/lib/active_support/inflections.rb)\n\nIf you want to register more rules, follow:\n\n\t\tinflection.AddUncountable(\"fish\")\n\t\tinflection.AddIrregular(\"person\", \"people\")\n\t\tinflection.AddPlural(\"(bu)s$\", \"${1}ses\") # \"bus\" => \"buses\" / \"BUS\" => \"BUSES\" / \"Bus\" => \"Buses\"\n\t\tinflection.AddSingular(\"(bus)(es)?$\", \"${1}\") # \"buses\" => \"bus\" / \"Buses\" => \"Bus\" / \"BUSES\" => \"BUS\"\n*/\npackage inflection\n\nimport (\n\t\"regexp\"\n\t\"strings\"\n)\n\ntype inflection struct {\n\tregexp  *regexp.Regexp\n\treplace string\n}\n\n// Regular is a regexp find replace inflection\ntype Regular struct {\n\tfind    string\n\treplace string\n}\n\n// Irregular is a hard replace inflection,\n// containing both singular and plural forms\ntype Irregular struct {\n\tsingular string\n\tplural   string\n}\n\n// RegularSlice is a slice of Regular inflections\ntype RegularSlice []Regular\n\n// IrregularSlice is a slice of Irregular inflections\ntype IrregularSlice []Irregular\n\nvar pluralInflections = RegularSlice{\n\t{\"([a-z])$\", \"${1}s\"},\n\t{\"s$\", \"s\"},\n\t{\"^(ax|test)is$\", \"${1}es\"},\n\t{\"(octop|vir)us$\", \"${1}i\"},\n\t{\"(octop|vir)i$\", \"${1}i\"},\n\t{\"(alias|status)$\", \"${1}es\"},\n\t{\"(bu)s$\", \"${1}ses\"},\n\t{\"(buffal|tomat)o$\", \"${1}oes\"},\n\t{\"([ti])um$\", \"${1}a\"},\n\t{\"([ti])a$\", 
\"${1}a\"},\n\t{\"sis$\", \"ses\"},\n\t{\"(?:([^f])fe|([lr])f)$\", \"${1}${2}ves\"},\n\t{\"(hive)$\", \"${1}s\"},\n\t{\"([^aeiouy]|qu)y$\", \"${1}ies\"},\n\t{\"(x|ch|ss|sh)$\", \"${1}es\"},\n\t{\"(matr|vert|ind)(?:ix|ex)$\", \"${1}ices\"},\n\t{\"^(m|l)ouse$\", \"${1}ice\"},\n\t{\"^(m|l)ice$\", \"${1}ice\"},\n\t{\"^(ox)$\", \"${1}en\"},\n\t{\"^(oxen)$\", \"${1}\"},\n\t{\"(quiz)$\", \"${1}zes\"},\n}\n\nvar singularInflections = RegularSlice{\n\t{\"s$\", \"\"},\n\t{\"(ss)$\", \"${1}\"},\n\t{\"(n)ews$\", \"${1}ews\"},\n\t{\"([ti])a$\", \"${1}um\"},\n\t{\"((a)naly|(b)a|(d)iagno|(p)arenthe|(p)rogno|(s)ynop|(t)he)(sis|ses)$\", \"${1}sis\"},\n\t{\"(^analy)(sis|ses)$\", \"${1}sis\"},\n\t{\"([^f])ves$\", \"${1}fe\"},\n\t{\"(hive)s$\", \"${1}\"},\n\t{\"(tive)s$\", \"${1}\"},\n\t{\"([lr])ves$\", \"${1}f\"},\n\t{\"([^aeiouy]|qu)ies$\", \"${1}y\"},\n\t{\"(s)eries$\", \"${1}eries\"},\n\t{\"(m)ovies$\", \"${1}ovie\"},\n\t{\"(c)ookies$\", \"${1}ookie\"},\n\t{\"(x|ch|ss|sh)es$\", \"${1}\"},\n\t{\"^(m|l)ice$\", \"${1}ouse\"},\n\t{\"(bus)(es)?$\", \"${1}\"},\n\t{\"(o)es$\", \"${1}\"},\n\t{\"(shoe)s$\", \"${1}\"},\n\t{\"(cris|test)(is|es)$\", \"${1}is\"},\n\t{\"^(a)x[ie]s$\", \"${1}xis\"},\n\t{\"(octop|vir)(us|i)$\", \"${1}us\"},\n\t{\"(alias|status)(es)?$\", \"${1}\"},\n\t{\"^(ox)en\", \"${1}\"},\n\t{\"(vert|ind)ices$\", \"${1}ex\"},\n\t{\"(matr)ices$\", \"${1}ix\"},\n\t{\"(quiz)zes$\", \"${1}\"},\n\t{\"(database)s$\", \"${1}\"},\n}\n\nvar irregularInflections = IrregularSlice{\n\t{\"person\", \"people\"},\n\t{\"man\", \"men\"},\n\t{\"child\", \"children\"},\n\t{\"sex\", \"sexes\"},\n\t{\"move\", \"moves\"},\n\t{\"mombie\", \"mombies\"},\n}\n\nvar uncountableInflections = []string{\"equipment\", \"information\", \"rice\", \"money\", \"species\", \"series\", \"fish\", \"sheep\", \"jeans\", \"police\"}\n\nvar compiledPluralMaps []inflection\nvar compiledSingularMaps []inflection\n\nfunc compile() {\n\tcompiledPluralMaps = []inflection{}\n\tcompiledSingularMaps = []inflection{}\n\tfor _, 
uncountable := range uncountableInflections {\n\t\tinf := inflection{\n\t\t\tregexp:  regexp.MustCompile(\"^(?i)(\" + uncountable + \")$\"),\n\t\t\treplace: \"${1}\",\n\t\t}\n\t\tcompiledPluralMaps = append(compiledPluralMaps, inf)\n\t\tcompiledSingularMaps = append(compiledSingularMaps, inf)\n\t}\n\n\tfor _, value := range irregularInflections {\n\t\tinfs := []inflection{\n\t\t\tinflection{regexp: regexp.MustCompile(strings.ToUpper(value.singular) + \"$\"), replace: strings.ToUpper(value.plural)},\n\t\t\tinflection{regexp: regexp.MustCompile(strings.Title(value.singular) + \"$\"), replace: strings.Title(value.plural)},\n\t\t\tinflection{regexp: regexp.MustCompile(value.singular + \"$\"), replace: value.plural},\n\t\t}\n\t\tcompiledPluralMaps = append(compiledPluralMaps, infs...)\n\t}\n\n\tfor _, value := range irregularInflections {\n\t\tinfs := []inflection{\n\t\t\tinflection{regexp: regexp.MustCompile(strings.ToUpper(value.plural) + \"$\"), replace: strings.ToUpper(value.singular)},\n\t\t\tinflection{regexp: regexp.MustCompile(strings.Title(value.plural) + \"$\"), replace: strings.Title(value.singular)},\n\t\t\tinflection{regexp: regexp.MustCompile(value.plural + \"$\"), replace: value.singular},\n\t\t}\n\t\tcompiledSingularMaps = append(compiledSingularMaps, infs...)\n\t}\n\n\tfor i := len(pluralInflections) - 1; i >= 0; i-- {\n\t\tvalue := pluralInflections[i]\n\t\tinfs := []inflection{\n\t\t\tinflection{regexp: regexp.MustCompile(strings.ToUpper(value.find)), replace: strings.ToUpper(value.replace)},\n\t\t\tinflection{regexp: regexp.MustCompile(value.find), replace: value.replace},\n\t\t\tinflection{regexp: regexp.MustCompile(\"(?i)\" + value.find), replace: value.replace},\n\t\t}\n\t\tcompiledPluralMaps = append(compiledPluralMaps, infs...)\n\t}\n\n\tfor i := len(singularInflections) - 1; i >= 0; i-- {\n\t\tvalue := singularInflections[i]\n\t\tinfs := []inflection{\n\t\t\tinflection{regexp: regexp.MustCompile(strings.ToUpper(value.find)), replace: 
strings.ToUpper(value.replace)},\n\t\t\tinflection{regexp: regexp.MustCompile(value.find), replace: value.replace},\n\t\t\tinflection{regexp: regexp.MustCompile(\"(?i)\" + value.find), replace: value.replace},\n\t\t}\n\t\tcompiledSingularMaps = append(compiledSingularMaps, infs...)\n\t}\n}\n\nfunc init() {\n\tcompile()\n}\n\n// AddPlural adds a plural inflection\nfunc AddPlural(find, replace string) {\n\tpluralInflections = append(pluralInflections, Regular{find, replace})\n\tcompile()\n}\n\n// AddSingular adds a singular inflection\nfunc AddSingular(find, replace string) {\n\tsingularInflections = append(singularInflections, Regular{find, replace})\n\tcompile()\n}\n\n// AddIrregular adds an irregular inflection\nfunc AddIrregular(singular, plural string) {\n\tirregularInflections = append(irregularInflections, Irregular{singular, plural})\n\tcompile()\n}\n\n// AddUncountable adds an uncountable inflection\nfunc AddUncountable(values ...string) {\n\tuncountableInflections = append(uncountableInflections, values...)\n\tcompile()\n}\n\n// GetPlural retrieves the plural inflection values\nfunc GetPlural() RegularSlice {\n\tplurals := make(RegularSlice, len(pluralInflections))\n\tcopy(plurals, pluralInflections)\n\treturn plurals\n}\n\n// GetSingular retrieves the singular inflection values\nfunc GetSingular() RegularSlice {\n\tsingulars := make(RegularSlice, len(singularInflections))\n\tcopy(singulars, singularInflections)\n\treturn singulars\n}\n\n// GetIrregular retrieves the irregular inflection values\nfunc GetIrregular() IrregularSlice {\n\tirregular := make(IrregularSlice, len(irregularInflections))\n\tcopy(irregular, irregularInflections)\n\treturn irregular\n}\n\n// GetUncountable retrieves the uncountable inflection values\nfunc GetUncountable() []string {\n\tuncountables := make([]string, len(uncountableInflections))\n\tcopy(uncountables, uncountableInflections)\n\treturn uncountables\n}\n\n// SetPlural sets the plural inflections slice\nfunc 
SetPlural(inflections RegularSlice) {\n\tpluralInflections = inflections\n\tcompile()\n}\n\n// SetSingular sets the singular inflections slice\nfunc SetSingular(inflections RegularSlice) {\n\tsingularInflections = inflections\n\tcompile()\n}\n\n// SetIrregular sets the irregular inflections slice\nfunc SetIrregular(inflections IrregularSlice) {\n\tirregularInflections = inflections\n\tcompile()\n}\n\n// SetUncountable sets the uncountable inflections slice\nfunc SetUncountable(inflections []string) {\n\tuncountableInflections = inflections\n\tcompile()\n}\n\n// Plural converts a word to its plural form\nfunc Plural(str string) string {\n\tfor _, inflection := range compiledPluralMaps {\n\t\tif inflection.regexp.MatchString(str) {\n\t\t\treturn inflection.regexp.ReplaceAllString(str, inflection.replace)\n\t\t}\n\t}\n\treturn str\n}\n\n// Singular converts a word to its singular form\nfunc Singular(str string) string {\n\tfor _, inflection := range compiledSingularMaps {\n\t\tif inflection.regexp.MatchString(str) {\n\t\t\treturn inflection.regexp.ReplaceAllString(str, inflection.replace)\n\t\t}\n\t}\n\treturn str\n}\n"
  },
  {
    "path": "vendor/github.com/jinzhu/inflection/wercker.yml",
    "content": "box: golang\n\nbuild:\n  steps:\n    - setup-go-workspace\n\n    # Gets the dependencies\n    - script:\n        name: go get\n        code: |\n          go get\n\n    # Build the project\n    - script:\n        name: go build\n        code: |\n          go build ./...\n\n    # Test the project\n    - script:\n        name: go test\n        code: |\n          go test ./...\n"
  },
  {
    "path": "vendor/github.com/lib/pq/.gitignore",
    "content": ".db\n*.test\n*~\n*.swp\n.idea\n.vscode"
  },
  {
    "path": "vendor/github.com/lib/pq/LICENSE.md",
    "content": "Copyright (c) 2011-2013, 'pq' Contributors\nPortions Copyright (C) 2011 Blake Mizerany\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "vendor/github.com/lib/pq/README.md",
    "content": "# pq - A pure Go postgres driver for Go's database/sql package\n\n[![GoDoc](https://godoc.org/github.com/lib/pq?status.svg)](https://pkg.go.dev/github.com/lib/pq?tab=doc)\n\n## Install\n\n\tgo get github.com/lib/pq\n\n## Features\n\n* SSL\n* Handles bad connections for `database/sql`\n* Scan `time.Time` correctly (i.e. `timestamp[tz]`, `time[tz]`, `date`)\n* Scan binary blobs correctly (i.e. `bytea`)\n* Package for `hstore` support\n* COPY FROM support\n* pq.ParseURL for converting urls to connection strings for sql.Open.\n* Many libpq compatible environment variables\n* Unix socket support\n* Notifications: `LISTEN`/`NOTIFY`\n* pgpass support\n* GSS (Kerberos) auth\n\n## Tests\n\n`go test` is used for testing.  See [TESTS.md](TESTS.md) for more details.\n\n## Status\n\nThis package is currently in maintenance mode, which means:\n1.   It generally does not accept new features.\n2.   It does accept bug fixes and version compatability changes provided by the community.\n3.   Maintainers usually do not resolve reported issues.\n4.   Community members are encouraged to help each other with reported issues.\n\nFor users that require new features or reliable resolution of reported bugs, we recommend using [pgx](https://github.com/jackc/pgx) which is under active development.\n"
  },
  {
    "path": "vendor/github.com/lib/pq/TESTS.md",
    "content": "# Tests\n\n## Running Tests\n\n`go test` is used for testing. A running PostgreSQL\nserver is required, with the ability to log in. The\ndatabase used for testing is \"pqgotest\" on\n\"localhost\", but these can be overridden using [environment\nvariables](https://www.postgresql.org/docs/9.3/static/libpq-envars.html).\n\nExample:\n\n\tPGHOST=/run/postgresql go test\n\n## Benchmarks\n\nA benchmark suite can be run as part of the tests:\n\n\tgo test -bench .\n\n## Example setup (Docker)\n\nRun a postgres container, publishing port 5432 and trusting local connections:\n\n```\ndocker run -p 5432:5432 -e POSTGRES_HOST_AUTH_METHOD=trust postgres\n```\n\nRun tests:\n\n```\nPGHOST=localhost PGPORT=5432 PGUSER=postgres PGSSLMODE=disable PGDATABASE=postgres go test\n```\n"
  },
  {
    "path": "vendor/github.com/lib/pq/array.go",
    "content": "package pq\n\nimport (\n\t\"bytes\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"encoding/hex\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"strings\"\n)\n\nvar typeByteSlice = reflect.TypeOf([]byte{})\nvar typeDriverValuer = reflect.TypeOf((*driver.Valuer)(nil)).Elem()\nvar typeSQLScanner = reflect.TypeOf((*sql.Scanner)(nil)).Elem()\n\n// Array returns the optimal driver.Valuer and sql.Scanner for an array or\n// slice of any dimension.\n//\n// For example:\n//  db.Query(`SELECT * FROM t WHERE id = ANY($1)`, pq.Array([]int{235, 401}))\n//\n//  var x []sql.NullInt64\n//  db.QueryRow(`SELECT ARRAY[235, 401]`).Scan(pq.Array(&x))\n//\n// Scanning multi-dimensional arrays is not supported.  Arrays where the lower\n// bound is not one (such as `[0:0]={1}') are not supported.\nfunc Array(a interface{}) interface {\n\tdriver.Valuer\n\tsql.Scanner\n} {\n\tswitch a := a.(type) {\n\tcase []bool:\n\t\treturn (*BoolArray)(&a)\n\tcase []float64:\n\t\treturn (*Float64Array)(&a)\n\tcase []float32:\n\t\treturn (*Float32Array)(&a)\n\tcase []int64:\n\t\treturn (*Int64Array)(&a)\n\tcase []int32:\n\t\treturn (*Int32Array)(&a)\n\tcase []string:\n\t\treturn (*StringArray)(&a)\n\tcase [][]byte:\n\t\treturn (*ByteaArray)(&a)\n\n\tcase *[]bool:\n\t\treturn (*BoolArray)(a)\n\tcase *[]float64:\n\t\treturn (*Float64Array)(a)\n\tcase *[]float32:\n\t\treturn (*Float32Array)(a)\n\tcase *[]int64:\n\t\treturn (*Int64Array)(a)\n\tcase *[]int32:\n\t\treturn (*Int32Array)(a)\n\tcase *[]string:\n\t\treturn (*StringArray)(a)\n\tcase *[][]byte:\n\t\treturn (*ByteaArray)(a)\n\t}\n\n\treturn GenericArray{a}\n}\n\n// ArrayDelimiter may be optionally implemented by driver.Valuer or sql.Scanner\n// to override the array delimiter used by GenericArray.\ntype ArrayDelimiter interface {\n\t// ArrayDelimiter returns the delimiter character(s) for this element's type.\n\tArrayDelimiter() string\n}\n\n// BoolArray represents a one-dimensional array of the PostgreSQL boolean type.\ntype 
BoolArray []bool\n\n// Scan implements the sql.Scanner interface.\nfunc (a *BoolArray) Scan(src interface{}) error {\n\tswitch src := src.(type) {\n\tcase []byte:\n\t\treturn a.scanBytes(src)\n\tcase string:\n\t\treturn a.scanBytes([]byte(src))\n\tcase nil:\n\t\t*a = nil\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"pq: cannot convert %T to BoolArray\", src)\n}\n\nfunc (a *BoolArray) scanBytes(src []byte) error {\n\telems, err := scanLinearArray(src, []byte{','}, \"BoolArray\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif *a != nil && len(elems) == 0 {\n\t\t*a = (*a)[:0]\n\t} else {\n\t\tb := make(BoolArray, len(elems))\n\t\tfor i, v := range elems {\n\t\t\tif len(v) != 1 {\n\t\t\t\treturn fmt.Errorf(\"pq: could not parse boolean array index %d: invalid boolean %q\", i, v)\n\t\t\t}\n\t\t\tswitch v[0] {\n\t\t\tcase 't':\n\t\t\t\tb[i] = true\n\t\t\tcase 'f':\n\t\t\t\tb[i] = false\n\t\t\tdefault:\n\t\t\t\treturn fmt.Errorf(\"pq: could not parse boolean array index %d: invalid boolean %q\", i, v)\n\t\t\t}\n\t\t}\n\t\t*a = b\n\t}\n\treturn nil\n}\n\n// Value implements the driver.Valuer interface.\nfunc (a BoolArray) Value() (driver.Value, error) {\n\tif a == nil {\n\t\treturn nil, nil\n\t}\n\n\tif n := len(a); n > 0 {\n\t\t// There will be exactly two curly brackets, N bytes of values,\n\t\t// and N-1 bytes of delimiters.\n\t\tb := make([]byte, 1+2*n)\n\n\t\tfor i := 0; i < n; i++ {\n\t\t\tb[2*i] = ','\n\t\t\tif a[i] {\n\t\t\t\tb[1+2*i] = 't'\n\t\t\t} else {\n\t\t\t\tb[1+2*i] = 'f'\n\t\t\t}\n\t\t}\n\n\t\tb[0] = '{'\n\t\tb[2*n] = '}'\n\n\t\treturn string(b), nil\n\t}\n\n\treturn \"{}\", nil\n}\n\n// ByteaArray represents a one-dimensional array of the PostgreSQL bytea type.\ntype ByteaArray [][]byte\n\n// Scan implements the sql.Scanner interface.\nfunc (a *ByteaArray) Scan(src interface{}) error {\n\tswitch src := src.(type) {\n\tcase []byte:\n\t\treturn a.scanBytes(src)\n\tcase string:\n\t\treturn a.scanBytes([]byte(src))\n\tcase nil:\n\t\t*a = nil\n\t\treturn 
nil\n\t}\n\n\treturn fmt.Errorf(\"pq: cannot convert %T to ByteaArray\", src)\n}\n\nfunc (a *ByteaArray) scanBytes(src []byte) error {\n\telems, err := scanLinearArray(src, []byte{','}, \"ByteaArray\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif *a != nil && len(elems) == 0 {\n\t\t*a = (*a)[:0]\n\t} else {\n\t\tb := make(ByteaArray, len(elems))\n\t\tfor i, v := range elems {\n\t\t\tb[i], err = parseBytea(v)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"could not parse bytea array index %d: %s\", i, err.Error())\n\t\t\t}\n\t\t}\n\t\t*a = b\n\t}\n\treturn nil\n}\n\n// Value implements the driver.Valuer interface. It uses the \"hex\" format which\n// is only supported on PostgreSQL 9.0 or newer.\nfunc (a ByteaArray) Value() (driver.Value, error) {\n\tif a == nil {\n\t\treturn nil, nil\n\t}\n\n\tif n := len(a); n > 0 {\n\t\t// There will be at least two curly brackets, 2*N bytes of quotes,\n\t\t// 3*N bytes of hex formatting, and N-1 bytes of delimiters.\n\t\tsize := 1 + 6*n\n\t\tfor _, x := range a {\n\t\t\tsize += hex.EncodedLen(len(x))\n\t\t}\n\n\t\tb := make([]byte, size)\n\n\t\tfor i, s := 0, b; i < n; i++ {\n\t\t\to := copy(s, `,\"\\\\x`)\n\t\t\to += hex.Encode(s[o:], a[i])\n\t\t\ts[o] = '\"'\n\t\t\ts = s[o+1:]\n\t\t}\n\n\t\tb[0] = '{'\n\t\tb[size-1] = '}'\n\n\t\treturn string(b), nil\n\t}\n\n\treturn \"{}\", nil\n}\n\n// Float64Array represents a one-dimensional array of the PostgreSQL double\n// precision type.\ntype Float64Array []float64\n\n// Scan implements the sql.Scanner interface.\nfunc (a *Float64Array) Scan(src interface{}) error {\n\tswitch src := src.(type) {\n\tcase []byte:\n\t\treturn a.scanBytes(src)\n\tcase string:\n\t\treturn a.scanBytes([]byte(src))\n\tcase nil:\n\t\t*a = nil\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"pq: cannot convert %T to Float64Array\", src)\n}\n\nfunc (a *Float64Array) scanBytes(src []byte) error {\n\telems, err := scanLinearArray(src, []byte{','}, \"Float64Array\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif 
*a != nil && len(elems) == 0 {\n\t\t*a = (*a)[:0]\n\t} else {\n\t\tb := make(Float64Array, len(elems))\n\t\tfor i, v := range elems {\n\t\t\tif b[i], err = strconv.ParseFloat(string(v), 64); err != nil {\n\t\t\t\treturn fmt.Errorf(\"pq: parsing array element index %d: %v\", i, err)\n\t\t\t}\n\t\t}\n\t\t*a = b\n\t}\n\treturn nil\n}\n\n// Value implements the driver.Valuer interface.\nfunc (a Float64Array) Value() (driver.Value, error) {\n\tif a == nil {\n\t\treturn nil, nil\n\t}\n\n\tif n := len(a); n > 0 {\n\t\t// There will be at least two curly brackets, N bytes of values,\n\t\t// and N-1 bytes of delimiters.\n\t\tb := make([]byte, 1, 1+2*n)\n\t\tb[0] = '{'\n\n\t\tb = strconv.AppendFloat(b, a[0], 'f', -1, 64)\n\t\tfor i := 1; i < n; i++ {\n\t\t\tb = append(b, ',')\n\t\t\tb = strconv.AppendFloat(b, a[i], 'f', -1, 64)\n\t\t}\n\n\t\treturn string(append(b, '}')), nil\n\t}\n\n\treturn \"{}\", nil\n}\n\n// Float32Array represents a one-dimensional array of the PostgreSQL single\n// precision type.\ntype Float32Array []float32\n\n// Scan implements the sql.Scanner interface.\nfunc (a *Float32Array) Scan(src interface{}) error {\n\tswitch src := src.(type) {\n\tcase []byte:\n\t\treturn a.scanBytes(src)\n\tcase string:\n\t\treturn a.scanBytes([]byte(src))\n\tcase nil:\n\t\t*a = nil\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"pq: cannot convert %T to Float32Array\", src)\n}\n\nfunc (a *Float32Array) scanBytes(src []byte) error {\n\telems, err := scanLinearArray(src, []byte{','}, \"Float32Array\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif *a != nil && len(elems) == 0 {\n\t\t*a = (*a)[:0]\n\t} else {\n\t\tb := make(Float32Array, len(elems))\n\t\tfor i, v := range elems {\n\t\t\tvar x float64\n\t\t\tif x, err = strconv.ParseFloat(string(v), 32); err != nil {\n\t\t\t\treturn fmt.Errorf(\"pq: parsing array element index %d: %v\", i, err)\n\t\t\t}\n\t\t\tb[i] = float32(x)\n\t\t}\n\t\t*a = b\n\t}\n\treturn nil\n}\n\n// Value implements the driver.Valuer interface.\nfunc (a 
Float32Array) Value() (driver.Value, error) {\n\tif a == nil {\n\t\treturn nil, nil\n\t}\n\n\tif n := len(a); n > 0 {\n\t\t// There will be at least two curly brackets, N bytes of values,\n\t\t// and N-1 bytes of delimiters.\n\t\tb := make([]byte, 1, 1+2*n)\n\t\tb[0] = '{'\n\n\t\tb = strconv.AppendFloat(b, float64(a[0]), 'f', -1, 32)\n\t\tfor i := 1; i < n; i++ {\n\t\t\tb = append(b, ',')\n\t\t\tb = strconv.AppendFloat(b, float64(a[i]), 'f', -1, 32)\n\t\t}\n\n\t\treturn string(append(b, '}')), nil\n\t}\n\n\treturn \"{}\", nil\n}\n\n// GenericArray implements the driver.Valuer and sql.Scanner interfaces for\n// an array or slice of any dimension.\ntype GenericArray struct{ A interface{} }\n\nfunc (GenericArray) evaluateDestination(rt reflect.Type) (reflect.Type, func([]byte, reflect.Value) error, string) {\n\tvar assign func([]byte, reflect.Value) error\n\tvar del = \",\"\n\n\t// TODO calculate the assign function for other types\n\t// TODO repeat this section on the element type of arrays or slices (multidimensional)\n\t{\n\t\tif reflect.PtrTo(rt).Implements(typeSQLScanner) {\n\t\t\t// dest is always addressable because it is an element of a slice.\n\t\t\tassign = func(src []byte, dest reflect.Value) (err error) {\n\t\t\t\tss := dest.Addr().Interface().(sql.Scanner)\n\t\t\t\tif src == nil {\n\t\t\t\t\terr = ss.Scan(nil)\n\t\t\t\t} else {\n\t\t\t\t\terr = ss.Scan(src)\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t\tgoto FoundType\n\t\t}\n\n\t\tassign = func([]byte, reflect.Value) error {\n\t\t\treturn fmt.Errorf(\"pq: scanning to %s is not implemented; only sql.Scanner\", rt)\n\t\t}\n\t}\n\nFoundType:\n\n\tif ad, ok := reflect.Zero(rt).Interface().(ArrayDelimiter); ok {\n\t\tdel = ad.ArrayDelimiter()\n\t}\n\n\treturn rt, assign, del\n}\n\n// Scan implements the sql.Scanner interface.\nfunc (a GenericArray) Scan(src interface{}) error {\n\tdpv := reflect.ValueOf(a.A)\n\tswitch {\n\tcase dpv.Kind() != reflect.Ptr:\n\t\treturn fmt.Errorf(\"pq: destination %T is not a 
pointer to array or slice\", a.A)\n\tcase dpv.IsNil():\n\t\treturn fmt.Errorf(\"pq: destination %T is nil\", a.A)\n\t}\n\n\tdv := dpv.Elem()\n\tswitch dv.Kind() {\n\tcase reflect.Slice:\n\tcase reflect.Array:\n\tdefault:\n\t\treturn fmt.Errorf(\"pq: destination %T is not a pointer to array or slice\", a.A)\n\t}\n\n\tswitch src := src.(type) {\n\tcase []byte:\n\t\treturn a.scanBytes(src, dv)\n\tcase string:\n\t\treturn a.scanBytes([]byte(src), dv)\n\tcase nil:\n\t\tif dv.Kind() == reflect.Slice {\n\t\t\tdv.Set(reflect.Zero(dv.Type()))\n\t\t\treturn nil\n\t\t}\n\t}\n\n\treturn fmt.Errorf(\"pq: cannot convert %T to %s\", src, dv.Type())\n}\n\nfunc (a GenericArray) scanBytes(src []byte, dv reflect.Value) error {\n\tdtype, assign, del := a.evaluateDestination(dv.Type().Elem())\n\tdims, elems, err := parseArray(src, []byte(del))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// TODO allow multidimensional\n\n\tif len(dims) > 1 {\n\t\treturn fmt.Errorf(\"pq: scanning from multidimensional ARRAY%s is not implemented\",\n\t\t\tstrings.Replace(fmt.Sprint(dims), \" \", \"][\", -1))\n\t}\n\n\t// Treat a zero-dimensional array like an array with a single dimension of zero.\n\tif len(dims) == 0 {\n\t\tdims = append(dims, 0)\n\t}\n\n\tfor i, rt := 0, dv.Type(); i < len(dims); i, rt = i+1, rt.Elem() {\n\t\tswitch rt.Kind() {\n\t\tcase reflect.Slice:\n\t\tcase reflect.Array:\n\t\t\tif rt.Len() != dims[i] {\n\t\t\t\treturn fmt.Errorf(\"pq: cannot convert ARRAY%s to %s\",\n\t\t\t\t\tstrings.Replace(fmt.Sprint(dims), \" \", \"][\", -1), dv.Type())\n\t\t\t}\n\t\tdefault:\n\t\t\t// TODO handle multidimensional\n\t\t}\n\t}\n\n\tvalues := reflect.MakeSlice(reflect.SliceOf(dtype), len(elems), len(elems))\n\tfor i, e := range elems {\n\t\tif err := assign(e, values.Index(i)); err != nil {\n\t\t\treturn fmt.Errorf(\"pq: parsing array element index %d: %v\", i, err)\n\t\t}\n\t}\n\n\t// TODO handle multidimensional\n\n\tswitch dv.Kind() {\n\tcase reflect.Slice:\n\t\tdv.Set(values.Slice(0, 
dims[0]))\n\tcase reflect.Array:\n\t\tfor i := 0; i < dims[0]; i++ {\n\t\t\tdv.Index(i).Set(values.Index(i))\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Value implements the driver.Valuer interface.\nfunc (a GenericArray) Value() (driver.Value, error) {\n\tif a.A == nil {\n\t\treturn nil, nil\n\t}\n\n\trv := reflect.ValueOf(a.A)\n\n\tswitch rv.Kind() {\n\tcase reflect.Slice:\n\t\tif rv.IsNil() {\n\t\t\treturn nil, nil\n\t\t}\n\tcase reflect.Array:\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"pq: Unable to convert %T to array\", a.A)\n\t}\n\n\tif n := rv.Len(); n > 0 {\n\t\t// There will be at least two curly brackets, N bytes of values,\n\t\t// and N-1 bytes of delimiters.\n\t\tb := make([]byte, 0, 1+2*n)\n\n\t\tb, _, err := appendArray(b, rv, n)\n\t\treturn string(b), err\n\t}\n\n\treturn \"{}\", nil\n}\n\n// Int64Array represents a one-dimensional array of the PostgreSQL integer types.\ntype Int64Array []int64\n\n// Scan implements the sql.Scanner interface.\nfunc (a *Int64Array) Scan(src interface{}) error {\n\tswitch src := src.(type) {\n\tcase []byte:\n\t\treturn a.scanBytes(src)\n\tcase string:\n\t\treturn a.scanBytes([]byte(src))\n\tcase nil:\n\t\t*a = nil\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"pq: cannot convert %T to Int64Array\", src)\n}\n\nfunc (a *Int64Array) scanBytes(src []byte) error {\n\telems, err := scanLinearArray(src, []byte{','}, \"Int64Array\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif *a != nil && len(elems) == 0 {\n\t\t*a = (*a)[:0]\n\t} else {\n\t\tb := make(Int64Array, len(elems))\n\t\tfor i, v := range elems {\n\t\t\tif b[i], err = strconv.ParseInt(string(v), 10, 64); err != nil {\n\t\t\t\treturn fmt.Errorf(\"pq: parsing array element index %d: %v\", i, err)\n\t\t\t}\n\t\t}\n\t\t*a = b\n\t}\n\treturn nil\n}\n\n// Value implements the driver.Valuer interface.\nfunc (a Int64Array) Value() (driver.Value, error) {\n\tif a == nil {\n\t\treturn nil, nil\n\t}\n\n\tif n := len(a); n > 0 {\n\t\t// There will be at least two curly brackets, N 
bytes of values,\n\t\t// and N-1 bytes of delimiters.\n\t\tb := make([]byte, 1, 1+2*n)\n\t\tb[0] = '{'\n\n\t\tb = strconv.AppendInt(b, a[0], 10)\n\t\tfor i := 1; i < n; i++ {\n\t\t\tb = append(b, ',')\n\t\t\tb = strconv.AppendInt(b, a[i], 10)\n\t\t}\n\n\t\treturn string(append(b, '}')), nil\n\t}\n\n\treturn \"{}\", nil\n}\n\n// Int32Array represents a one-dimensional array of the PostgreSQL integer types.\ntype Int32Array []int32\n\n// Scan implements the sql.Scanner interface.\nfunc (a *Int32Array) Scan(src interface{}) error {\n\tswitch src := src.(type) {\n\tcase []byte:\n\t\treturn a.scanBytes(src)\n\tcase string:\n\t\treturn a.scanBytes([]byte(src))\n\tcase nil:\n\t\t*a = nil\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"pq: cannot convert %T to Int32Array\", src)\n}\n\nfunc (a *Int32Array) scanBytes(src []byte) error {\n\telems, err := scanLinearArray(src, []byte{','}, \"Int32Array\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif *a != nil && len(elems) == 0 {\n\t\t*a = (*a)[:0]\n\t} else {\n\t\tb := make(Int32Array, len(elems))\n\t\tfor i, v := range elems {\n\t\t\tx, err := strconv.ParseInt(string(v), 10, 32)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"pq: parsing array element index %d: %v\", i, err)\n\t\t\t}\n\t\t\tb[i] = int32(x)\n\t\t}\n\t\t*a = b\n\t}\n\treturn nil\n}\n\n// Value implements the driver.Valuer interface.\nfunc (a Int32Array) Value() (driver.Value, error) {\n\tif a == nil {\n\t\treturn nil, nil\n\t}\n\n\tif n := len(a); n > 0 {\n\t\t// There will be at least two curly brackets, N bytes of values,\n\t\t// and N-1 bytes of delimiters.\n\t\tb := make([]byte, 1, 1+2*n)\n\t\tb[0] = '{'\n\n\t\tb = strconv.AppendInt(b, int64(a[0]), 10)\n\t\tfor i := 1; i < n; i++ {\n\t\t\tb = append(b, ',')\n\t\t\tb = strconv.AppendInt(b, int64(a[i]), 10)\n\t\t}\n\n\t\treturn string(append(b, '}')), nil\n\t}\n\n\treturn \"{}\", nil\n}\n\n// StringArray represents a one-dimensional array of the PostgreSQL character types.\ntype StringArray []string\n\n// 
Scan implements the sql.Scanner interface.\nfunc (a *StringArray) Scan(src interface{}) error {\n\tswitch src := src.(type) {\n\tcase []byte:\n\t\treturn a.scanBytes(src)\n\tcase string:\n\t\treturn a.scanBytes([]byte(src))\n\tcase nil:\n\t\t*a = nil\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"pq: cannot convert %T to StringArray\", src)\n}\n\nfunc (a *StringArray) scanBytes(src []byte) error {\n\telems, err := scanLinearArray(src, []byte{','}, \"StringArray\")\n\tif err != nil {\n\t\treturn err\n\t}\n\tif *a != nil && len(elems) == 0 {\n\t\t*a = (*a)[:0]\n\t} else {\n\t\tb := make(StringArray, len(elems))\n\t\tfor i, v := range elems {\n\t\t\tif b[i] = string(v); v == nil {\n\t\t\t\treturn fmt.Errorf(\"pq: parsing array element index %d: cannot convert nil to string\", i)\n\t\t\t}\n\t\t}\n\t\t*a = b\n\t}\n\treturn nil\n}\n\n// Value implements the driver.Valuer interface.\nfunc (a StringArray) Value() (driver.Value, error) {\n\tif a == nil {\n\t\treturn nil, nil\n\t}\n\n\tif n := len(a); n > 0 {\n\t\t// There will be at least two curly brackets, 2*N bytes of quotes,\n\t\t// and N-1 bytes of delimiters.\n\t\tb := make([]byte, 1, 1+3*n)\n\t\tb[0] = '{'\n\n\t\tb = appendArrayQuotedBytes(b, []byte(a[0]))\n\t\tfor i := 1; i < n; i++ {\n\t\t\tb = append(b, ',')\n\t\t\tb = appendArrayQuotedBytes(b, []byte(a[i]))\n\t\t}\n\n\t\treturn string(append(b, '}')), nil\n\t}\n\n\treturn \"{}\", nil\n}\n\n// appendArray appends rv to the buffer, returning the extended buffer and\n// the delimiter used between elements.\n//\n// It panics when n <= 0 or rv's Kind is not reflect.Array nor reflect.Slice.\nfunc appendArray(b []byte, rv reflect.Value, n int) ([]byte, string, error) {\n\tvar del string\n\tvar err error\n\n\tb = append(b, '{')\n\n\tif b, del, err = appendArrayElement(b, rv.Index(0)); err != nil {\n\t\treturn b, del, err\n\t}\n\n\tfor i := 1; i < n; i++ {\n\t\tb = append(b, del...)\n\t\tif b, del, err = appendArrayElement(b, rv.Index(i)); err != nil {\n\t\t\treturn b, 
del, err\n\t\t}\n\t}\n\n\treturn append(b, '}'), del, nil\n}\n\n// appendArrayElement appends rv to the buffer, returning the extended buffer\n// and the delimiter to use before the next element.\n//\n// When rv's Kind is neither reflect.Array nor reflect.Slice, it is converted\n// using driver.DefaultParameterConverter and the resulting []byte or string\n// is double-quoted.\n//\n// See http://www.postgresql.org/docs/current/static/arrays.html#ARRAYS-IO\nfunc appendArrayElement(b []byte, rv reflect.Value) ([]byte, string, error) {\n\tif k := rv.Kind(); k == reflect.Array || k == reflect.Slice {\n\t\tif t := rv.Type(); t != typeByteSlice && !t.Implements(typeDriverValuer) {\n\t\t\tif n := rv.Len(); n > 0 {\n\t\t\t\treturn appendArray(b, rv, n)\n\t\t\t}\n\n\t\t\treturn b, \"\", nil\n\t\t}\n\t}\n\n\tvar del = \",\"\n\tvar err error\n\tvar iv interface{} = rv.Interface()\n\n\tif ad, ok := iv.(ArrayDelimiter); ok {\n\t\tdel = ad.ArrayDelimiter()\n\t}\n\n\tif iv, err = driver.DefaultParameterConverter.ConvertValue(iv); err != nil {\n\t\treturn b, del, err\n\t}\n\n\tswitch v := iv.(type) {\n\tcase nil:\n\t\treturn append(b, \"NULL\"...), del, nil\n\tcase []byte:\n\t\treturn appendArrayQuotedBytes(b, v), del, nil\n\tcase string:\n\t\treturn appendArrayQuotedBytes(b, []byte(v)), del, nil\n\t}\n\n\tb, err = appendValue(b, iv)\n\treturn b, del, err\n}\n\nfunc appendArrayQuotedBytes(b, v []byte) []byte {\n\tb = append(b, '\"')\n\tfor {\n\t\ti := bytes.IndexAny(v, `\"\\`)\n\t\tif i < 0 {\n\t\t\tb = append(b, v...)\n\t\t\tbreak\n\t\t}\n\t\tif i > 0 {\n\t\t\tb = append(b, v[:i]...)\n\t\t}\n\t\tb = append(b, '\\\\', v[i])\n\t\tv = v[i+1:]\n\t}\n\treturn append(b, '\"')\n}\n\nfunc appendValue(b []byte, v driver.Value) ([]byte, error) {\n\treturn append(b, encode(nil, v, 0)...), nil\n}\n\n// parseArray extracts the dimensions and elements of an array represented in\n// text format. 
Only representations emitted by the backend are supported.\n// Notably, whitespace around brackets and delimiters is significant, and NULL\n// is case-sensitive.\n//\n// See http://www.postgresql.org/docs/current/static/arrays.html#ARRAYS-IO\nfunc parseArray(src, del []byte) (dims []int, elems [][]byte, err error) {\n\tvar depth, i int\n\n\tif len(src) < 1 || src[0] != '{' {\n\t\treturn nil, nil, fmt.Errorf(\"pq: unable to parse array; expected %q at offset %d\", '{', 0)\n\t}\n\nOpen:\n\tfor i < len(src) {\n\t\tswitch src[i] {\n\t\tcase '{':\n\t\t\tdepth++\n\t\t\ti++\n\t\tcase '}':\n\t\t\telems = make([][]byte, 0)\n\t\t\tgoto Close\n\t\tdefault:\n\t\t\tbreak Open\n\t\t}\n\t}\n\tdims = make([]int, i)\n\nElement:\n\tfor i < len(src) {\n\t\tswitch src[i] {\n\t\tcase '{':\n\t\t\tif depth == len(dims) {\n\t\t\t\tbreak Element\n\t\t\t}\n\t\t\tdepth++\n\t\t\tdims[depth-1] = 0\n\t\t\ti++\n\t\tcase '\"':\n\t\t\tvar elem = []byte{}\n\t\t\tvar escape bool\n\t\t\tfor i++; i < len(src); i++ {\n\t\t\t\tif escape {\n\t\t\t\t\telem = append(elem, src[i])\n\t\t\t\t\tescape = false\n\t\t\t\t} else {\n\t\t\t\t\tswitch src[i] {\n\t\t\t\t\tdefault:\n\t\t\t\t\t\telem = append(elem, src[i])\n\t\t\t\t\tcase '\\\\':\n\t\t\t\t\t\tescape = true\n\t\t\t\t\tcase '\"':\n\t\t\t\t\t\telems = append(elems, elem)\n\t\t\t\t\t\ti++\n\t\t\t\t\t\tbreak Element\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\tdefault:\n\t\t\tfor start := i; i < len(src); i++ {\n\t\t\t\tif bytes.HasPrefix(src[i:], del) || src[i] == '}' {\n\t\t\t\t\telem := src[start:i]\n\t\t\t\t\tif len(elem) == 0 {\n\t\t\t\t\t\treturn nil, nil, fmt.Errorf(\"pq: unable to parse array; unexpected %q at offset %d\", src[i], i)\n\t\t\t\t\t}\n\t\t\t\t\tif bytes.Equal(elem, []byte(\"NULL\")) {\n\t\t\t\t\t\telem = nil\n\t\t\t\t\t}\n\t\t\t\t\telems = append(elems, elem)\n\t\t\t\t\tbreak Element\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tfor i < len(src) {\n\t\tif bytes.HasPrefix(src[i:], del) && depth > 0 {\n\t\t\tdims[depth-1]++\n\t\t\ti += 
len(del)\n\t\t\tgoto Element\n\t\t} else if src[i] == '}' && depth > 0 {\n\t\t\tdims[depth-1]++\n\t\t\tdepth--\n\t\t\ti++\n\t\t} else {\n\t\t\treturn nil, nil, fmt.Errorf(\"pq: unable to parse array; unexpected %q at offset %d\", src[i], i)\n\t\t}\n\t}\n\nClose:\n\tfor i < len(src) {\n\t\tif src[i] == '}' && depth > 0 {\n\t\t\tdepth--\n\t\t\ti++\n\t\t} else {\n\t\t\treturn nil, nil, fmt.Errorf(\"pq: unable to parse array; unexpected %q at offset %d\", src[i], i)\n\t\t}\n\t}\n\tif depth > 0 {\n\t\terr = fmt.Errorf(\"pq: unable to parse array; expected %q at offset %d\", '}', i)\n\t}\n\tif err == nil {\n\t\tfor _, d := range dims {\n\t\t\tif (len(elems) % d) != 0 {\n\t\t\t\terr = fmt.Errorf(\"pq: multidimensional arrays must have elements with matching dimensions\")\n\t\t\t}\n\t\t}\n\t}\n\treturn\n}\n\nfunc scanLinearArray(src, del []byte, typ string) (elems [][]byte, err error) {\n\tdims, elems, err := parseArray(src, del)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif len(dims) > 1 {\n\t\treturn nil, fmt.Errorf(\"pq: cannot convert ARRAY%s to %s\", strings.Replace(fmt.Sprint(dims), \" \", \"][\", -1), typ)\n\t}\n\treturn elems, err\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/buf.go",
    "content": "package pq\n\nimport (\n\t\"bytes\"\n\t\"encoding/binary\"\n\n\t\"github.com/lib/pq/oid\"\n)\n\ntype readBuf []byte\n\nfunc (b *readBuf) int32() (n int) {\n\tn = int(int32(binary.BigEndian.Uint32(*b)))\n\t*b = (*b)[4:]\n\treturn\n}\n\nfunc (b *readBuf) oid() (n oid.Oid) {\n\tn = oid.Oid(binary.BigEndian.Uint32(*b))\n\t*b = (*b)[4:]\n\treturn\n}\n\n// N.B: this is actually an unsigned 16-bit integer, unlike int32\nfunc (b *readBuf) int16() (n int) {\n\tn = int(binary.BigEndian.Uint16(*b))\n\t*b = (*b)[2:]\n\treturn\n}\n\nfunc (b *readBuf) string() string {\n\ti := bytes.IndexByte(*b, 0)\n\tif i < 0 {\n\t\terrorf(\"invalid message format; expected string terminator\")\n\t}\n\ts := (*b)[:i]\n\t*b = (*b)[i+1:]\n\treturn string(s)\n}\n\nfunc (b *readBuf) next(n int) (v []byte) {\n\tv = (*b)[:n]\n\t*b = (*b)[n:]\n\treturn\n}\n\nfunc (b *readBuf) byte() byte {\n\treturn b.next(1)[0]\n}\n\ntype writeBuf struct {\n\tbuf []byte\n\tpos int\n}\n\nfunc (b *writeBuf) int32(n int) {\n\tx := make([]byte, 4)\n\tbinary.BigEndian.PutUint32(x, uint32(n))\n\tb.buf = append(b.buf, x...)\n}\n\nfunc (b *writeBuf) int16(n int) {\n\tx := make([]byte, 2)\n\tbinary.BigEndian.PutUint16(x, uint16(n))\n\tb.buf = append(b.buf, x...)\n}\n\nfunc (b *writeBuf) string(s string) {\n\tb.buf = append(append(b.buf, s...), '\\000')\n}\n\nfunc (b *writeBuf) byte(c byte) {\n\tb.buf = append(b.buf, c)\n}\n\nfunc (b *writeBuf) bytes(v []byte) {\n\tb.buf = append(b.buf, v...)\n}\n\nfunc (b *writeBuf) wrap() []byte {\n\tp := b.buf[b.pos:]\n\tbinary.BigEndian.PutUint32(p, uint32(len(p)))\n\treturn b.buf\n}\n\nfunc (b *writeBuf) next(c byte) {\n\tp := b.buf[b.pos:]\n\tbinary.BigEndian.PutUint32(p, uint32(len(p)))\n\tb.pos = len(b.buf) + 1\n\tb.buf = append(b.buf, c, 0, 0, 0, 0)\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/conn.go",
    "content": "package pq\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/md5\"\n\t\"crypto/sha256\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"encoding/binary\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"os\"\n\t\"os/user\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\t\"unicode\"\n\n\t\"github.com/lib/pq/oid\"\n\t\"github.com/lib/pq/scram\"\n)\n\n// Common error types\nvar (\n\tErrNotSupported              = errors.New(\"pq: Unsupported command\")\n\tErrInFailedTransaction       = errors.New(\"pq: Could not complete operation in a failed transaction\")\n\tErrSSLNotSupported           = errors.New(\"pq: SSL is not enabled on the server\")\n\tErrSSLKeyUnknownOwnership    = errors.New(\"pq: Could not get owner information for private key, may not be properly protected\")\n\tErrSSLKeyHasWorldPermissions = errors.New(\"pq: Private key has world access. Permissions should be u=rw,g=r (0640) if owned by root, or u=rw (0600), or less\")\n\n\tErrCouldNotDetectUsername = errors.New(\"pq: Could not detect default username. Please provide one explicitly\")\n\n\terrUnexpectedReady = errors.New(\"unexpected ReadyForQuery\")\n\terrNoRowsAffected  = errors.New(\"no RowsAffected available after the empty statement\")\n\terrNoLastInsertID  = errors.New(\"no LastInsertId available after the empty statement\")\n)\n\n// Compile time validation that our types implement the expected interfaces\nvar (\n\t_ driver.Driver = Driver{}\n)\n\n// Driver is the Postgres database driver.\ntype Driver struct{}\n\n// Open opens a new connection to the database. 
name is a connection string.\n// Most users should only use it through database/sql package from the standard\n// library.\nfunc (d Driver) Open(name string) (driver.Conn, error) {\n\treturn Open(name)\n}\n\nfunc init() {\n\tsql.Register(\"postgres\", &Driver{})\n}\n\ntype parameterStatus struct {\n\t// server version in the same format as server_version_num, or 0 if\n\t// unavailable\n\tserverVersion int\n\n\t// the current location based on the TimeZone value of the session, if\n\t// available\n\tcurrentLocation *time.Location\n}\n\ntype transactionStatus byte\n\nconst (\n\ttxnStatusIdle                transactionStatus = 'I'\n\ttxnStatusIdleInTransaction   transactionStatus = 'T'\n\ttxnStatusInFailedTransaction transactionStatus = 'E'\n)\n\nfunc (s transactionStatus) String() string {\n\tswitch s {\n\tcase txnStatusIdle:\n\t\treturn \"idle\"\n\tcase txnStatusIdleInTransaction:\n\t\treturn \"idle in transaction\"\n\tcase txnStatusInFailedTransaction:\n\t\treturn \"in a failed transaction\"\n\tdefault:\n\t\terrorf(\"unknown transactionStatus %d\", s)\n\t}\n\n\tpanic(\"not reached\")\n}\n\n// Dialer is the dialer interface. 
It can be used to obtain more control over\n// how pq creates network connections.\ntype Dialer interface {\n\tDial(network, address string) (net.Conn, error)\n\tDialTimeout(network, address string, timeout time.Duration) (net.Conn, error)\n}\n\n// DialerContext is the context-aware dialer interface.\ntype DialerContext interface {\n\tDialContext(ctx context.Context, network, address string) (net.Conn, error)\n}\n\ntype defaultDialer struct {\n\td net.Dialer\n}\n\nfunc (d defaultDialer) Dial(network, address string) (net.Conn, error) {\n\treturn d.d.Dial(network, address)\n}\nfunc (d defaultDialer) DialTimeout(\n\tnetwork, address string, timeout time.Duration,\n) (net.Conn, error) {\n\tctx, cancel := context.WithTimeout(context.Background(), timeout)\n\tdefer cancel()\n\treturn d.DialContext(ctx, network, address)\n}\nfunc (d defaultDialer) DialContext(ctx context.Context, network, address string) (net.Conn, error) {\n\treturn d.d.DialContext(ctx, network, address)\n}\n\ntype conn struct {\n\tc         net.Conn\n\tbuf       *bufio.Reader\n\tnamei     int\n\tscratch   [512]byte\n\ttxnStatus transactionStatus\n\ttxnFinish func()\n\n\t// Save connection arguments to use during CancelRequest.\n\tdialer Dialer\n\topts   values\n\n\t// Cancellation key data for use with CancelRequest messages.\n\tprocessID int\n\tsecretKey int\n\n\tparameterStatus parameterStatus\n\n\tsaveMessageType   byte\n\tsaveMessageBuffer []byte\n\n\t// If an error is set, this connection is bad and all public-facing\n\t// functions should return the appropriate error by calling get()\n\t// (ErrBadConn) or getForNext().\n\terr syncErr\n\n\t// If set, this connection should never use the binary format when\n\t// receiving query results from prepared statements.  Only provided for\n\t// debugging.\n\tdisablePreparedBinaryResult bool\n\n\t// Whether to always send []byte parameters over as binary.  
Enables single\n\t// round-trip mode for non-prepared Query calls.\n\tbinaryParameters bool\n\n\t// If true this connection is in the middle of a COPY\n\tinCopy bool\n\n\t// If not nil, notices will be synchronously sent here\n\tnoticeHandler func(*Error)\n\n\t// If not nil, notifications will be synchronously sent here\n\tnotificationHandler func(*Notification)\n\n\t// GSSAPI context\n\tgss GSS\n}\n\ntype syncErr struct {\n\terr error\n\tsync.Mutex\n}\n\n// Return ErrBadConn if connection is bad.\nfunc (e *syncErr) get() error {\n\te.Lock()\n\tdefer e.Unlock()\n\tif e.err != nil {\n\t\treturn driver.ErrBadConn\n\t}\n\treturn nil\n}\n\n// Return the error set on the connection. Currently only used by rows.Next.\nfunc (e *syncErr) getForNext() error {\n\te.Lock()\n\tdefer e.Unlock()\n\treturn e.err\n}\n\n// Set error, only if it isn't set yet.\nfunc (e *syncErr) set(err error) {\n\tif err == nil {\n\t\tpanic(\"attempt to set nil err\")\n\t}\n\te.Lock()\n\tdefer e.Unlock()\n\tif e.err == nil {\n\t\te.err = err\n\t}\n}\n\n// Handle driver-side settings in parsed connection string.\nfunc (cn *conn) handleDriverSettings(o values) (err error) {\n\tboolSetting := func(key string, val *bool) error {\n\t\tif value, ok := o[key]; ok {\n\t\t\tif value == \"yes\" {\n\t\t\t\t*val = true\n\t\t\t} else if value == \"no\" {\n\t\t\t\t*val = false\n\t\t\t} else {\n\t\t\t\treturn fmt.Errorf(\"unrecognized value %q for %s\", value, key)\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}\n\n\terr = boolSetting(\"disable_prepared_binary_result\", &cn.disablePreparedBinaryResult)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn boolSetting(\"binary_parameters\", &cn.binaryParameters)\n}\n\nfunc (cn *conn) handlePgpass(o values) {\n\t// if a password was supplied, do not process .pgpass\n\tif _, ok := o[\"password\"]; ok {\n\t\treturn\n\t}\n\tfilename := os.Getenv(\"PGPASSFILE\")\n\tif filename == \"\" {\n\t\t// XXX this code doesn't work on Windows where the default filename is\n\t\t// XXX 
%APPDATA%\\postgresql\\pgpass.conf\n\t\t// Prefer $HOME over user.Current due to glibc bug: golang.org/issue/13470\n\t\tuserHome := os.Getenv(\"HOME\")\n\t\tif userHome == \"\" {\n\t\t\tuser, err := user.Current()\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tuserHome = user.HomeDir\n\t\t}\n\t\tfilename = filepath.Join(userHome, \".pgpass\")\n\t}\n\tfileinfo, err := os.Stat(filename)\n\tif err != nil {\n\t\treturn\n\t}\n\tmode := fileinfo.Mode()\n\t// Like psql, ignore the file unless it denies all access to group and\n\t// others (i.e. its permissions are 0600 or stricter).\n\tif mode&0077 != 0 {\n\t\t// XXX should warn about incorrect .pgpass permissions as psql does\n\t\treturn\n\t}\n\tfile, err := os.Open(filename)\n\tif err != nil {\n\t\treturn\n\t}\n\tdefer file.Close()\n\tscanner := bufio.NewScanner(file)\n\t// From: https://github.com/tg/pgpass/blob/master/reader.go\n\tfor scanner.Scan() {\n\t\tif scanText(scanner.Text(), o) {\n\t\t\tbreak\n\t\t}\n\t}\n}\n\n// getFields splits a .pgpass line on unescaped ':' separators, honoring\n// backslash escapes. It is a helper for scanText.\nfunc getFields(s string) []string {\n\tfs := make([]string, 0, 5)\n\tf := make([]rune, 0, len(s))\n\n\tvar esc bool\n\tfor _, c := range s {\n\t\tswitch {\n\t\tcase esc:\n\t\t\tf = append(f, c)\n\t\t\tesc = false\n\t\tcase c == '\\\\':\n\t\t\tesc = true\n\t\tcase c == ':':\n\t\t\tfs = append(fs, string(f))\n\t\t\tf = f[:0]\n\t\tdefault:\n\t\t\tf = append(f, c)\n\t\t}\n\t}\n\treturn append(fs, string(f))\n}\n\n// scanText checks a single .pgpass line against the connection parameters in o\n// and, on a match, copies the password into o. It reports whether a match was\n// found.\nfunc scanText(line string, o values) bool {\n\thostname := o[\"host\"]\n\tntw, _ := network(o)\n\tport := o[\"port\"]\n\tdb := o[\"dbname\"]\n\tusername := o[\"user\"]\n\tif len(line) == 0 || line[0] == '#' {\n\t\treturn false\n\t}\n\tsplit := getFields(line)\n\tif len(split) != 5 {\n\t\treturn false\n\t}\n\tif (split[0] == \"*\" || split[0] == hostname ||\n\t\t(split[0] == \"localhost\" && (hostname == \"\" || ntw == \"unix\"))) &&\n\t\t(split[1] == \"*\" || split[1] == port) &&\n\t\t(split[2] == \"*\" || split[2] == db) &&\n\t\t(split[3] == \"*\" || split[3] == username) {\n\t\to[\"password\"] = split[4]\n\t\treturn 
true\n\t}\n\treturn false\n}\n\nfunc (cn *conn) writeBuf(b byte) *writeBuf {\n\tcn.scratch[0] = b\n\treturn &writeBuf{\n\t\tbuf: cn.scratch[:5],\n\t\tpos: 1,\n\t}\n}\n\n// Open opens a new connection to the database. dsn is a connection string.\n// Most users should only use it through the database/sql package from the\n// standard library.\nfunc Open(dsn string) (_ driver.Conn, err error) {\n\treturn DialOpen(defaultDialer{}, dsn)\n}\n\n// DialOpen opens a new connection to the database using a dialer.\nfunc DialOpen(d Dialer, dsn string) (_ driver.Conn, err error) {\n\tc, err := NewConnector(dsn)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tc.Dialer(d)\n\treturn c.open(context.Background())\n}\n\nfunc (c *Connector) open(ctx context.Context) (cn *conn, err error) {\n\t// Handle any panics during connection initialization.  Note that we\n\t// specifically do *not* want to use errRecover(), as that would turn any\n\t// connection errors into ErrBadConns, hiding the real error message from\n\t// the user.\n\tdefer errRecoverNoErrBadConn(&err)\n\n\t// Create a new values map (copy). This makes it so maps in different\n\t// connections do not reference the same underlying data structure, so it\n\t// is safe for multiple connections to concurrently write to their opts.\n\to := make(values)\n\tfor k, v := range c.opts {\n\t\to[k] = v\n\t}\n\n\tcn = &conn{\n\t\topts:   o,\n\t\tdialer: c.dialer,\n\t}\n\terr = cn.handleDriverSettings(o)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcn.handlePgpass(o)\n\n\tcn.c, err = dial(ctx, c.dialer, o)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\terr = cn.ssl(o)\n\tif err != nil {\n\t\tif cn.c != nil {\n\t\t\tcn.c.Close()\n\t\t}\n\t\treturn nil, err\n\t}\n\n\t// cn.startup panics on error. 
Make sure we don't leak cn.c.\n\tpanicking := true\n\tdefer func() {\n\t\tif panicking {\n\t\t\tcn.c.Close()\n\t\t}\n\t}()\n\n\tcn.buf = bufio.NewReader(cn.c)\n\tcn.startup(o)\n\n\t// reset the deadline, in case one was set (see dial)\n\tif timeout, ok := o[\"connect_timeout\"]; ok && timeout != \"0\" {\n\t\terr = cn.c.SetDeadline(time.Time{})\n\t}\n\tpanicking = false\n\treturn cn, err\n}\n\nfunc dial(ctx context.Context, d Dialer, o values) (net.Conn, error) {\n\tnetwork, address := network(o)\n\n\t// Zero or not specified means wait indefinitely.\n\tif timeout, ok := o[\"connect_timeout\"]; ok && timeout != \"0\" {\n\t\tseconds, err := strconv.ParseInt(timeout, 10, 0)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"invalid value for parameter connect_timeout: %s\", err)\n\t\t}\n\t\tduration := time.Duration(seconds) * time.Second\n\n\t\t// connect_timeout should apply to the entire connection establishment\n\t\t// procedure, so we both use a timeout for the TCP connection\n\t\t// establishment and set a deadline for doing the initial handshake.\n\t\t// The deadline is then reset after startup() is done.\n\t\tdeadline := time.Now().Add(duration)\n\t\tvar conn net.Conn\n\t\tif dctx, ok := d.(DialerContext); ok {\n\t\t\tctx, cancel := context.WithTimeout(ctx, duration)\n\t\t\tdefer cancel()\n\t\t\tconn, err = dctx.DialContext(ctx, network, address)\n\t\t} else {\n\t\t\tconn, err = d.DialTimeout(network, address, duration)\n\t\t}\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\terr = conn.SetDeadline(deadline)\n\t\treturn conn, err\n\t}\n\tif dctx, ok := d.(DialerContext); ok {\n\t\treturn dctx.DialContext(ctx, network, address)\n\t}\n\treturn d.Dial(network, address)\n}\n\nfunc network(o values) (string, string) {\n\thost := o[\"host\"]\n\n\tif strings.HasPrefix(host, \"/\") {\n\t\tsockPath := path.Join(host, \".s.PGSQL.\"+o[\"port\"])\n\t\treturn \"unix\", sockPath\n\t}\n\n\treturn \"tcp\", net.JoinHostPort(host, o[\"port\"])\n}\n\ntype values 
map[string]string\n\n// scanner implements a tokenizer for libpq-style option strings.\ntype scanner struct {\n\ts []rune\n\ti int\n}\n\n// newScanner returns a new scanner initialized with the option string s.\nfunc newScanner(s string) *scanner {\n\treturn &scanner{[]rune(s), 0}\n}\n\n// Next returns the next rune.\n// It returns 0, false if the end of the text has been reached.\nfunc (s *scanner) Next() (rune, bool) {\n\tif s.i >= len(s.s) {\n\t\treturn 0, false\n\t}\n\tr := s.s[s.i]\n\ts.i++\n\treturn r, true\n}\n\n// SkipSpaces returns the next non-whitespace rune.\n// It returns 0, false if the end of the text has been reached.\nfunc (s *scanner) SkipSpaces() (rune, bool) {\n\tr, ok := s.Next()\n\tfor ok && unicode.IsSpace(r) {\n\t\tr, ok = s.Next()\n\t}\n\treturn r, ok\n}\n\n// parseOpts parses the options from name and adds them to the values.\n//\n// The parsing code is based on conninfo_parse from libpq's fe-connect.c\nfunc parseOpts(name string, o values) error {\n\ts := newScanner(name)\n\n\tfor {\n\t\tvar (\n\t\t\tkeyRunes, valRunes []rune\n\t\t\tr                  rune\n\t\t\tok                 bool\n\t\t)\n\n\t\tif r, ok = s.SkipSpaces(); !ok {\n\t\t\tbreak\n\t\t}\n\n\t\t// Scan the key\n\t\tfor !unicode.IsSpace(r) && r != '=' {\n\t\t\tkeyRunes = append(keyRunes, r)\n\t\t\tif r, ok = s.Next(); !ok {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\t// Skip any whitespace if we're not at the = yet\n\t\tif r != '=' {\n\t\t\tr, ok = s.SkipSpaces()\n\t\t}\n\n\t\t// The current character should be =\n\t\tif r != '=' || !ok {\n\t\t\treturn fmt.Errorf(`missing \"=\" after %q in connection info string`, string(keyRunes))\n\t\t}\n\n\t\t// Skip any whitespace after the =\n\t\tif r, ok = s.SkipSpaces(); !ok {\n\t\t\t// If we reach the end here, the last value is just an empty string as per libpq.\n\t\t\to[string(keyRunes)] = \"\"\n\t\t\tbreak\n\t\t}\n\n\t\tif r != '\\'' {\n\t\t\tfor !unicode.IsSpace(r) {\n\t\t\t\tif r == '\\\\' {\n\t\t\t\t\tif r, ok = s.Next(); !ok 
{\n\t\t\t\t\t\treturn fmt.Errorf(`missing character after backslash`)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tvalRunes = append(valRunes, r)\n\n\t\t\t\tif r, ok = s.Next(); !ok {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\tquote:\n\t\t\tfor {\n\t\t\t\tif r, ok = s.Next(); !ok {\n\t\t\t\t\treturn fmt.Errorf(`unterminated quoted string literal in connection string`)\n\t\t\t\t}\n\t\t\t\tswitch r {\n\t\t\t\tcase '\\'':\n\t\t\t\t\tbreak quote\n\t\t\t\tcase '\\\\':\n\t\t\t\t\tr, _ = s.Next()\n\t\t\t\t\tfallthrough\n\t\t\t\tdefault:\n\t\t\t\t\tvalRunes = append(valRunes, r)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\to[string(keyRunes)] = string(valRunes)\n\t}\n\n\treturn nil\n}\n\nfunc (cn *conn) isInTransaction() bool {\n\treturn cn.txnStatus == txnStatusIdleInTransaction ||\n\t\tcn.txnStatus == txnStatusInFailedTransaction\n}\n\nfunc (cn *conn) checkIsInTransaction(intxn bool) {\n\tif cn.isInTransaction() != intxn {\n\t\tcn.err.set(driver.ErrBadConn)\n\t\terrorf(\"unexpected transaction status %v\", cn.txnStatus)\n\t}\n}\n\nfunc (cn *conn) Begin() (_ driver.Tx, err error) {\n\treturn cn.begin(\"\")\n}\n\nfunc (cn *conn) begin(mode string) (_ driver.Tx, err error) {\n\tif err := cn.err.get(); err != nil {\n\t\treturn nil, err\n\t}\n\tdefer cn.errRecover(&err)\n\n\tcn.checkIsInTransaction(false)\n\t_, commandTag, err := cn.simpleExec(\"BEGIN\" + mode)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif commandTag != \"BEGIN\" {\n\t\tcn.err.set(driver.ErrBadConn)\n\t\treturn nil, fmt.Errorf(\"unexpected command tag %s\", commandTag)\n\t}\n\tif cn.txnStatus != txnStatusIdleInTransaction {\n\t\tcn.err.set(driver.ErrBadConn)\n\t\treturn nil, fmt.Errorf(\"unexpected transaction status %v\", cn.txnStatus)\n\t}\n\treturn cn, nil\n}\n\nfunc (cn *conn) closeTxn() {\n\tif finish := cn.txnFinish; finish != nil {\n\t\tfinish()\n\t}\n}\n\nfunc (cn *conn) Commit() (err error) {\n\tdefer cn.closeTxn()\n\tif err := cn.err.get(); err != nil {\n\t\treturn err\n\t}\n\tdefer 
cn.errRecover(&err)\n\n\tcn.checkIsInTransaction(true)\n\t// We don't want the client to think that everything is okay if it tries\n\t// to commit a failed transaction.  However, no matter what we return,\n\t// database/sql will release this connection back into the free connection\n\t// pool so we have to abort the current transaction here.  Note that you\n\t// would get the same behaviour if you issued a COMMIT in a failed\n\t// transaction, so it's also the least surprising thing to do here.\n\tif cn.txnStatus == txnStatusInFailedTransaction {\n\t\tif err := cn.rollback(); err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn ErrInFailedTransaction\n\t}\n\n\t_, commandTag, err := cn.simpleExec(\"COMMIT\")\n\tif err != nil {\n\t\tif cn.isInTransaction() {\n\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t}\n\t\treturn err\n\t}\n\tif commandTag != \"COMMIT\" {\n\t\tcn.err.set(driver.ErrBadConn)\n\t\treturn fmt.Errorf(\"unexpected command tag %s\", commandTag)\n\t}\n\tcn.checkIsInTransaction(false)\n\treturn nil\n}\n\nfunc (cn *conn) Rollback() (err error) {\n\tdefer cn.closeTxn()\n\tif err := cn.err.get(); err != nil {\n\t\treturn err\n\t}\n\tdefer cn.errRecover(&err)\n\treturn cn.rollback()\n}\n\nfunc (cn *conn) rollback() (err error) {\n\tcn.checkIsInTransaction(true)\n\t_, commandTag, err := cn.simpleExec(\"ROLLBACK\")\n\tif err != nil {\n\t\tif cn.isInTransaction() {\n\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t}\n\t\treturn err\n\t}\n\tif commandTag != \"ROLLBACK\" {\n\t\treturn fmt.Errorf(\"unexpected command tag %s\", commandTag)\n\t}\n\tcn.checkIsInTransaction(false)\n\treturn nil\n}\n\nfunc (cn *conn) gname() string {\n\tcn.namei++\n\treturn strconv.FormatInt(int64(cn.namei), 10)\n}\n\nfunc (cn *conn) simpleExec(q string) (res driver.Result, commandTag string, err error) {\n\tb := cn.writeBuf('Q')\n\tb.string(q)\n\tcn.send(b)\n\n\tfor {\n\t\tt, r := cn.recv1()\n\t\tswitch t {\n\t\tcase 'C':\n\t\t\tres, commandTag = cn.parseComplete(r.string())\n\t\tcase 
'Z':\n\t\t\tcn.processReadyForQuery(r)\n\t\t\tif res == nil && err == nil {\n\t\t\t\terr = errUnexpectedReady\n\t\t\t}\n\t\t\t// done\n\t\t\treturn\n\t\tcase 'E':\n\t\t\terr = parseError(r)\n\t\tcase 'I':\n\t\t\tres = emptyRows\n\t\tcase 'T', 'D':\n\t\t\t// ignore any results\n\t\tdefault:\n\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\terrorf(\"unknown response for simple query: %q\", t)\n\t\t}\n\t}\n}\n\nfunc (cn *conn) simpleQuery(q string) (res *rows, err error) {\n\tdefer cn.errRecover(&err)\n\n\tb := cn.writeBuf('Q')\n\tb.string(q)\n\tcn.send(b)\n\n\tfor {\n\t\tt, r := cn.recv1()\n\t\tswitch t {\n\t\tcase 'C', 'I':\n\t\t\t// We allow queries which don't return any results through Query as\n\t\t\t// well as Exec.  We still have to give database/sql a rows object\n\t\t\t// the user can close, though, to avoid connections from being\n\t\t\t// leaked.  A \"rows\" with done=true works fine for that purpose.\n\t\t\tif err != nil {\n\t\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\t\terrorf(\"unexpected message %q in simple query execution\", t)\n\t\t\t}\n\t\t\tif res == nil {\n\t\t\t\tres = &rows{\n\t\t\t\t\tcn: cn,\n\t\t\t\t}\n\t\t\t}\n\t\t\t// Set the result and tag to the last command complete if there wasn't a\n\t\t\t// query already run. 
Although queries usually return from here and cede\n\t\t\t// control to Next, a query with zero results does not.\n\t\t\tif t == 'C' {\n\t\t\t\tres.result, res.tag = cn.parseComplete(r.string())\n\t\t\t\tif res.colNames != nil {\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t\tres.done = true\n\t\tcase 'Z':\n\t\t\tcn.processReadyForQuery(r)\n\t\t\t// done\n\t\t\treturn\n\t\tcase 'E':\n\t\t\tres = nil\n\t\t\terr = parseError(r)\n\t\tcase 'D':\n\t\t\tif res == nil {\n\t\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\t\terrorf(\"unexpected DataRow in simple query execution\")\n\t\t\t}\n\t\t\t// the query didn't fail; kick off to Next\n\t\t\tcn.saveMessage(t, r)\n\t\t\treturn\n\t\tcase 'T':\n\t\t\t// res might be non-nil here if we received a previous\n\t\t\t// CommandComplete, but that's fine; just overwrite it\n\t\t\tres = &rows{cn: cn}\n\t\t\tres.rowsHeader = parsePortalRowDescribe(r)\n\n\t\t\t// To work around a bug in QueryRow in Go 1.2 and earlier, wait\n\t\t\t// until the first DataRow has been received.\n\t\tdefault:\n\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\terrorf(\"unknown response for simple query: %q\", t)\n\t\t}\n\t}\n}\n\ntype noRows struct{}\n\nvar emptyRows noRows\n\nvar _ driver.Result = noRows{}\n\nfunc (noRows) LastInsertId() (int64, error) {\n\treturn 0, errNoLastInsertID\n}\n\nfunc (noRows) RowsAffected() (int64, error) {\n\treturn 0, errNoRowsAffected\n}\n\n// Decides which column formats to use for a prepared statement.  
The input is\n// an array of type oids, one element per result column.\nfunc decideColumnFormats(\n\tcolTyps []fieldDesc, forceText bool,\n) (colFmts []format, colFmtData []byte) {\n\tif len(colTyps) == 0 {\n\t\treturn nil, colFmtDataAllText\n\t}\n\n\tcolFmts = make([]format, len(colTyps))\n\tif forceText {\n\t\treturn colFmts, colFmtDataAllText\n\t}\n\n\tallBinary := true\n\tallText := true\n\tfor i, t := range colTyps {\n\t\tswitch t.OID {\n\t\t// This is the list of types to use binary mode for when receiving them\n\t\t// through a prepared statement.  If a type appears in this list, it\n\t\t// must also be implemented in binaryDecode in encode.go.\n\t\tcase oid.T_bytea, oid.T_int8, oid.T_int4, oid.T_int2, oid.T_uuid:\n\t\t\tcolFmts[i] = formatBinary\n\t\t\tallText = false\n\n\t\tdefault:\n\t\t\tallBinary = false\n\t\t}\n\t}\n\n\tif allBinary {\n\t\treturn colFmts, colFmtDataAllBinary\n\t} else if allText {\n\t\treturn colFmts, colFmtDataAllText\n\t} else {\n\t\tcolFmtData = make([]byte, 2+len(colFmts)*2)\n\t\tbinary.BigEndian.PutUint16(colFmtData, uint16(len(colFmts)))\n\t\tfor i, v := range colFmts {\n\t\t\tbinary.BigEndian.PutUint16(colFmtData[2+i*2:], uint16(v))\n\t\t}\n\t\treturn colFmts, colFmtData\n\t}\n}\n\nfunc (cn *conn) prepareTo(q, stmtName string) *stmt {\n\tst := &stmt{cn: cn, name: stmtName}\n\n\tb := cn.writeBuf('P')\n\tb.string(st.name)\n\tb.string(q)\n\tb.int16(0)\n\n\tb.next('D')\n\tb.byte('S')\n\tb.string(st.name)\n\n\tb.next('S')\n\tcn.send(b)\n\n\tcn.readParseResponse()\n\tst.paramTyps, st.colNames, st.colTyps = cn.readStatementDescribeResponse()\n\tst.colFmts, st.colFmtData = decideColumnFormats(st.colTyps, cn.disablePreparedBinaryResult)\n\tcn.readReadyForQuery()\n\treturn st\n}\n\nfunc (cn *conn) Prepare(q string) (_ driver.Stmt, err error) {\n\tif err := cn.err.get(); err != nil {\n\t\treturn nil, err\n\t}\n\tdefer 
cn.errRecover(&err)\n\n\tif len(q) >= 4 && strings.EqualFold(q[:4], \"COPY\") {\n\t\ts, err := cn.prepareCopyIn(q)\n\t\tif err == nil {\n\t\t\tcn.inCopy = true\n\t\t}\n\t\treturn s, err\n\t}\n\treturn cn.prepareTo(q, cn.gname()), nil\n}\n\nfunc (cn *conn) Close() (err error) {\n\t// Skip cn.bad return here because we always want to close a connection.\n\tdefer cn.errRecover(&err)\n\n\t// Ensure that cn.c.Close is always run. Since error handling is done with\n\t// panics and cn.errRecover, the Close must be in a defer.\n\tdefer func() {\n\t\tcerr := cn.c.Close()\n\t\tif err == nil {\n\t\t\terr = cerr\n\t\t}\n\t}()\n\n\t// Don't go through send(); ListenerConn relies on us not scribbling on the\n\t// scratch buffer of this connection.\n\treturn cn.sendSimpleMessage('X')\n}\n\n// Implement the \"Queryer\" interface\nfunc (cn *conn) Query(query string, args []driver.Value) (driver.Rows, error) {\n\treturn cn.query(query, args)\n}\n\nfunc (cn *conn) query(query string, args []driver.Value) (_ *rows, err error) {\n\tif err := cn.err.get(); err != nil {\n\t\treturn nil, err\n\t}\n\tif cn.inCopy {\n\t\treturn nil, errCopyInProgress\n\t}\n\tdefer cn.errRecover(&err)\n\n\t// Check to see if we can use the \"simpleQuery\" interface, which is\n\t// *much* faster than going through prepare/exec\n\tif len(args) == 0 {\n\t\treturn cn.simpleQuery(query)\n\t}\n\n\tif cn.binaryParameters {\n\t\tcn.sendBinaryModeQuery(query, args)\n\n\t\tcn.readParseResponse()\n\t\tcn.readBindResponse()\n\t\trows := &rows{cn: cn}\n\t\trows.rowsHeader = cn.readPortalDescribeResponse()\n\t\tcn.postExecuteWorkaround()\n\t\treturn rows, nil\n\t}\n\tst := cn.prepareTo(query, \"\")\n\tst.exec(args)\n\treturn &rows{\n\t\tcn:         cn,\n\t\trowsHeader: st.rowsHeader,\n\t}, nil\n}\n\n// Implement the optional \"Execer\" interface for one-shot queries\nfunc (cn *conn) Exec(query string, args []driver.Value) (res driver.Result, err error) {\n\tif err := cn.err.get(); err != nil {\n\t\treturn nil, 
err\n\t}\n\tdefer cn.errRecover(&err)\n\n\t// Check to see if we can use the \"simpleExec\" interface, which is\n\t// *much* faster than going through prepare/exec\n\tif len(args) == 0 {\n\t\t// ignore commandTag, our caller doesn't care\n\t\tr, _, err := cn.simpleExec(query)\n\t\treturn r, err\n\t}\n\n\tif cn.binaryParameters {\n\t\tcn.sendBinaryModeQuery(query, args)\n\n\t\tcn.readParseResponse()\n\t\tcn.readBindResponse()\n\t\tcn.readPortalDescribeResponse()\n\t\tcn.postExecuteWorkaround()\n\t\tres, _, err = cn.readExecuteResponse(\"Execute\")\n\t\treturn res, err\n\t}\n\t// Use the unnamed statement to defer planning until bind\n\t// time, or else value-based selectivity estimates cannot be\n\t// used.\n\tst := cn.prepareTo(query, \"\")\n\tr, err := st.Exec(args)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn r, err\n}\n\ntype safeRetryError struct {\n\tErr error\n}\n\nfunc (se *safeRetryError) Error() string {\n\treturn se.Err.Error()\n}\n\nfunc (cn *conn) send(m *writeBuf) {\n\tn, err := cn.c.Write(m.wrap())\n\tif err != nil {\n\t\tif n == 0 {\n\t\t\terr = &safeRetryError{Err: err}\n\t\t}\n\t\tpanic(err)\n\t}\n}\n\nfunc (cn *conn) sendStartupPacket(m *writeBuf) error {\n\t_, err := cn.c.Write((m.wrap())[1:])\n\treturn err\n}\n\n// Send a message of type typ to the server on the other end of cn.  The\n// message should have no payload.  This method does not use the scratch\n// buffer.\nfunc (cn *conn) sendSimpleMessage(typ byte) (err error) {\n\t_, err = cn.c.Write([]byte{typ, '\\x00', '\\x00', '\\x00', '\\x04'})\n\treturn err\n}\n\n// saveMessage memorizes a message and its buffer in the conn struct.\n// recvMessage will then return these values on the next call to it.  This\n// method is useful in cases where you have to see what the next message is\n// going to be (e.g. 
to see whether it's an error or not) but you can't handle\n// the message yourself.\nfunc (cn *conn) saveMessage(typ byte, buf *readBuf) {\n\tif cn.saveMessageType != 0 {\n\t\tcn.err.set(driver.ErrBadConn)\n\t\terrorf(\"unexpected saveMessageType %d\", cn.saveMessageType)\n\t}\n\tcn.saveMessageType = typ\n\tcn.saveMessageBuffer = *buf\n}\n\n// recvMessage receives any message from the backend, or returns an error if\n// a problem occurred while reading the message.\nfunc (cn *conn) recvMessage(r *readBuf) (byte, error) {\n\t// workaround for a QueryRow bug, see exec\n\tif cn.saveMessageType != 0 {\n\t\tt := cn.saveMessageType\n\t\t*r = cn.saveMessageBuffer\n\t\tcn.saveMessageType = 0\n\t\tcn.saveMessageBuffer = nil\n\t\treturn t, nil\n\t}\n\n\tx := cn.scratch[:5]\n\t_, err := io.ReadFull(cn.buf, x)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\n\t// read the type and length of the message that follows\n\tt := x[0]\n\tn := int(binary.BigEndian.Uint32(x[1:])) - 4\n\tvar y []byte\n\tif n <= len(cn.scratch) {\n\t\ty = cn.scratch[:n]\n\t} else {\n\t\ty = make([]byte, n)\n\t}\n\t_, err = io.ReadFull(cn.buf, y)\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\t*r = y\n\treturn t, nil\n}\n\n// recv receives a message from the backend, but if an error happened while\n// reading the message or the received message was an ErrorResponse, it panics.\n// NoticeResponses are ignored.  
This function should generally be used only\n// during the startup sequence.\nfunc (cn *conn) recv() (t byte, r *readBuf) {\n\tfor {\n\t\tvar err error\n\t\tr = &readBuf{}\n\t\tt, err = cn.recvMessage(r)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\tswitch t {\n\t\tcase 'E':\n\t\t\tpanic(parseError(r))\n\t\tcase 'N':\n\t\t\tif n := cn.noticeHandler; n != nil {\n\t\t\t\tn(parseError(r))\n\t\t\t}\n\t\tcase 'A':\n\t\t\tif n := cn.notificationHandler; n != nil {\n\t\t\t\tn(recvNotification(r))\n\t\t\t}\n\t\tdefault:\n\t\t\treturn\n\t\t}\n\t}\n}\n\n// recv1Buf is exactly equivalent to recv1, except it uses a buffer supplied by\n// the caller to avoid an allocation.\nfunc (cn *conn) recv1Buf(r *readBuf) byte {\n\tfor {\n\t\tt, err := cn.recvMessage(r)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\n\t\tswitch t {\n\t\tcase 'A':\n\t\t\tif n := cn.notificationHandler; n != nil {\n\t\t\t\tn(recvNotification(r))\n\t\t\t}\n\t\tcase 'N':\n\t\t\tif n := cn.noticeHandler; n != nil {\n\t\t\t\tn(parseError(r))\n\t\t\t}\n\t\tcase 'S':\n\t\t\tcn.processParameterStatus(r)\n\t\tdefault:\n\t\t\treturn t\n\t\t}\n\t}\n}\n\n// recv1 receives a message from the backend, panicking if an error occurs\n// while attempting to read it.  
All asynchronous messages are ignored, with\n// the exception of ErrorResponse.\nfunc (cn *conn) recv1() (t byte, r *readBuf) {\n\tr = &readBuf{}\n\tt = cn.recv1Buf(r)\n\treturn t, r\n}\n\nfunc (cn *conn) ssl(o values) error {\n\tupgrade, err := ssl(o)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif upgrade == nil {\n\t\t// Nothing to do\n\t\treturn nil\n\t}\n\n\tw := cn.writeBuf(0)\n\tw.int32(80877103)\n\tif err = cn.sendStartupPacket(w); err != nil {\n\t\treturn err\n\t}\n\n\tb := cn.scratch[:1]\n\t_, err = io.ReadFull(cn.c, b)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif b[0] != 'S' {\n\t\treturn ErrSSLNotSupported\n\t}\n\n\tcn.c, err = upgrade(cn.c)\n\treturn err\n}\n\n// isDriverSetting returns true iff a setting is purely for configuring the\n// driver's options and should not be sent to the server in the connection\n// startup packet.\nfunc isDriverSetting(key string) bool {\n\tswitch key {\n\tcase \"host\", \"port\":\n\t\treturn true\n\tcase \"password\":\n\t\treturn true\n\tcase \"sslmode\", \"sslcert\", \"sslkey\", \"sslrootcert\", \"sslinline\", \"sslsni\":\n\t\treturn true\n\tcase \"fallback_application_name\":\n\t\treturn true\n\tcase \"connect_timeout\":\n\t\treturn true\n\tcase \"disable_prepared_binary_result\":\n\t\treturn true\n\tcase \"binary_parameters\":\n\t\treturn true\n\tcase \"krbsrvname\":\n\t\treturn true\n\tcase \"krbspn\":\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\nfunc (cn *conn) startup(o values) {\n\tw := cn.writeBuf(0)\n\tw.int32(196608)\n\t// Send the backend the name of the database we want to connect to, and the\n\t// user we want to connect as.  Additionally, we send over any run-time\n\t// parameters potentially included in the connection string.  
If the server\n\t// doesn't recognize any of them, it will reply with an error.\n\tfor k, v := range o {\n\t\tif isDriverSetting(k) {\n\t\t\t// skip options which can't be run-time parameters\n\t\t\tcontinue\n\t\t}\n\t\t// The protocol requires us to supply the database name as \"database\"\n\t\t// instead of \"dbname\".\n\t\tif k == \"dbname\" {\n\t\t\tk = \"database\"\n\t\t}\n\t\tw.string(k)\n\t\tw.string(v)\n\t}\n\tw.string(\"\")\n\tif err := cn.sendStartupPacket(w); err != nil {\n\t\tpanic(err)\n\t}\n\n\tfor {\n\t\tt, r := cn.recv()\n\t\tswitch t {\n\t\tcase 'K':\n\t\t\tcn.processBackendKeyData(r)\n\t\tcase 'S':\n\t\t\tcn.processParameterStatus(r)\n\t\tcase 'R':\n\t\t\tcn.auth(r, o)\n\t\tcase 'Z':\n\t\t\tcn.processReadyForQuery(r)\n\t\t\treturn\n\t\tdefault:\n\t\t\terrorf(\"unknown response for startup: %q\", t)\n\t\t}\n\t}\n}\n\nfunc (cn *conn) auth(r *readBuf, o values) {\n\tswitch code := r.int32(); code {\n\tcase 0:\n\t\t// OK\n\tcase 3:\n\t\tw := cn.writeBuf('p')\n\t\tw.string(o[\"password\"])\n\t\tcn.send(w)\n\n\t\tt, r := cn.recv()\n\t\tif t != 'R' {\n\t\t\terrorf(\"unexpected password response: %q\", t)\n\t\t}\n\n\t\tif r.int32() != 0 {\n\t\t\terrorf(\"unexpected authentication response: %q\", t)\n\t\t}\n\tcase 5:\n\t\ts := string(r.next(4))\n\t\tw := cn.writeBuf('p')\n\t\tw.string(\"md5\" + md5s(md5s(o[\"password\"]+o[\"user\"])+s))\n\t\tcn.send(w)\n\n\t\tt, r := cn.recv()\n\t\tif t != 'R' {\n\t\t\terrorf(\"unexpected password response: %q\", t)\n\t\t}\n\n\t\tif r.int32() != 0 {\n\t\t\terrorf(\"unexpected authentication response: %q\", t)\n\t\t}\n\tcase 7: // GSSAPI, startup\n\t\tif newGss == nil {\n\t\t\terrorf(\"kerberos error: no GSSAPI provider registered (import github.com/lib/pq/auth/kerberos if you need Kerberos support)\")\n\t\t}\n\t\tcli, err := newGss()\n\t\tif err != nil {\n\t\t\terrorf(\"kerberos error: %s\", err.Error())\n\t\t}\n\n\t\tvar token []byte\n\n\t\tif spn, ok := o[\"krbspn\"]; ok {\n\t\t\t// Use the supplied SPN if 
provided.\n\t\t\ttoken, err = cli.GetInitTokenFromSpn(spn)\n\t\t} else {\n\t\t\t// Allow the kerberos service name to be overridden\n\t\t\tservice := \"postgres\"\n\t\t\tif val, ok := o[\"krbsrvname\"]; ok {\n\t\t\t\tservice = val\n\t\t\t}\n\n\t\t\ttoken, err = cli.GetInitToken(o[\"host\"], service)\n\t\t}\n\n\t\tif err != nil {\n\t\t\terrorf(\"failed to get Kerberos ticket: %q\", err)\n\t\t}\n\n\t\tw := cn.writeBuf('p')\n\t\tw.bytes(token)\n\t\tcn.send(w)\n\n\t\t// Store for GSSAPI continue message\n\t\tcn.gss = cli\n\n\tcase 8: // GSSAPI continue\n\n\t\tif cn.gss == nil {\n\t\t\terrorf(\"GSSAPI protocol error\")\n\t\t}\n\n\t\tb := []byte(*r)\n\n\t\tdone, tokOut, err := cn.gss.Continue(b)\n\t\tif err == nil && !done {\n\t\t\tw := cn.writeBuf('p')\n\t\t\tw.bytes(tokOut)\n\t\t\tcn.send(w)\n\t\t}\n\n\t\t// Errors fall through and read the more detailed message\n\t\t// from the server.\n\n\tcase 10:\n\t\tsc := scram.NewClient(sha256.New, o[\"user\"], o[\"password\"])\n\t\tsc.Step(nil)\n\t\tif sc.Err() != nil {\n\t\t\terrorf(\"SCRAM-SHA-256 error: %s\", sc.Err().Error())\n\t\t}\n\t\tscOut := sc.Out()\n\n\t\tw := cn.writeBuf('p')\n\t\tw.string(\"SCRAM-SHA-256\")\n\t\tw.int32(len(scOut))\n\t\tw.bytes(scOut)\n\t\tcn.send(w)\n\n\t\tt, r := cn.recv()\n\t\tif t != 'R' {\n\t\t\terrorf(\"unexpected password response: %q\", t)\n\t\t}\n\n\t\tif r.int32() != 11 {\n\t\t\terrorf(\"unexpected authentication response: %q\", t)\n\t\t}\n\n\t\tnextStep := r.next(len(*r))\n\t\tsc.Step(nextStep)\n\t\tif sc.Err() != nil {\n\t\t\terrorf(\"SCRAM-SHA-256 error: %s\", sc.Err().Error())\n\t\t}\n\n\t\tscOut = sc.Out()\n\t\tw = cn.writeBuf('p')\n\t\tw.bytes(scOut)\n\t\tcn.send(w)\n\n\t\tt, r = cn.recv()\n\t\tif t != 'R' {\n\t\t\terrorf(\"unexpected password response: %q\", t)\n\t\t}\n\n\t\tif r.int32() != 12 {\n\t\t\terrorf(\"unexpected authentication response: %q\", t)\n\t\t}\n\n\t\tnextStep = r.next(len(*r))\n\t\tsc.Step(nextStep)\n\t\tif sc.Err() != nil {\n\t\t\terrorf(\"SCRAM-SHA-256 error: 
%s\", sc.Err().Error())\n\t\t}\n\n\tdefault:\n\t\terrorf(\"unknown authentication response: %d\", code)\n\t}\n}\n\ntype format int\n\nconst formatText format = 0\nconst formatBinary format = 1\n\n// One result-column format code with the value 1 (i.e. all binary).\nvar colFmtDataAllBinary = []byte{0, 1, 0, 1}\n\n// No result-column format codes (i.e. all text).\nvar colFmtDataAllText = []byte{0, 0}\n\ntype stmt struct {\n\tcn   *conn\n\tname string\n\trowsHeader\n\tcolFmtData []byte\n\tparamTyps  []oid.Oid\n\tclosed     bool\n}\n\nfunc (st *stmt) Close() (err error) {\n\tif st.closed {\n\t\treturn nil\n\t}\n\tif err := st.cn.err.get(); err != nil {\n\t\treturn err\n\t}\n\tdefer st.cn.errRecover(&err)\n\n\tw := st.cn.writeBuf('C')\n\tw.byte('S')\n\tw.string(st.name)\n\tst.cn.send(w)\n\n\tst.cn.send(st.cn.writeBuf('S'))\n\n\tt, _ := st.cn.recv1()\n\tif t != '3' {\n\t\tst.cn.err.set(driver.ErrBadConn)\n\t\terrorf(\"unexpected close response: %q\", t)\n\t}\n\tst.closed = true\n\n\tt, r := st.cn.recv1()\n\tif t != 'Z' {\n\t\tst.cn.err.set(driver.ErrBadConn)\n\t\terrorf(\"expected ready for query, but got: %q\", t)\n\t}\n\tst.cn.processReadyForQuery(r)\n\n\treturn nil\n}\n\nfunc (st *stmt) Query(v []driver.Value) (r driver.Rows, err error) {\n\treturn st.query(v)\n}\n\nfunc (st *stmt) query(v []driver.Value) (r *rows, err error) {\n\tif err := st.cn.err.get(); err != nil {\n\t\treturn nil, err\n\t}\n\tdefer st.cn.errRecover(&err)\n\n\tst.exec(v)\n\treturn &rows{\n\t\tcn:         st.cn,\n\t\trowsHeader: st.rowsHeader,\n\t}, nil\n}\n\nfunc (st *stmt) Exec(v []driver.Value) (res driver.Result, err error) {\n\tif err := st.cn.err.get(); err != nil {\n\t\treturn nil, err\n\t}\n\tdefer st.cn.errRecover(&err)\n\n\tst.exec(v)\n\tres, _, err = st.cn.readExecuteResponse(\"simple query\")\n\treturn res, err\n}\n\nfunc (st *stmt) exec(v []driver.Value) {\n\tif len(v) >= 65536 {\n\t\terrorf(\"got %d parameters but PostgreSQL only supports 65535 parameters\", len(v))\n\t}\n\tif len(v) 
!= len(st.paramTyps) {\n\t\terrorf(\"got %d parameters but the statement requires %d\", len(v), len(st.paramTyps))\n\t}\n\n\tcn := st.cn\n\tw := cn.writeBuf('B')\n\tw.byte(0) // unnamed portal\n\tw.string(st.name)\n\n\tif cn.binaryParameters {\n\t\tcn.sendBinaryParameters(w, v)\n\t} else {\n\t\tw.int16(0)\n\t\tw.int16(len(v))\n\t\tfor i, x := range v {\n\t\t\tif x == nil {\n\t\t\t\tw.int32(-1)\n\t\t\t} else {\n\t\t\t\tb := encode(&cn.parameterStatus, x, st.paramTyps[i])\n\t\t\t\tw.int32(len(b))\n\t\t\t\tw.bytes(b)\n\t\t\t}\n\t\t}\n\t}\n\tw.bytes(st.colFmtData)\n\n\tw.next('E')\n\tw.byte(0)\n\tw.int32(0)\n\n\tw.next('S')\n\tcn.send(w)\n\n\tcn.readBindResponse()\n\tcn.postExecuteWorkaround()\n\n}\n\nfunc (st *stmt) NumInput() int {\n\treturn len(st.paramTyps)\n}\n\n// parseComplete parses the \"command tag\" from a CommandComplete message, and\n// returns the number of rows affected (if applicable) and a string\n// identifying only the command that was executed, e.g. \"ALTER TABLE\".  If the\n// command tag could not be parsed, parseComplete panics.\nfunc (cn *conn) parseComplete(commandTag string) (driver.Result, string) {\n\tcommandsWithAffectedRows := []string{\n\t\t\"SELECT \",\n\t\t// INSERT is handled below\n\t\t\"UPDATE \",\n\t\t\"DELETE \",\n\t\t\"FETCH \",\n\t\t\"MOVE \",\n\t\t\"COPY \",\n\t}\n\n\tvar affectedRows *string\n\tfor _, tag := range commandsWithAffectedRows {\n\t\tif strings.HasPrefix(commandTag, tag) {\n\t\t\tt := commandTag[len(tag):]\n\t\t\taffectedRows = &t\n\t\t\tcommandTag = tag[:len(tag)-1]\n\t\t\tbreak\n\t\t}\n\t}\n\t// INSERT also includes the oid of the inserted row in its command tag.\n\t// Oids in user tables are deprecated, and the oid is only returned when\n\t// exactly one row is inserted, so it's unlikely to be of value to any\n\t// real-world application and we can ignore it.\n\tif affectedRows == nil && strings.HasPrefix(commandTag, \"INSERT \") {\n\t\tparts := strings.Split(commandTag, \" \")\n\t\tif len(parts) != 3 
{\n\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\terrorf(\"unexpected INSERT command tag %s\", commandTag)\n\t\t}\n\t\taffectedRows = &parts[len(parts)-1]\n\t\tcommandTag = \"INSERT\"\n\t}\n\t// There should be no affected rows attached to the tag, just return it\n\tif affectedRows == nil {\n\t\treturn driver.RowsAffected(0), commandTag\n\t}\n\tn, err := strconv.ParseInt(*affectedRows, 10, 64)\n\tif err != nil {\n\t\tcn.err.set(driver.ErrBadConn)\n\t\terrorf(\"could not parse commandTag: %s\", err)\n\t}\n\treturn driver.RowsAffected(n), commandTag\n}\n\ntype rowsHeader struct {\n\tcolNames []string\n\tcolTyps  []fieldDesc\n\tcolFmts  []format\n}\n\ntype rows struct {\n\tcn     *conn\n\tfinish func()\n\trowsHeader\n\tdone   bool\n\trb     readBuf\n\tresult driver.Result\n\ttag    string\n\n\tnext *rowsHeader\n}\n\nfunc (rs *rows) Close() error {\n\tif finish := rs.finish; finish != nil {\n\t\tdefer finish()\n\t}\n\t// no need to look at cn.bad as Next() will\n\tfor {\n\t\terr := rs.Next(nil)\n\t\tswitch err {\n\t\tcase nil:\n\t\tcase io.EOF:\n\t\t\t// rs.Next can return io.EOF on both 'Z' (ready for query) and 'T' (row\n\t\t\t// description, used with HasNextResultSet). 
We need to fetch messages until\n\t\t\t// we hit a 'Z', which is done by waiting for done to be set.\n\t\t\tif rs.done {\n\t\t\t\treturn nil\n\t\t\t}\n\t\tdefault:\n\t\t\treturn err\n\t\t}\n\t}\n}\n\nfunc (rs *rows) Columns() []string {\n\treturn rs.colNames\n}\n\nfunc (rs *rows) Result() driver.Result {\n\tif rs.result == nil {\n\t\treturn emptyRows\n\t}\n\treturn rs.result\n}\n\nfunc (rs *rows) Tag() string {\n\treturn rs.tag\n}\n\nfunc (rs *rows) Next(dest []driver.Value) (err error) {\n\tif rs.done {\n\t\treturn io.EOF\n\t}\n\n\tconn := rs.cn\n\tif err := conn.err.getForNext(); err != nil {\n\t\treturn err\n\t}\n\tdefer conn.errRecover(&err)\n\n\tfor {\n\t\tt := conn.recv1Buf(&rs.rb)\n\t\tswitch t {\n\t\tcase 'E':\n\t\t\terr = parseError(&rs.rb)\n\t\tcase 'C', 'I':\n\t\t\tif t == 'C' {\n\t\t\t\trs.result, rs.tag = conn.parseComplete(rs.rb.string())\n\t\t\t}\n\t\t\tcontinue\n\t\tcase 'Z':\n\t\t\tconn.processReadyForQuery(&rs.rb)\n\t\t\trs.done = true\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn io.EOF\n\t\tcase 'D':\n\t\t\tn := rs.rb.int16()\n\t\t\tif err != nil {\n\t\t\t\tconn.err.set(driver.ErrBadConn)\n\t\t\t\terrorf(\"unexpected DataRow after error %s\", err)\n\t\t\t}\n\t\t\tif n < len(dest) {\n\t\t\t\tdest = dest[:n]\n\t\t\t}\n\t\t\tfor i := range dest {\n\t\t\t\tl := rs.rb.int32()\n\t\t\t\tif l == -1 {\n\t\t\t\t\tdest[i] = nil\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tdest[i] = decode(&conn.parameterStatus, rs.rb.next(l), rs.colTyps[i].OID, rs.colFmts[i])\n\t\t\t}\n\t\t\treturn\n\t\tcase 'T':\n\t\t\tnext := parsePortalRowDescribe(&rs.rb)\n\t\t\trs.next = &next\n\t\t\treturn io.EOF\n\t\tdefault:\n\t\t\terrorf(\"unexpected message after execute: %q\", t)\n\t\t}\n\t}\n}\n\nfunc (rs *rows) HasNextResultSet() bool {\n\thasNext := rs.next != nil && !rs.done\n\treturn hasNext\n}\n\nfunc (rs *rows) NextResultSet() error {\n\tif rs.next == nil {\n\t\treturn io.EOF\n\t}\n\trs.rowsHeader = *rs.next\n\trs.next = nil\n\treturn nil\n}\n\n// 
QuoteIdentifier quotes an \"identifier\" (e.g. a table or a column name) to be\n// used as part of an SQL statement.  For example:\n//\n//\ttblname := \"my_table\"\n//\tdata := \"my_data\"\n//\tquoted := pq.QuoteIdentifier(tblname)\n//\terr := db.Exec(fmt.Sprintf(\"INSERT INTO %s VALUES ($1)\", quoted), data)\n//\n// Any double quotes in name will be escaped.  The quoted identifier will be\n// case sensitive when used in a query.  If the input string contains a zero\n// byte, the result will be truncated immediately before it.\nfunc QuoteIdentifier(name string) string {\n\tend := strings.IndexRune(name, 0)\n\tif end > -1 {\n\t\tname = name[:end]\n\t}\n\treturn `\"` + strings.Replace(name, `\"`, `\"\"`, -1) + `\"`\n}\n\n// BufferQuoteIdentifier satisfies the same purpose as QuoteIdentifier, but backed by a\n// byte buffer.\nfunc BufferQuoteIdentifier(name string, buffer *bytes.Buffer) {\n\tend := strings.IndexRune(name, 0)\n\tif end > -1 {\n\t\tname = name[:end]\n\t}\n\tbuffer.WriteRune('\"')\n\tbuffer.WriteString(strings.Replace(name, `\"`, `\"\"`, -1))\n\tbuffer.WriteRune('\"')\n}\n\n// QuoteLiteral quotes a 'literal' (e.g. a parameter, often used to pass literal\n// to DDL and other statements that do not accept parameters) to be used as part\n// of an SQL statement.  For example:\n//\n//\texp_date := pq.QuoteLiteral(\"2023-01-05 15:00:00Z\")\n//\terr := db.Exec(fmt.Sprintf(\"CREATE ROLE my_user VALID UNTIL %s\", exp_date))\n//\n// Any single quotes in name will be escaped. Any backslashes (i.e. \"\\\") will be\n// replaced by two backslashes (i.e. 
\"\\\\\") and the C-style escape identifier\n// that PostgreSQL provides ('E') will be prepended to the string.\nfunc QuoteLiteral(literal string) string {\n\t// This follows the PostgreSQL internal algorithm for handling quoted literals\n\t// from libpq, which can be found in the \"PQEscapeStringInternal\" function,\n\t// which is found in the libpq/fe-exec.c source file:\n\t// https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/interfaces/libpq/fe-exec.c\n\t//\n\t// substitute any single-quotes (') with two single-quotes ('')\n\tliteral = strings.Replace(literal, `'`, `''`, -1)\n\t// determine if the string has any backslashes (\\) in it.\n\t// if it does, replace any backslashes (\\) with two backslashes (\\\\)\n\t// then, we need to wrap the entire string with a PostgreSQL\n\t// C-style escape. Per how \"PQEscapeStringInternal\" handles this case, we\n\t// also add a space before the \"E\"\n\tif strings.Contains(literal, `\\`) {\n\t\tliteral = strings.Replace(literal, `\\`, `\\\\`, -1)\n\t\tliteral = ` E'` + literal + `'`\n\t} else {\n\t\t// otherwise, we can just wrap the literal with a pair of single quotes\n\t\tliteral = `'` + literal + `'`\n\t}\n\treturn literal\n}\n\nfunc md5s(s string) string {\n\th := md5.New()\n\th.Write([]byte(s))\n\treturn fmt.Sprintf(\"%x\", h.Sum(nil))\n}\n\nfunc (cn *conn) sendBinaryParameters(b *writeBuf, args []driver.Value) {\n\t// Do one pass over the parameters to see if we're going to send any of\n\t// them over in binary.  
If we are, create a paramFormats array at the\n\t// same time.\n\tvar paramFormats []int\n\tfor i, x := range args {\n\t\t_, ok := x.([]byte)\n\t\tif ok {\n\t\t\tif paramFormats == nil {\n\t\t\t\tparamFormats = make([]int, len(args))\n\t\t\t}\n\t\t\tparamFormats[i] = 1\n\t\t}\n\t}\n\tif paramFormats == nil {\n\t\tb.int16(0)\n\t} else {\n\t\tb.int16(len(paramFormats))\n\t\tfor _, x := range paramFormats {\n\t\t\tb.int16(x)\n\t\t}\n\t}\n\n\tb.int16(len(args))\n\tfor _, x := range args {\n\t\tif x == nil {\n\t\t\tb.int32(-1)\n\t\t} else {\n\t\t\tdatum := binaryEncode(&cn.parameterStatus, x)\n\t\t\tb.int32(len(datum))\n\t\t\tb.bytes(datum)\n\t\t}\n\t}\n}\n\nfunc (cn *conn) sendBinaryModeQuery(query string, args []driver.Value) {\n\tif len(args) >= 65536 {\n\t\terrorf(\"got %d parameters but PostgreSQL only supports 65535 parameters\", len(args))\n\t}\n\n\tb := cn.writeBuf('P')\n\tb.byte(0) // unnamed statement\n\tb.string(query)\n\tb.int16(0)\n\n\tb.next('B')\n\tb.int16(0) // unnamed portal and statement\n\tcn.sendBinaryParameters(b, args)\n\tb.bytes(colFmtDataAllText)\n\n\tb.next('D')\n\tb.byte('P')\n\tb.byte(0) // unnamed portal\n\n\tb.next('E')\n\tb.byte(0)\n\tb.int32(0)\n\n\tb.next('S')\n\tcn.send(b)\n}\n\nfunc (cn *conn) processParameterStatus(r *readBuf) {\n\tvar err error\n\n\tparam := r.string()\n\tswitch param {\n\tcase \"server_version\":\n\t\tvar major1 int\n\t\tvar major2 int\n\t\t_, err = fmt.Sscanf(r.string(), \"%d.%d\", &major1, &major2)\n\t\tif err == nil {\n\t\t\tcn.parameterStatus.serverVersion = major1*10000 + major2*100\n\t\t}\n\n\tcase \"TimeZone\":\n\t\tcn.parameterStatus.currentLocation, err = time.LoadLocation(r.string())\n\t\tif err != nil {\n\t\t\tcn.parameterStatus.currentLocation = nil\n\t\t}\n\n\tdefault:\n\t\t// ignore\n\t}\n}\n\nfunc (cn *conn) processReadyForQuery(r *readBuf) {\n\tcn.txnStatus = transactionStatus(r.byte())\n}\n\nfunc (cn *conn) readReadyForQuery() {\n\tt, r := cn.recv1()\n\tswitch t {\n\tcase 
'Z':\n\t\tcn.processReadyForQuery(r)\n\t\treturn\n\tdefault:\n\t\tcn.err.set(driver.ErrBadConn)\n\t\terrorf(\"unexpected message %q; expected ReadyForQuery\", t)\n\t}\n}\n\nfunc (cn *conn) processBackendKeyData(r *readBuf) {\n\tcn.processID = r.int32()\n\tcn.secretKey = r.int32()\n}\n\nfunc (cn *conn) readParseResponse() {\n\tt, r := cn.recv1()\n\tswitch t {\n\tcase '1':\n\t\treturn\n\tcase 'E':\n\t\terr := parseError(r)\n\t\tcn.readReadyForQuery()\n\t\tpanic(err)\n\tdefault:\n\t\tcn.err.set(driver.ErrBadConn)\n\t\terrorf(\"unexpected Parse response %q\", t)\n\t}\n}\n\nfunc (cn *conn) readStatementDescribeResponse() (\n\tparamTyps []oid.Oid,\n\tcolNames []string,\n\tcolTyps []fieldDesc,\n) {\n\tfor {\n\t\tt, r := cn.recv1()\n\t\tswitch t {\n\t\tcase 't':\n\t\t\tnparams := r.int16()\n\t\t\tparamTyps = make([]oid.Oid, nparams)\n\t\t\tfor i := range paramTyps {\n\t\t\t\tparamTyps[i] = r.oid()\n\t\t\t}\n\t\tcase 'n':\n\t\t\treturn paramTyps, nil, nil\n\t\tcase 'T':\n\t\t\tcolNames, colTyps = parseStatementRowDescribe(r)\n\t\t\treturn paramTyps, colNames, colTyps\n\t\tcase 'E':\n\t\t\terr := parseError(r)\n\t\t\tcn.readReadyForQuery()\n\t\t\tpanic(err)\n\t\tdefault:\n\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\terrorf(\"unexpected Describe statement response %q\", t)\n\t\t}\n\t}\n}\n\nfunc (cn *conn) readPortalDescribeResponse() rowsHeader {\n\tt, r := cn.recv1()\n\tswitch t {\n\tcase 'T':\n\t\treturn parsePortalRowDescribe(r)\n\tcase 'n':\n\t\treturn rowsHeader{}\n\tcase 'E':\n\t\terr := parseError(r)\n\t\tcn.readReadyForQuery()\n\t\tpanic(err)\n\tdefault:\n\t\tcn.err.set(driver.ErrBadConn)\n\t\terrorf(\"unexpected Describe response %q\", t)\n\t}\n\tpanic(\"not reached\")\n}\n\nfunc (cn *conn) readBindResponse() {\n\tt, r := cn.recv1()\n\tswitch t {\n\tcase '2':\n\t\treturn\n\tcase 'E':\n\t\terr := parseError(r)\n\t\tcn.readReadyForQuery()\n\t\tpanic(err)\n\tdefault:\n\t\tcn.err.set(driver.ErrBadConn)\n\t\terrorf(\"unexpected Bind response %q\", t)\n\t}\n}\n\nfunc (cn 
*conn) postExecuteWorkaround() {\n\t// Work around a bug in sql.DB.QueryRow: in Go 1.2 and earlier it ignores\n\t// any errors from rows.Next, which masks errors that happened during the\n\t// execution of the query.  To avoid the problem in common cases, we wait\n\t// here for one more message from the database.  If it's not an error the\n\t// query will likely succeed (or perhaps has already, if it's a\n\t// CommandComplete), so we push the message into the conn struct; recv1\n\t// will return it as the next message for rows.Next or rows.Close.\n\t// However, if it's an error, we wait until ReadyForQuery and then return\n\t// the error to our caller.\n\tfor {\n\t\tt, r := cn.recv1()\n\t\tswitch t {\n\t\tcase 'E':\n\t\t\terr := parseError(r)\n\t\t\tcn.readReadyForQuery()\n\t\t\tpanic(err)\n\t\tcase 'C', 'D', 'I':\n\t\t\t// the query didn't fail, but we can't process this message\n\t\t\tcn.saveMessage(t, r)\n\t\t\treturn\n\t\tdefault:\n\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\terrorf(\"unexpected message during extended query execution: %q\", t)\n\t\t}\n\t}\n}\n\n// Only for Exec(), since we ignore the returned data\nfunc (cn *conn) readExecuteResponse(\n\tprotocolState string,\n) (res driver.Result, commandTag string, err error) {\n\tfor {\n\t\tt, r := cn.recv1()\n\t\tswitch t {\n\t\tcase 'C':\n\t\t\tif err != nil {\n\t\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\t\terrorf(\"unexpected CommandComplete after error %s\", err)\n\t\t\t}\n\t\t\tres, commandTag = cn.parseComplete(r.string())\n\t\tcase 'Z':\n\t\t\tcn.processReadyForQuery(r)\n\t\t\tif res == nil && err == nil {\n\t\t\t\terr = errUnexpectedReady\n\t\t\t}\n\t\t\treturn res, commandTag, err\n\t\tcase 'E':\n\t\t\terr = parseError(r)\n\t\tcase 'T', 'D', 'I':\n\t\t\tif err != nil {\n\t\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\t\terrorf(\"unexpected %q after error %s\", t, err)\n\t\t\t}\n\t\t\tif t == 'I' {\n\t\t\t\tres = emptyRows\n\t\t\t}\n\t\t\t// ignore any 
results\n\t\tdefault:\n\t\t\tcn.err.set(driver.ErrBadConn)\n\t\t\terrorf(\"unknown %s response: %q\", protocolState, t)\n\t\t}\n\t}\n}\n\nfunc parseStatementRowDescribe(r *readBuf) (colNames []string, colTyps []fieldDesc) {\n\tn := r.int16()\n\tcolNames = make([]string, n)\n\tcolTyps = make([]fieldDesc, n)\n\tfor i := range colNames {\n\t\tcolNames[i] = r.string()\n\t\tr.next(6)\n\t\tcolTyps[i].OID = r.oid()\n\t\tcolTyps[i].Len = r.int16()\n\t\tcolTyps[i].Mod = r.int32()\n\t\t// format code not known when describing a statement; always 0\n\t\tr.next(2)\n\t}\n\treturn\n}\n\nfunc parsePortalRowDescribe(r *readBuf) rowsHeader {\n\tn := r.int16()\n\tcolNames := make([]string, n)\n\tcolFmts := make([]format, n)\n\tcolTyps := make([]fieldDesc, n)\n\tfor i := range colNames {\n\t\tcolNames[i] = r.string()\n\t\tr.next(6)\n\t\tcolTyps[i].OID = r.oid()\n\t\tcolTyps[i].Len = r.int16()\n\t\tcolTyps[i].Mod = r.int32()\n\t\tcolFmts[i] = format(r.int16())\n\t}\n\treturn rowsHeader{\n\t\tcolNames: colNames,\n\t\tcolFmts:  colFmts,\n\t\tcolTyps:  colTyps,\n\t}\n}\n\n// parseEnviron tries to mimic some of libpq's environment handling\n//\n// To ease testing, it does not directly reference os.Environ, but is\n// designed to accept its output.\n//\n// Environment-set connection information is intended to have a higher\n// precedence than a library default but lower than any explicitly\n// passed information (such as in the URL or connection string).\nfunc parseEnviron(env []string) (out map[string]string) {\n\tout = make(map[string]string)\n\n\tfor _, v := range env {\n\t\tparts := strings.SplitN(v, \"=\", 2)\n\n\t\taccrue := func(keyname string) {\n\t\t\tout[keyname] = parts[1]\n\t\t}\n\t\tunsupported := func() {\n\t\t\tpanic(fmt.Sprintf(\"setting %v not supported\", parts[0]))\n\t\t}\n\n\t\t// The order of these is the same as is seen in the\n\t\t// PostgreSQL 9.1 manual. Unsupported but well-defined\n\t\t// keys cause a panic; these should be unset prior to\n\t\t// execution. 
Options which pq expects to be set to a\n\t\t// certain value are allowed, but must be set to that\n\t\t// value if present (they can, of course, be absent).\n\t\tswitch parts[0] {\n\t\tcase \"PGHOST\":\n\t\t\taccrue(\"host\")\n\t\tcase \"PGHOSTADDR\":\n\t\t\tunsupported()\n\t\tcase \"PGPORT\":\n\t\t\taccrue(\"port\")\n\t\tcase \"PGDATABASE\":\n\t\t\taccrue(\"dbname\")\n\t\tcase \"PGUSER\":\n\t\t\taccrue(\"user\")\n\t\tcase \"PGPASSWORD\":\n\t\t\taccrue(\"password\")\n\t\tcase \"PGSERVICE\", \"PGSERVICEFILE\", \"PGREALM\":\n\t\t\tunsupported()\n\t\tcase \"PGOPTIONS\":\n\t\t\taccrue(\"options\")\n\t\tcase \"PGAPPNAME\":\n\t\t\taccrue(\"application_name\")\n\t\tcase \"PGSSLMODE\":\n\t\t\taccrue(\"sslmode\")\n\t\tcase \"PGSSLCERT\":\n\t\t\taccrue(\"sslcert\")\n\t\tcase \"PGSSLKEY\":\n\t\t\taccrue(\"sslkey\")\n\t\tcase \"PGSSLROOTCERT\":\n\t\t\taccrue(\"sslrootcert\")\n\t\tcase \"PGSSLSNI\":\n\t\t\taccrue(\"sslsni\")\n\t\tcase \"PGREQUIRESSL\", \"PGSSLCRL\":\n\t\t\tunsupported()\n\t\tcase \"PGREQUIREPEER\":\n\t\t\tunsupported()\n\t\tcase \"PGKRBSRVNAME\", \"PGGSSLIB\":\n\t\t\tunsupported()\n\t\tcase \"PGCONNECT_TIMEOUT\":\n\t\t\taccrue(\"connect_timeout\")\n\t\tcase \"PGCLIENTENCODING\":\n\t\t\taccrue(\"client_encoding\")\n\t\tcase \"PGDATESTYLE\":\n\t\t\taccrue(\"datestyle\")\n\t\tcase \"PGTZ\":\n\t\t\taccrue(\"timezone\")\n\t\tcase \"PGGEQO\":\n\t\t\taccrue(\"geqo\")\n\t\tcase \"PGSYSCONFDIR\", \"PGLOCALEDIR\":\n\t\t\tunsupported()\n\t\t}\n\t}\n\n\treturn out\n}\n\n// isUTF8 returns whether name is a fuzzy variation of the string \"UTF-8\".\nfunc isUTF8(name string) bool {\n\t// Recognize all sorts of silly things as \"UTF-8\", like Postgres does\n\ts := strings.Map(alnumLowerASCII, name)\n\treturn s == \"utf8\" || s == \"unicode\"\n}\n\nfunc alnumLowerASCII(ch rune) rune {\n\tif 'A' <= ch && ch <= 'Z' {\n\t\treturn ch + ('a' - 'A')\n\t}\n\tif 'a' <= ch && ch <= 'z' || '0' <= ch && ch <= '9' {\n\t\treturn ch\n\t}\n\treturn -1 // discard\n}\n\n// The 
database/sql/driver package says:\n// All Conn implementations should implement the following interfaces: Pinger, SessionResetter, and Validator.\nvar _ driver.Pinger = &conn{}\nvar _ driver.SessionResetter = &conn{}\n\nfunc (cn *conn) ResetSession(ctx context.Context) error {\n\t// Ensure bad connections are reported: From database/sql/driver:\n\t// If a connection is never returned to the connection pool but immediately reused, then\n\t// ResetSession is called prior to reuse but IsValid is not called.\n\treturn cn.err.get()\n}\n\nfunc (cn *conn) IsValid() bool {\n\treturn cn.err.get() == nil\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/conn_go115.go",
    "content": "//go:build go1.15\n// +build go1.15\n\npackage pq\n\nimport \"database/sql/driver\"\n\nvar _ driver.Validator = &conn{}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/conn_go18.go",
    "content": "package pq\n\nimport (\n\t\"context\"\n\t\"database/sql\"\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"time\"\n)\n\nconst (\n\twatchCancelDialContextTimeout = time.Second * 10\n)\n\n// Implement the \"QueryerContext\" interface\nfunc (cn *conn) QueryContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Rows, error) {\n\tlist := make([]driver.Value, len(args))\n\tfor i, nv := range args {\n\t\tlist[i] = nv.Value\n\t}\n\tfinish := cn.watchCancel(ctx)\n\tr, err := cn.query(query, list)\n\tif err != nil {\n\t\tif finish != nil {\n\t\t\tfinish()\n\t\t}\n\t\treturn nil, err\n\t}\n\tr.finish = finish\n\treturn r, nil\n}\n\n// Implement the \"ExecerContext\" interface\nfunc (cn *conn) ExecContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Result, error) {\n\tlist := make([]driver.Value, len(args))\n\tfor i, nv := range args {\n\t\tlist[i] = nv.Value\n\t}\n\n\tif finish := cn.watchCancel(ctx); finish != nil {\n\t\tdefer finish()\n\t}\n\n\treturn cn.Exec(query, list)\n}\n\n// Implement the \"ConnPrepareContext\" interface\nfunc (cn *conn) PrepareContext(ctx context.Context, query string) (driver.Stmt, error) {\n\tif finish := cn.watchCancel(ctx); finish != nil {\n\t\tdefer finish()\n\t}\n\treturn cn.Prepare(query)\n}\n\n// Implement the \"ConnBeginTx\" interface\nfunc (cn *conn) BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, error) {\n\tvar mode string\n\n\tswitch sql.IsolationLevel(opts.Isolation) {\n\tcase sql.LevelDefault:\n\t\t// Don't touch mode: use the server's default\n\tcase sql.LevelReadUncommitted:\n\t\tmode = \" ISOLATION LEVEL READ UNCOMMITTED\"\n\tcase sql.LevelReadCommitted:\n\t\tmode = \" ISOLATION LEVEL READ COMMITTED\"\n\tcase sql.LevelRepeatableRead:\n\t\tmode = \" ISOLATION LEVEL REPEATABLE READ\"\n\tcase sql.LevelSerializable:\n\t\tmode = \" ISOLATION LEVEL SERIALIZABLE\"\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"pq: isolation level not supported: 
%d\", opts.Isolation)\n\t}\n\n\tif opts.ReadOnly {\n\t\tmode += \" READ ONLY\"\n\t} else {\n\t\tmode += \" READ WRITE\"\n\t}\n\n\ttx, err := cn.begin(mode)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tcn.txnFinish = cn.watchCancel(ctx)\n\treturn tx, nil\n}\n\nfunc (cn *conn) Ping(ctx context.Context) error {\n\tif finish := cn.watchCancel(ctx); finish != nil {\n\t\tdefer finish()\n\t}\n\trows, err := cn.simpleQuery(\";\")\n\tif err != nil {\n\t\treturn driver.ErrBadConn // https://golang.org/pkg/database/sql/driver/#Pinger\n\t}\n\trows.Close()\n\treturn nil\n}\n\nfunc (cn *conn) watchCancel(ctx context.Context) func() {\n\tif done := ctx.Done(); done != nil {\n\t\tfinished := make(chan struct{}, 1)\n\t\tgo func() {\n\t\t\tselect {\n\t\t\tcase <-done:\n\t\t\t\tselect {\n\t\t\t\tcase finished <- struct{}{}:\n\t\t\t\tdefault:\n\t\t\t\t\t// We raced with the finish func, let the next query handle this with the\n\t\t\t\t\t// context.\n\t\t\t\t\treturn\n\t\t\t\t}\n\n\t\t\t\t// Set the connection state to bad so it does not get reused.\n\t\t\t\tcn.err.set(ctx.Err())\n\n\t\t\t\t// At this point the function level context is canceled,\n\t\t\t\t// so it must not be used for the additional network\n\t\t\t\t// request to cancel the query.\n\t\t\t\t// Create a new context to pass into the dial.\n\t\t\t\tctxCancel, cancel := context.WithTimeout(context.Background(), watchCancelDialContextTimeout)\n\t\t\t\tdefer cancel()\n\n\t\t\t\t_ = cn.cancel(ctxCancel)\n\t\t\tcase <-finished:\n\t\t\t}\n\t\t}()\n\t\treturn func() {\n\t\t\tselect {\n\t\t\tcase <-finished:\n\t\t\t\tcn.err.set(ctx.Err())\n\t\t\t\tcn.Close()\n\t\t\tcase finished <- struct{}{}:\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (cn *conn) cancel(ctx context.Context) error {\n\t// Create a new values map (copy). This makes sure the connection created\n\t// in this method cannot write to the same underlying data, which could\n\t// cause a concurrent map write panic. 
This is necessary because cancel\n\t// is called from a goroutine in watchCancel.\n\to := make(values)\n\tfor k, v := range cn.opts {\n\t\to[k] = v\n\t}\n\n\tc, err := dial(ctx, cn.dialer, o)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer c.Close()\n\n\t{\n\t\tcan := conn{\n\t\t\tc: c,\n\t\t}\n\t\terr = can.ssl(o)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tw := can.writeBuf(0)\n\t\tw.int32(80877102) // cancel request code\n\t\tw.int32(cn.processID)\n\t\tw.int32(cn.secretKey)\n\n\t\tif err := can.sendStartupPacket(w); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Read until EOF to ensure that the server received the cancel.\n\t{\n\t\t_, err := io.Copy(ioutil.Discard, c)\n\t\treturn err\n\t}\n}\n\n// Implement the \"StmtQueryContext\" interface\nfunc (st *stmt) QueryContext(ctx context.Context, args []driver.NamedValue) (driver.Rows, error) {\n\tlist := make([]driver.Value, len(args))\n\tfor i, nv := range args {\n\t\tlist[i] = nv.Value\n\t}\n\tfinish := st.watchCancel(ctx)\n\tr, err := st.query(list)\n\tif err != nil {\n\t\tif finish != nil {\n\t\t\tfinish()\n\t\t}\n\t\treturn nil, err\n\t}\n\tr.finish = finish\n\treturn r, nil\n}\n\n// Implement the \"StmtExecContext\" interface\nfunc (st *stmt) ExecContext(ctx context.Context, args []driver.NamedValue) (driver.Result, error) {\n\tlist := make([]driver.Value, len(args))\n\tfor i, nv := range args {\n\t\tlist[i] = nv.Value\n\t}\n\n\tif finish := st.watchCancel(ctx); finish != nil {\n\t\tdefer finish()\n\t}\n\n\treturn st.Exec(list)\n}\n\n// watchCancel is implemented on stmt in order to not mark the parent conn as bad\nfunc (st *stmt) watchCancel(ctx context.Context) func() {\n\tif done := ctx.Done(); done != nil {\n\t\tfinished := make(chan struct{})\n\t\tgo func() {\n\t\t\tselect {\n\t\t\tcase <-done:\n\t\t\t\t// At this point the function level context is canceled,\n\t\t\t\t// so it must not be used for the additional network\n\t\t\t\t// request to cancel the query.\n\t\t\t\t// Create a new 
context to pass into the dial.\n\t\t\t\tctxCancel, cancel := context.WithTimeout(context.Background(), watchCancelDialContextTimeout)\n\t\t\t\tdefer cancel()\n\n\t\t\t\t_ = st.cancel(ctxCancel)\n\t\t\t\tfinished <- struct{}{}\n\t\t\tcase <-finished:\n\t\t\t}\n\t\t}()\n\t\treturn func() {\n\t\t\tselect {\n\t\t\tcase <-finished:\n\t\t\tcase finished <- struct{}{}:\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (st *stmt) cancel(ctx context.Context) error {\n\treturn st.cn.cancel(ctx)\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/connector.go",
    "content": "package pq\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n)\n\n// Connector represents a fixed configuration for the pq driver with a given\n// name. Connector satisfies the database/sql/driver Connector interface and\n// can be used to create any number of DB Conn's via the database/sql OpenDB\n// function.\n//\n// See https://golang.org/pkg/database/sql/driver/#Connector.\n// See https://golang.org/pkg/database/sql/#OpenDB.\ntype Connector struct {\n\topts   values\n\tdialer Dialer\n}\n\n// Connect returns a connection to the database using the fixed configuration\n// of this Connector. Context is not used.\nfunc (c *Connector) Connect(ctx context.Context) (driver.Conn, error) {\n\treturn c.open(ctx)\n}\n\n// Dialer allows change the dialer used to open connections.\nfunc (c *Connector) Dialer(dialer Dialer) {\n\tc.dialer = dialer\n}\n\n// Driver returns the underlying driver of this Connector.\nfunc (c *Connector) Driver() driver.Driver {\n\treturn &Driver{}\n}\n\n// NewConnector returns a connector for the pq driver in a fixed configuration\n// with the given dsn. The returned connector can be used to create any number\n// of equivalent Conn's. 
The returned connector is intended to be used with\n// database/sql.OpenDB.\n//\n// See https://golang.org/pkg/database/sql/driver/#Connector.\n// See https://golang.org/pkg/database/sql/#OpenDB.\nfunc NewConnector(dsn string) (*Connector, error) {\n\tvar err error\n\to := make(values)\n\n\t// A number of defaults are applied here, in this order:\n\t//\n\t// * Very low precedence defaults applied in every situation\n\t// * Environment variables\n\t// * Explicitly passed connection information\n\to[\"host\"] = \"localhost\"\n\to[\"port\"] = \"5432\"\n\t// N.B.: Extra float digits should be set to 3, but that breaks\n\t// Postgres 8.4 and older, where the max is 2.\n\to[\"extra_float_digits\"] = \"2\"\n\tfor k, v := range parseEnviron(os.Environ()) {\n\t\to[k] = v\n\t}\n\n\tif strings.HasPrefix(dsn, \"postgres://\") || strings.HasPrefix(dsn, \"postgresql://\") {\n\t\tdsn, err = ParseURL(dsn)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif err := parseOpts(dsn, o); err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Use the \"fallback\" application name if necessary\n\tif fallback, ok := o[\"fallback_application_name\"]; ok {\n\t\tif _, ok := o[\"application_name\"]; !ok {\n\t\t\to[\"application_name\"] = fallback\n\t\t}\n\t}\n\n\t// We can't work with any client_encoding other than UTF-8 currently.\n\t// However, we have historically allowed the user to set it to UTF-8\n\t// explicitly, and there's no reason to break such programs, so allow that.\n\t// Note that the \"options\" setting could also set client_encoding, but\n\t// parsing its value is not worth it.  
Instead, we always explicitly send\n\t// client_encoding as a separate run-time parameter, which should override\n\t// anything set in options.\n\tif enc, ok := o[\"client_encoding\"]; ok && !isUTF8(enc) {\n\t\treturn nil, errors.New(\"client_encoding must be absent or 'UTF8'\")\n\t}\n\to[\"client_encoding\"] = \"UTF8\"\n\t// DateStyle needs a similar treatment.\n\tif datestyle, ok := o[\"datestyle\"]; ok {\n\t\tif datestyle != \"ISO, MDY\" {\n\t\t\treturn nil, fmt.Errorf(\"setting datestyle must be absent or %v; got %v\", \"ISO, MDY\", datestyle)\n\t\t}\n\t} else {\n\t\to[\"datestyle\"] = \"ISO, MDY\"\n\t}\n\n\t// If a user is not provided by any other means, the last\n\t// resort is to use the current operating system provided user\n\t// name.\n\tif _, ok := o[\"user\"]; !ok {\n\t\tu, err := userCurrent()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\to[\"user\"] = u\n\t}\n\n\t// SSL is not necessary or supported over UNIX domain sockets\n\tif network, _ := network(o); network == \"unix\" {\n\t\to[\"sslmode\"] = \"disable\"\n\t}\n\n\treturn &Connector{opts: o, dialer: defaultDialer{}}, nil\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/copy.go",
    "content": "package pq\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"encoding/binary\"\n\t\"errors\"\n\t\"fmt\"\n\t\"sync\"\n)\n\nvar (\n\terrCopyInClosed               = errors.New(\"pq: copyin statement has already been closed\")\n\terrBinaryCopyNotSupported     = errors.New(\"pq: only text format supported for COPY\")\n\terrCopyToNotSupported         = errors.New(\"pq: COPY TO is not supported\")\n\terrCopyNotSupportedOutsideTxn = errors.New(\"pq: COPY is only allowed inside a transaction\")\n\terrCopyInProgress             = errors.New(\"pq: COPY in progress\")\n)\n\n// CopyIn creates a COPY FROM statement which can be prepared with\n// Tx.Prepare().  The target table should be visible in search_path.\nfunc CopyIn(table string, columns ...string) string {\n\tbuffer := bytes.NewBufferString(\"COPY \")\n\tBufferQuoteIdentifier(table, buffer)\n\tbuffer.WriteString(\" (\")\n\tmakeStmt(buffer, columns...)\n\treturn buffer.String()\n}\n\n// MakeStmt makes the stmt string for CopyIn and CopyInSchema.\nfunc makeStmt(buffer *bytes.Buffer, columns ...string) {\n\t//s := bytes.NewBufferString()\n\tfor i, col := range columns {\n\t\tif i != 0 {\n\t\t\tbuffer.WriteString(\", \")\n\t\t}\n\t\tBufferQuoteIdentifier(col, buffer)\n\t}\n\tbuffer.WriteString(\") FROM STDIN\")\n}\n\n// CopyInSchema creates a COPY FROM statement which can be prepared with\n// Tx.Prepare().\nfunc CopyInSchema(schema, table string, columns ...string) string {\n\tbuffer := bytes.NewBufferString(\"COPY \")\n\tBufferQuoteIdentifier(schema, buffer)\n\tbuffer.WriteRune('.')\n\tBufferQuoteIdentifier(table, buffer)\n\tbuffer.WriteString(\" (\")\n\tmakeStmt(buffer, columns...)\n\treturn buffer.String()\n}\n\ntype copyin struct {\n\tcn      *conn\n\tbuffer  []byte\n\trowData chan []byte\n\tdone    chan bool\n\n\tclosed bool\n\n\tmu struct {\n\t\tsync.Mutex\n\t\terr error\n\t\tdriver.Result\n\t}\n}\n\nconst ciBufferSize = 64 * 1024\n\n// flush buffer before the buffer is filled up 
and needs reallocation\nconst ciBufferFlushSize = 63 * 1024\n\nfunc (cn *conn) prepareCopyIn(q string) (_ driver.Stmt, err error) {\n\tif !cn.isInTransaction() {\n\t\treturn nil, errCopyNotSupportedOutsideTxn\n\t}\n\n\tci := &copyin{\n\t\tcn:      cn,\n\t\tbuffer:  make([]byte, 0, ciBufferSize),\n\t\trowData: make(chan []byte),\n\t\tdone:    make(chan bool, 1),\n\t}\n\t// add CopyData identifier + 4 bytes for message length\n\tci.buffer = append(ci.buffer, 'd', 0, 0, 0, 0)\n\n\tb := cn.writeBuf('Q')\n\tb.string(q)\n\tcn.send(b)\n\nawaitCopyInResponse:\n\tfor {\n\t\tt, r := cn.recv1()\n\t\tswitch t {\n\t\tcase 'G':\n\t\t\tif r.byte() != 0 {\n\t\t\t\terr = errBinaryCopyNotSupported\n\t\t\t\tbreak awaitCopyInResponse\n\t\t\t}\n\t\t\tgo ci.resploop()\n\t\t\treturn ci, nil\n\t\tcase 'H':\n\t\t\terr = errCopyToNotSupported\n\t\t\tbreak awaitCopyInResponse\n\t\tcase 'E':\n\t\t\terr = parseError(r)\n\t\tcase 'Z':\n\t\t\tif err == nil {\n\t\t\t\tci.setBad(driver.ErrBadConn)\n\t\t\t\terrorf(\"unexpected ReadyForQuery in response to COPY\")\n\t\t\t}\n\t\t\tcn.processReadyForQuery(r)\n\t\t\treturn nil, err\n\t\tdefault:\n\t\t\tci.setBad(driver.ErrBadConn)\n\t\t\terrorf(\"unknown response for copy query: %q\", t)\n\t\t}\n\t}\n\n\t// something went wrong, abort COPY before we return\n\tb = cn.writeBuf('f')\n\tb.string(err.Error())\n\tcn.send(b)\n\n\tfor {\n\t\tt, r := cn.recv1()\n\t\tswitch t {\n\t\tcase 'c', 'C', 'E':\n\t\tcase 'Z':\n\t\t\t// correctly aborted, we're done\n\t\t\tcn.processReadyForQuery(r)\n\t\t\treturn nil, err\n\t\tdefault:\n\t\t\tci.setBad(driver.ErrBadConn)\n\t\t\terrorf(\"unknown response for CopyFail: %q\", t)\n\t\t}\n\t}\n}\n\nfunc (ci *copyin) flush(buf []byte) {\n\t// set message length (without message identifier)\n\tbinary.BigEndian.PutUint32(buf[1:], uint32(len(buf)-1))\n\n\t_, err := ci.cn.c.Write(buf)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n}\n\nfunc (ci *copyin) resploop() {\n\tfor {\n\t\tvar r readBuf\n\t\tt, err := ci.cn.recvMessage(&r)\n\t\tif 
err != nil {\n\t\t\tci.setBad(driver.ErrBadConn)\n\t\t\tci.setError(err)\n\t\t\tci.done <- true\n\t\t\treturn\n\t\t}\n\t\tswitch t {\n\t\tcase 'C':\n\t\t\t// complete\n\t\t\tres, _ := ci.cn.parseComplete(r.string())\n\t\t\tci.setResult(res)\n\t\tcase 'N':\n\t\t\tif n := ci.cn.noticeHandler; n != nil {\n\t\t\t\tn(parseError(&r))\n\t\t\t}\n\t\tcase 'Z':\n\t\t\tci.cn.processReadyForQuery(&r)\n\t\t\tci.done <- true\n\t\t\treturn\n\t\tcase 'E':\n\t\t\terr := parseError(&r)\n\t\t\tci.setError(err)\n\t\tdefault:\n\t\t\tci.setBad(driver.ErrBadConn)\n\t\t\tci.setError(fmt.Errorf(\"unknown response during CopyIn: %q\", t))\n\t\t\tci.done <- true\n\t\t\treturn\n\t\t}\n\t}\n}\n\nfunc (ci *copyin) setBad(err error) {\n\tci.cn.err.set(err)\n}\n\nfunc (ci *copyin) getBad() error {\n\treturn ci.cn.err.get()\n}\n\nfunc (ci *copyin) err() error {\n\tci.mu.Lock()\n\terr := ci.mu.err\n\tci.mu.Unlock()\n\treturn err\n}\n\n// setError() sets ci.err if one has not been set already.  Caller must not be\n// holding ci.Mutex.\nfunc (ci *copyin) setError(err error) {\n\tci.mu.Lock()\n\tif ci.mu.err == nil {\n\t\tci.mu.err = err\n\t}\n\tci.mu.Unlock()\n}\n\nfunc (ci *copyin) setResult(result driver.Result) {\n\tci.mu.Lock()\n\tci.mu.Result = result\n\tci.mu.Unlock()\n}\n\nfunc (ci *copyin) getResult() driver.Result {\n\tci.mu.Lock()\n\tresult := ci.mu.Result\n\tci.mu.Unlock()\n\tif result == nil {\n\t\treturn driver.RowsAffected(0)\n\t}\n\treturn result\n}\n\nfunc (ci *copyin) NumInput() int {\n\treturn -1\n}\n\nfunc (ci *copyin) Query(v []driver.Value) (r driver.Rows, err error) {\n\treturn nil, ErrNotSupported\n}\n\n// Exec inserts values into the COPY stream. 
The insert is asynchronous\n// and Exec can return errors from previous Exec calls to the same\n// COPY stmt.\n//\n// You need to call Exec(nil) to sync the COPY stream and to get any\n// errors from pending data, since Stmt.Close() doesn't return errors\n// to the user.\nfunc (ci *copyin) Exec(v []driver.Value) (r driver.Result, err error) {\n\tif ci.closed {\n\t\treturn nil, errCopyInClosed\n\t}\n\n\tif err := ci.getBad(); err != nil {\n\t\treturn nil, err\n\t}\n\tdefer ci.cn.errRecover(&err)\n\n\tif err := ci.err(); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif len(v) == 0 {\n\t\tif err := ci.Close(); err != nil {\n\t\t\treturn driver.RowsAffected(0), err\n\t\t}\n\n\t\treturn ci.getResult(), nil\n\t}\n\n\tnumValues := len(v)\n\tfor i, value := range v {\n\t\tci.buffer = appendEncodedText(&ci.cn.parameterStatus, ci.buffer, value)\n\t\tif i < numValues-1 {\n\t\t\tci.buffer = append(ci.buffer, '\\t')\n\t\t}\n\t}\n\n\tci.buffer = append(ci.buffer, '\\n')\n\n\tif len(ci.buffer) > ciBufferFlushSize {\n\t\tci.flush(ci.buffer)\n\t\t// reset buffer, keep bytes for message identifier and length\n\t\tci.buffer = ci.buffer[:5]\n\t}\n\n\treturn driver.RowsAffected(0), nil\n}\n\n// CopyData inserts a raw string into the COPY stream. 
The insert is\n// asynchronous and CopyData can return errors from previous CopyData calls to\n// the same COPY stmt.\n//\n// You need to call Exec(nil) to sync the COPY stream and to get any\n// errors from pending data, since Stmt.Close() doesn't return errors\n// to the user.\nfunc (ci *copyin) CopyData(ctx context.Context, line string) (r driver.Result, err error) {\n\tif ci.closed {\n\t\treturn nil, errCopyInClosed\n\t}\n\n\tif finish := ci.cn.watchCancel(ctx); finish != nil {\n\t\tdefer finish()\n\t}\n\n\tif err := ci.getBad(); err != nil {\n\t\treturn nil, err\n\t}\n\tdefer ci.cn.errRecover(&err)\n\n\tif err := ci.err(); err != nil {\n\t\treturn nil, err\n\t}\n\n\tci.buffer = append(ci.buffer, []byte(line)...)\n\tci.buffer = append(ci.buffer, '\\n')\n\n\tif len(ci.buffer) > ciBufferFlushSize {\n\t\tci.flush(ci.buffer)\n\t\t// reset buffer, keep bytes for message identifier and length\n\t\tci.buffer = ci.buffer[:5]\n\t}\n\n\treturn driver.RowsAffected(0), nil\n}\n\nfunc (ci *copyin) Close() (err error) {\n\tif ci.closed { // Don't do anything, we're already closed\n\t\treturn nil\n\t}\n\tci.closed = true\n\n\tif err := ci.getBad(); err != nil {\n\t\treturn err\n\t}\n\tdefer ci.cn.errRecover(&err)\n\n\tif len(ci.buffer) > 0 {\n\t\tci.flush(ci.buffer)\n\t}\n\t// Avoid touching the scratch buffer as resploop could be using it.\n\terr = ci.cn.sendSimpleMessage('c')\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t<-ci.done\n\tci.cn.inCopy = false\n\n\tif err := ci.err(); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/doc.go",
    "content": "/*\nPackage pq is a pure Go Postgres driver for the database/sql package.\n\nIn most cases clients will use the database/sql package instead of\nusing this package directly. For example:\n\n\timport (\n\t\t\"database/sql\"\n\n\t\t_ \"github.com/lib/pq\"\n\t)\n\n\tfunc main() {\n\t\tconnStr := \"user=pqgotest dbname=pqgotest sslmode=verify-full\"\n\t\tdb, err := sql.Open(\"postgres\", connStr)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\n\t\tage := 21\n\t\trows, err := db.Query(\"SELECT name FROM users WHERE age = $1\", age)\n\t\t…\n\t}\n\nYou can also connect to a database using a URL. For example:\n\n\tconnStr := \"postgres://pqgotest:password@localhost/pqgotest?sslmode=verify-full\"\n\tdb, err := sql.Open(\"postgres\", connStr)\n\n\nConnection String Parameters\n\n\nSimilarly to libpq, when establishing a connection using pq you are expected to\nsupply a connection string containing zero or more parameters.\nA subset of the connection parameters supported by libpq are also supported by pq.\nAdditionally, pq also lets you specify run-time parameters (such as search_path or work_mem)\ndirectly in the connection string.  This is different from libpq, which does not allow\nrun-time parameters in the connection string, instead requiring you to supply\nthem in the options parameter.\n\nFor compatibility with libpq, the following special connection parameters are\nsupported:\n\n\t* dbname - The name of the database to connect to\n\t* user - The user to sign in as\n\t* password - The user's password\n\t* host - The host to connect to. Values that start with / are for unix\n\t  domain sockets. (default is localhost)\n\t* port - The port to bind to. (default is 5432)\n\t* sslmode - Whether or not to use SSL (default is require, this is not\n\t  the default for libpq)\n\t* fallback_application_name - An application_name to fall back to if one isn't provided.\n\t* connect_timeout - Maximum wait for connection, in seconds. 
Zero or\n\t  not specified means wait indefinitely.\n\t* sslcert - Cert file location. The file must contain PEM encoded data.\n\t* sslkey - Key file location. The file must contain PEM encoded data.\n\t* sslrootcert - The location of the root certificate file. The file\n\t  must contain PEM encoded data.\n\nValid values for sslmode are:\n\n\t* disable - No SSL\n\t* require - Always SSL (skip verification)\n\t* verify-ca - Always SSL (verify that the certificate presented by the\n\t  server was signed by a trusted CA)\n\t* verify-full - Always SSL (verify that the certificate presented by\n\t  the server was signed by a trusted CA and the server host name\n\t  matches the one in the certificate)\n\nSee http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING\nfor more information about connection string parameters.\n\nUse single quotes for values that contain whitespace:\n\n    \"user=pqgotest password='with spaces'\"\n\nA backslash will escape the next character in values:\n\n    \"user=space\\ man password='it\\'s valid'\"\n\nNote that the connection parameter client_encoding (which sets the\ntext encoding for the connection) may be set but must be \"UTF8\",\nmatching with the same rules as Postgres. It is an error to provide\nany other value.\n\nIn addition to the parameters listed above, any run-time parameter that can be\nset at backend start time can be set in the connection string.  For more\ninformation, see\nhttp://www.postgresql.org/docs/current/static/runtime-config.html.\n\nMost environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html\nsupported by libpq are also supported by pq.  If any of the environment\nvariables not supported by pq are set, pq will panic during connection\nestablishment.  
Environment variables have a lower precedence than explicitly\nprovided connection parameters.\n\nThe pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html\nis supported, but on Windows PGPASSFILE must be specified explicitly.\n\n\nQueries\n\n\ndatabase/sql does not dictate any specific format for parameter\nmarkers in query strings, and pq uses the Postgres-native ordinal markers,\nas shown above. The same marker can be reused for the same parameter:\n\n\trows, err := db.Query(`SELECT name FROM users WHERE favorite_fruit = $1\n\t\tOR age BETWEEN $2 AND $2 + 3`, \"orange\", 64)\n\npq does not support the LastInsertId() method of the Result type in database/sql.\nTo return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres\nRETURNING clause with a standard Query or QueryRow call:\n\n\tvar userid int\n\terr := db.QueryRow(`INSERT INTO users(name, favorite_fruit, age)\n\t\tVALUES('beatrice', 'starfruit', 93) RETURNING id`).Scan(&userid)\n\nFor more details on RETURNING, see the Postgres documentation:\n\n\thttp://www.postgresql.org/docs/current/static/sql-insert.html\n\thttp://www.postgresql.org/docs/current/static/sql-update.html\n\thttp://www.postgresql.org/docs/current/static/sql-delete.html\n\nFor additional instructions on querying see the documentation for the database/sql package.\n\n\nData Types\n\n\nParameters pass through driver.DefaultParameterConverter before they are handled\nby this package. 
When the binary_parameters connection option is enabled,\n[]byte values are sent directly to the backend as data in binary format.\n\nThis package returns the following types for values from the PostgreSQL backend:\n\n\t- integer types smallint, integer, and bigint are returned as int64\n\t- floating-point types real and double precision are returned as float64\n\t- character types char, varchar, and text are returned as string\n\t- temporal types date, time, timetz, timestamp, and timestamptz are\n\t  returned as time.Time\n\t- the boolean type is returned as bool\n\t- the bytea type is returned as []byte\n\nAll other types are returned directly from the backend as []byte values in text format.\n\n\nErrors\n\n\npq may return errors of type *pq.Error which can be interrogated for error details:\n\n        if err, ok := err.(*pq.Error); ok {\n            fmt.Println(\"pq error:\", err.Code.Name())\n        }\n\nSee the pq.Error type for details.\n\n\nBulk imports\n\nYou can perform bulk imports by preparing a statement returned by pq.CopyIn (or\npq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement\nhandle can then be repeatedly \"executed\" to copy data into the target table.\nAfter all data has been processed you should call Exec() once with no arguments\nto flush all buffered data. Any call to Exec() might return an error which\nshould be handled appropriately, but because of the internal buffering an error\nreturned by Exec() might not be related to the data passed in the call that\nfailed.\n\nCopyIn uses COPY FROM internally. 
It is not possible to COPY outside of an\nexplicit transaction in pq.\n\nUsage example:\n\n\ttxn, err := db.Begin()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tstmt, err := txn.Prepare(pq.CopyIn(\"users\", \"name\", \"age\"))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tfor _, user := range users {\n\t\t_, err = stmt.Exec(user.Name, int64(user.Age))\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t}\n\n\t_, err = stmt.Exec()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\terr = stmt.Close()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\terr = txn.Commit()\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\nNotifications\n\n\nPostgreSQL supports a simple publish/subscribe model over database\nconnections.  See http://www.postgresql.org/docs/current/static/sql-notify.html\nfor more information about the general mechanism.\n\nTo start listening for notifications, you first have to open a new connection\nto the database by calling NewListener.  This connection can not be used for\nanything other than LISTEN / NOTIFY.  Calling Listen will open a \"notification\nchannel\"; once a notification channel is open, a notification generated on that\nchannel will effect a send on the Listener.Notify channel.  A notification\nchannel will remain open until Unlisten is called, though connection loss might\nresult in some notifications being lost.  To solve this problem, Listener sends\na nil pointer over the Notify channel any time the connection is re-established\nfollowing a connection loss.  The application can get information about the\nstate of the underlying connection by setting an event callback in the call to\nNewListener.\n\nA single Listener can safely be used from concurrent goroutines, which means\nthat there is often no need to create more than one Listener in your\napplication.  
However, a Listener is always connected to a single database, so\nyou will need to create a new Listener instance for every database you want to\nreceive notifications in.\n\nThe channel name in both Listen and Unlisten is case sensitive, and can contain\nany characters legal in an identifier (see\nhttp://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS\nfor more information).  Note that the channel name will be truncated to 63\nbytes by the PostgreSQL server.\n\nYou can find a complete, working example of Listener usage at\nhttps://godoc.org/github.com/lib/pq/example/listen.\n\n\nKerberos Support\n\n\nIf you need support for Kerberos authentication, add the following to your main\npackage:\n\n\timport \"github.com/lib/pq/auth/kerberos\"\n\n\tfunc init() {\n\t\tpq.RegisterGSSProvider(func() (pq.Gss, error) { return kerberos.NewGSS() })\n\t}\n\nThis package is in a separate module so that users who don't need Kerberos\ndon't have to download unnecessary dependencies.\n\nWhen imported, additional connection string parameters are supported:\n\n\t* krbsrvname - GSS (Kerberos) service name when constructing the\n\t  SPN (default is `postgres`). This will be combined with the host\n\t  to form the full SPN: `krbsrvname/host`.\n\t* krbspn - GSS (Kerberos) SPN. This takes priority over\n\t  `krbsrvname` if present.\n*/\npackage pq\n"
  },
  {
    "path": "vendor/github.com/lib/pq/encode.go",
    "content": "package pq\n\nimport (\n\t\"bytes\"\n\t\"database/sql/driver\"\n\t\"encoding/binary\"\n\t\"encoding/hex\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/lib/pq/oid\"\n)\n\nvar time2400Regex = regexp.MustCompile(`^(24:00(?::00(?:\\.0+)?)?)(?:[Z+-].*)?$`)\n\nfunc binaryEncode(parameterStatus *parameterStatus, x interface{}) []byte {\n\tswitch v := x.(type) {\n\tcase []byte:\n\t\treturn v\n\tdefault:\n\t\treturn encode(parameterStatus, x, oid.T_unknown)\n\t}\n}\n\nfunc encode(parameterStatus *parameterStatus, x interface{}, pgtypOid oid.Oid) []byte {\n\tswitch v := x.(type) {\n\tcase int64:\n\t\treturn strconv.AppendInt(nil, v, 10)\n\tcase float64:\n\t\treturn strconv.AppendFloat(nil, v, 'f', -1, 64)\n\tcase []byte:\n\t\tif pgtypOid == oid.T_bytea {\n\t\t\treturn encodeBytea(parameterStatus.serverVersion, v)\n\t\t}\n\n\t\treturn v\n\tcase string:\n\t\tif pgtypOid == oid.T_bytea {\n\t\t\treturn encodeBytea(parameterStatus.serverVersion, []byte(v))\n\t\t}\n\n\t\treturn []byte(v)\n\tcase bool:\n\t\treturn strconv.AppendBool(nil, v)\n\tcase time.Time:\n\t\treturn formatTs(v)\n\n\tdefault:\n\t\terrorf(\"encode: unknown type for %T\", v)\n\t}\n\n\tpanic(\"not reached\")\n}\n\nfunc decode(parameterStatus *parameterStatus, s []byte, typ oid.Oid, f format) interface{} {\n\tswitch f {\n\tcase formatBinary:\n\t\treturn binaryDecode(parameterStatus, s, typ)\n\tcase formatText:\n\t\treturn textDecode(parameterStatus, s, typ)\n\tdefault:\n\t\tpanic(\"not reached\")\n\t}\n}\n\nfunc binaryDecode(parameterStatus *parameterStatus, s []byte, typ oid.Oid) interface{} {\n\tswitch typ {\n\tcase oid.T_bytea:\n\t\treturn s\n\tcase oid.T_int8:\n\t\treturn int64(binary.BigEndian.Uint64(s))\n\tcase oid.T_int4:\n\t\treturn int64(int32(binary.BigEndian.Uint32(s)))\n\tcase oid.T_int2:\n\t\treturn int64(int16(binary.BigEndian.Uint16(s)))\n\tcase oid.T_uuid:\n\t\tb, err := decodeUUIDBinary(s)\n\t\tif err != 
nil {\n\t\t\tpanic(err)\n\t\t}\n\t\treturn b\n\n\tdefault:\n\t\terrorf(\"don't know how to decode binary parameter of type %d\", uint32(typ))\n\t}\n\n\tpanic(\"not reached\")\n}\n\nfunc textDecode(parameterStatus *parameterStatus, s []byte, typ oid.Oid) interface{} {\n\tswitch typ {\n\tcase oid.T_char, oid.T_varchar, oid.T_text:\n\t\treturn string(s)\n\tcase oid.T_bytea:\n\t\tb, err := parseBytea(s)\n\t\tif err != nil {\n\t\t\terrorf(\"%s\", err)\n\t\t}\n\t\treturn b\n\tcase oid.T_timestamptz:\n\t\treturn parseTs(parameterStatus.currentLocation, string(s))\n\tcase oid.T_timestamp, oid.T_date:\n\t\treturn parseTs(nil, string(s))\n\tcase oid.T_time:\n\t\treturn mustParse(\"15:04:05\", typ, s)\n\tcase oid.T_timetz:\n\t\treturn mustParse(\"15:04:05-07\", typ, s)\n\tcase oid.T_bool:\n\t\treturn s[0] == 't'\n\tcase oid.T_int8, oid.T_int4, oid.T_int2:\n\t\ti, err := strconv.ParseInt(string(s), 10, 64)\n\t\tif err != nil {\n\t\t\terrorf(\"%s\", err)\n\t\t}\n\t\treturn i\n\tcase oid.T_float4, oid.T_float8:\n\t\t// We always use 64 bit parsing, regardless of whether the input text is for\n\t\t// a float4 or float8, because clients expect float64s for all float datatypes\n\t\t// and returning a 32-bit parsed float64 produces lossy results.\n\t\tf, err := strconv.ParseFloat(string(s), 64)\n\t\tif err != nil {\n\t\t\terrorf(\"%s\", err)\n\t\t}\n\t\treturn f\n\t}\n\n\treturn s\n}\n\n// appendEncodedText encodes item in text format as required by COPY\n// and appends to buf\nfunc appendEncodedText(parameterStatus *parameterStatus, buf []byte, x interface{}) []byte {\n\tswitch v := x.(type) {\n\tcase int64:\n\t\treturn strconv.AppendInt(buf, v, 10)\n\tcase float64:\n\t\treturn strconv.AppendFloat(buf, v, 'f', -1, 64)\n\tcase []byte:\n\t\tencodedBytea := encodeBytea(parameterStatus.serverVersion, v)\n\t\treturn appendEscapedText(buf, string(encodedBytea))\n\tcase string:\n\t\treturn appendEscapedText(buf, v)\n\tcase bool:\n\t\treturn strconv.AppendBool(buf, v)\n\tcase 
time.Time:\n\t\treturn append(buf, formatTs(v)...)\n\tcase nil:\n\t\treturn append(buf, \"\\\\N\"...)\n\tdefault:\n\t\terrorf(\"encode: unknown type for %T\", v)\n\t}\n\n\tpanic(\"not reached\")\n}\n\nfunc appendEscapedText(buf []byte, text string) []byte {\n\tescapeNeeded := false\n\tstartPos := 0\n\tvar c byte\n\n\t// check if we need to escape\n\tfor i := 0; i < len(text); i++ {\n\t\tc = text[i]\n\t\tif c == '\\\\' || c == '\\n' || c == '\\r' || c == '\\t' {\n\t\t\tescapeNeeded = true\n\t\t\tstartPos = i\n\t\t\tbreak\n\t\t}\n\t}\n\tif !escapeNeeded {\n\t\treturn append(buf, text...)\n\t}\n\n\t// copy till first char to escape, iterate the rest\n\tresult := append(buf, text[:startPos]...)\n\tfor i := startPos; i < len(text); i++ {\n\t\tc = text[i]\n\t\tswitch c {\n\t\tcase '\\\\':\n\t\t\tresult = append(result, '\\\\', '\\\\')\n\t\tcase '\\n':\n\t\t\tresult = append(result, '\\\\', 'n')\n\t\tcase '\\r':\n\t\t\tresult = append(result, '\\\\', 'r')\n\t\tcase '\\t':\n\t\t\tresult = append(result, '\\\\', 't')\n\t\tdefault:\n\t\t\tresult = append(result, c)\n\t\t}\n\t}\n\treturn result\n}\n\nfunc mustParse(f string, typ oid.Oid, s []byte) time.Time {\n\tstr := string(s)\n\n\t// Check for a minute and second offset in the timezone.\n\tif typ == oid.T_timestamptz || typ == oid.T_timetz {\n\t\tfor i := 3; i <= 6; i += 3 {\n\t\t\tif str[len(str)-i] == ':' {\n\t\t\t\tf += \":00\"\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t}\n\n\t// Special case for 24:00 time.\n\t// Unfortunately, golang does not parse 24:00 as a proper time.\n\t// In this case, we want to try \"round to the next day\", to differentiate.\n\t// As such, we find if the 24:00 time matches at the beginning; if so,\n\t// we default it back to 00:00 but add a day later.\n\tvar is2400Time bool\n\tswitch typ {\n\tcase oid.T_timetz, oid.T_time:\n\t\tif matches := time2400Regex.FindStringSubmatch(str); matches != nil {\n\t\t\t// Concatenate timezone information at the back.\n\t\t\tstr = \"00:00:00\" + 
str[len(matches[1]):]\n\t\t\tis2400Time = true\n\t\t}\n\t}\n\tt, err := time.Parse(f, str)\n\tif err != nil {\n\t\terrorf(\"decode: %s\", err)\n\t}\n\tif is2400Time {\n\t\tt = t.Add(24 * time.Hour)\n\t}\n\treturn t\n}\n\nvar errInvalidTimestamp = errors.New(\"invalid timestamp\")\n\ntype timestampParser struct {\n\terr error\n}\n\nfunc (p *timestampParser) expect(str string, char byte, pos int) {\n\tif p.err != nil {\n\t\treturn\n\t}\n\tif pos+1 > len(str) {\n\t\tp.err = errInvalidTimestamp\n\t\treturn\n\t}\n\tif c := str[pos]; c != char && p.err == nil {\n\t\tp.err = fmt.Errorf(\"expected '%v' at position %v; got '%v'\", char, pos, c)\n\t}\n}\n\nfunc (p *timestampParser) mustAtoi(str string, begin int, end int) int {\n\tif p.err != nil {\n\t\treturn 0\n\t}\n\tif begin < 0 || end < 0 || begin > end || end > len(str) {\n\t\tp.err = errInvalidTimestamp\n\t\treturn 0\n\t}\n\tresult, err := strconv.Atoi(str[begin:end])\n\tif err != nil {\n\t\tif p.err == nil {\n\t\t\tp.err = fmt.Errorf(\"expected number; got '%v'\", str)\n\t\t}\n\t\treturn 0\n\t}\n\treturn result\n}\n\n// The location cache caches the time zones typically used by the client.\ntype locationCache struct {\n\tcache map[int]*time.Location\n\tlock  sync.Mutex\n}\n\n// All connections share the same list of timezones. 
Benchmarking shows that\n// about 5% speed could be gained by putting the cache in the connection and\n// losing the mutex, at the cost of a small amount of memory and a somewhat\n// significant increase in code complexity.\nvar globalLocationCache = newLocationCache()\n\nfunc newLocationCache() *locationCache {\n\treturn &locationCache{cache: make(map[int]*time.Location)}\n}\n\n// Returns the cached timezone for the specified offset, creating and caching\n// it if necessary.\nfunc (c *locationCache) getLocation(offset int) *time.Location {\n\tc.lock.Lock()\n\tdefer c.lock.Unlock()\n\n\tlocation, ok := c.cache[offset]\n\tif !ok {\n\t\tlocation = time.FixedZone(\"\", offset)\n\t\tc.cache[offset] = location\n\t}\n\n\treturn location\n}\n\nvar infinityTsEnabled = false\nvar infinityTsNegative time.Time\nvar infinityTsPositive time.Time\n\nconst (\n\tinfinityTsEnabledAlready        = \"pq: infinity timestamp enabled already\"\n\tinfinityTsNegativeMustBeSmaller = \"pq: infinity timestamp: negative value must be smaller (before) than positive\"\n)\n\n// EnableInfinityTs controls the handling of Postgres' \"-infinity\" and\n// \"infinity\" \"timestamp\"s.\n//\n// If EnableInfinityTs is not called, \"-infinity\" and \"infinity\" will return\n// []byte(\"-infinity\") and []byte(\"infinity\") respectively, and potentially\n// cause error \"sql: Scan error on column index 0: unsupported driver -> Scan\n// pair: []uint8 -> *time.Time\", when scanning into a time.Time value.\n//\n// Once EnableInfinityTs has been called, all connections created using this\n// driver will decode Postgres' \"-infinity\" and \"infinity\" for \"timestamp\",\n// \"timestamp with time zone\" and \"date\" types to the predefined minimum and\n// maximum times, respectively.  When encoding time.Time values, any time which\n// equals or precedes the predefined minimum time will be encoded to\n// \"-infinity\".  
Any values at or past the maximum time will similarly be\n// encoded to \"infinity\".\n//\n// If EnableInfinityTs is called with negative >= positive, it will panic.\n// Calling EnableInfinityTs after a connection has been established results in\n// undefined behavior.  If EnableInfinityTs is called more than once, it will\n// panic.\nfunc EnableInfinityTs(negative time.Time, positive time.Time) {\n\tif infinityTsEnabled {\n\t\tpanic(infinityTsEnabledAlready)\n\t}\n\tif !negative.Before(positive) {\n\t\tpanic(infinityTsNegativeMustBeSmaller)\n\t}\n\tinfinityTsEnabled = true\n\tinfinityTsNegative = negative\n\tinfinityTsPositive = positive\n}\n\n/*\n * Testing might want to toggle infinityTsEnabled\n */\nfunc disableInfinityTs() {\n\tinfinityTsEnabled = false\n}\n\n// This is a time function specific to the Postgres default DateStyle\n// setting (\"ISO, MDY\"), the only one we currently support. This\n// accounts for the discrepancies between the parsing available with\n// time.Parse and the Postgres date formatting quirks.\nfunc parseTs(currentLocation *time.Location, str string) interface{} {\n\tswitch str {\n\tcase \"-infinity\":\n\t\tif infinityTsEnabled {\n\t\t\treturn infinityTsNegative\n\t\t}\n\t\treturn []byte(str)\n\tcase \"infinity\":\n\t\tif infinityTsEnabled {\n\t\t\treturn infinityTsPositive\n\t\t}\n\t\treturn []byte(str)\n\t}\n\tt, err := ParseTimestamp(currentLocation, str)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn t\n}\n\n// ParseTimestamp parses Postgres' text format. It returns a time.Time in\n// currentLocation iff that time's offset agrees with the offset sent from the\n// Postgres server. 
Otherwise, ParseTimestamp returns a time.Time with the\n// fixed offset provided by the Postgres server.\nfunc ParseTimestamp(currentLocation *time.Location, str string) (time.Time, error) {\n\tp := timestampParser{}\n\n\tmonSep := strings.IndexRune(str, '-')\n\t// this is Gregorian year, not ISO Year\n\t// In Gregorian system, the year 1 BC is followed by AD 1\n\tyear := p.mustAtoi(str, 0, monSep)\n\tdaySep := monSep + 3\n\tmonth := p.mustAtoi(str, monSep+1, daySep)\n\tp.expect(str, '-', daySep)\n\ttimeSep := daySep + 3\n\tday := p.mustAtoi(str, daySep+1, timeSep)\n\n\tminLen := monSep + len(\"01-01\") + 1\n\n\tisBC := strings.HasSuffix(str, \" BC\")\n\tif isBC {\n\t\tminLen += 3\n\t}\n\n\tvar hour, minute, second int\n\tif len(str) > minLen {\n\t\tp.expect(str, ' ', timeSep)\n\t\tminSep := timeSep + 3\n\t\tp.expect(str, ':', minSep)\n\t\thour = p.mustAtoi(str, timeSep+1, minSep)\n\t\tsecSep := minSep + 3\n\t\tp.expect(str, ':', secSep)\n\t\tminute = p.mustAtoi(str, minSep+1, secSep)\n\t\tsecEnd := secSep + 3\n\t\tsecond = p.mustAtoi(str, secSep+1, secEnd)\n\t}\n\tremainderIdx := monSep + len(\"01-01 00:00:00\") + 1\n\t// Three optional (but ordered) sections follow: the\n\t// fractional seconds, the time zone offset, and the BC\n\t// designation. We set them up here and adjust the other\n\t// offsets if the preceding sections exist.\n\n\tnanoSec := 0\n\ttzOff := 0\n\n\tif remainderIdx < len(str) && str[remainderIdx] == '.' 
{\n\t\tfracStart := remainderIdx + 1\n\t\tfracOff := strings.IndexAny(str[fracStart:], \"-+Z \")\n\t\tif fracOff < 0 {\n\t\t\tfracOff = len(str) - fracStart\n\t\t}\n\t\tfracSec := p.mustAtoi(str, fracStart, fracStart+fracOff)\n\t\tnanoSec = fracSec * (1000000000 / int(math.Pow(10, float64(fracOff))))\n\n\t\tremainderIdx += fracOff + 1\n\t}\n\tif tzStart := remainderIdx; tzStart < len(str) && (str[tzStart] == '-' || str[tzStart] == '+') {\n\t\t// time zone separator is always '-' or '+' or 'Z' (UTC is +00)\n\t\tvar tzSign int\n\t\tswitch c := str[tzStart]; c {\n\t\tcase '-':\n\t\t\ttzSign = -1\n\t\tcase '+':\n\t\t\ttzSign = +1\n\t\tdefault:\n\t\t\treturn time.Time{}, fmt.Errorf(\"expected '-' or '+' at position %v; got %v\", tzStart, c)\n\t\t}\n\t\ttzHours := p.mustAtoi(str, tzStart+1, tzStart+3)\n\t\tremainderIdx += 3\n\t\tvar tzMin, tzSec int\n\t\tif remainderIdx < len(str) && str[remainderIdx] == ':' {\n\t\t\ttzMin = p.mustAtoi(str, remainderIdx+1, remainderIdx+3)\n\t\t\tremainderIdx += 3\n\t\t}\n\t\tif remainderIdx < len(str) && str[remainderIdx] == ':' {\n\t\t\ttzSec = p.mustAtoi(str, remainderIdx+1, remainderIdx+3)\n\t\t\tremainderIdx += 3\n\t\t}\n\t\ttzOff = tzSign * ((tzHours * 60 * 60) + (tzMin * 60) + tzSec)\n\t} else if tzStart < len(str) && str[tzStart] == 'Z' {\n\t\t// time zone Z separator indicates UTC is +00\n\t\tremainderIdx += 1\n\t}\n\n\tvar isoYear int\n\n\tif isBC {\n\t\tisoYear = 1 - year\n\t\tremainderIdx += 3\n\t} else {\n\t\tisoYear = year\n\t}\n\tif remainderIdx < len(str) {\n\t\treturn time.Time{}, fmt.Errorf(\"expected end of input, got %v\", str[remainderIdx:])\n\t}\n\tt := time.Date(isoYear, time.Month(month), day,\n\t\thour, minute, second, nanoSec,\n\t\tglobalLocationCache.getLocation(tzOff))\n\n\tif currentLocation != nil {\n\t\t// Set the location of the returned Time based on the session's\n\t\t// TimeZone value, but only if the local time zone database agrees with\n\t\t// the remote database on the offset.\n\t\tlt := 
t.In(currentLocation)\n\t\t_, newOff := lt.Zone()\n\t\tif newOff == tzOff {\n\t\t\tt = lt\n\t\t}\n\t}\n\n\treturn t, p.err\n}\n\n// formatTs formats t into a format postgres understands.\nfunc formatTs(t time.Time) []byte {\n\tif infinityTsEnabled {\n\t\t// t <= -infinity : ! (t > -infinity)\n\t\tif !t.After(infinityTsNegative) {\n\t\t\treturn []byte(\"-infinity\")\n\t\t}\n\t\t// t >= infinity : ! (!t < infinity)\n\t\tif !t.Before(infinityTsPositive) {\n\t\t\treturn []byte(\"infinity\")\n\t\t}\n\t}\n\treturn FormatTimestamp(t)\n}\n\n// FormatTimestamp formats t into Postgres' text format for timestamps.\nfunc FormatTimestamp(t time.Time) []byte {\n\t// Need to send dates before 0001 A.D. with \" BC\" suffix, instead of the\n\t// minus sign preferred by Go.\n\t// Beware, \"0000\" in ISO is \"1 BC\", \"-0001\" is \"2 BC\" and so on\n\tbc := false\n\tif t.Year() <= 0 {\n\t\t// flip year sign, and add 1, e.g: \"0\" will be \"1\", and \"-10\" will be \"11\"\n\t\tt = t.AddDate((-t.Year())*2+1, 0, 0)\n\t\tbc = true\n\t}\n\tb := []byte(t.Format(\"2006-01-02 15:04:05.999999999Z07:00\"))\n\n\t_, offset := t.Zone()\n\toffset %= 60\n\tif offset != 0 {\n\t\t// RFC3339Nano already printed the minus sign\n\t\tif offset < 0 {\n\t\t\toffset = -offset\n\t\t}\n\n\t\tb = append(b, ':')\n\t\tif offset < 10 {\n\t\t\tb = append(b, '0')\n\t\t}\n\t\tb = strconv.AppendInt(b, int64(offset), 10)\n\t}\n\n\tif bc {\n\t\tb = append(b, \" BC\"...)\n\t}\n\treturn b\n}\n\n// Parse a bytea value received from the server.  
Both \"hex\" and the legacy\n// \"escape\" format are supported.\nfunc parseBytea(s []byte) (result []byte, err error) {\n\tif len(s) >= 2 && bytes.Equal(s[:2], []byte(\"\\\\x\")) {\n\t\t// bytea_output = hex\n\t\ts = s[2:] // trim off leading \"\\\\x\"\n\t\tresult = make([]byte, hex.DecodedLen(len(s)))\n\t\t_, err := hex.Decode(result, s)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else {\n\t\t// bytea_output = escape\n\t\tfor len(s) > 0 {\n\t\t\tif s[0] == '\\\\' {\n\t\t\t\t// escaped '\\\\'\n\t\t\t\tif len(s) >= 2 && s[1] == '\\\\' {\n\t\t\t\t\tresult = append(result, '\\\\')\n\t\t\t\t\ts = s[2:]\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\t// '\\\\' followed by an octal number\n\t\t\t\tif len(s) < 4 {\n\t\t\t\t\treturn nil, fmt.Errorf(\"invalid bytea sequence %v\", s)\n\t\t\t\t}\n\t\t\t\tr, err := strconv.ParseUint(string(s[1:4]), 8, 8)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, fmt.Errorf(\"could not parse bytea value: %s\", err.Error())\n\t\t\t\t}\n\t\t\t\tresult = append(result, byte(r))\n\t\t\t\ts = s[4:]\n\t\t\t} else {\n\t\t\t\t// We hit an unescaped, raw byte.  Try to read in as many as\n\t\t\t\t// possible in one go.\n\t\t\t\ti := bytes.IndexByte(s, '\\\\')\n\t\t\t\tif i == -1 {\n\t\t\t\t\tresult = append(result, s...)\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tresult = append(result, s[:i]...)\n\t\t\t\ts = s[i:]\n\t\t\t}\n\t\t}\n\t}\n\n\treturn result, nil\n}\n\nfunc encodeBytea(serverVersion int, v []byte) (result []byte) {\n\tif serverVersion >= 90000 {\n\t\t// Use the hex format if we know that the server supports it\n\t\tresult = make([]byte, 2+hex.EncodedLen(len(v)))\n\t\tresult[0] = '\\\\'\n\t\tresult[1] = 'x'\n\t\thex.Encode(result[2:], v)\n\t} else {\n\t\t// .. 
or resort to \"escape\"\n\t\tfor _, b := range v {\n\t\t\tif b == '\\\\' {\n\t\t\t\tresult = append(result, '\\\\', '\\\\')\n\t\t\t} else if b < 0x20 || b > 0x7e {\n\t\t\t\tresult = append(result, []byte(fmt.Sprintf(\"\\\\%03o\", b))...)\n\t\t\t} else {\n\t\t\t\tresult = append(result, b)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn result\n}\n\n// NullTime represents a time.Time that may be null. NullTime implements the\n// sql.Scanner interface so it can be used as a scan destination, similar to\n// sql.NullString.\ntype NullTime struct {\n\tTime  time.Time\n\tValid bool // Valid is true if Time is not NULL\n}\n\n// Scan implements the Scanner interface.\nfunc (nt *NullTime) Scan(value interface{}) error {\n\tnt.Time, nt.Valid = value.(time.Time)\n\treturn nil\n}\n\n// Value implements the driver Valuer interface.\nfunc (nt NullTime) Value() (driver.Value, error) {\n\tif !nt.Valid {\n\t\treturn nil, nil\n\t}\n\treturn nt.Time, nil\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/error.go",
    "content": "package pq\n\nimport (\n\t\"database/sql/driver\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"runtime\"\n)\n\n// Error severities\nconst (\n\tEfatal   = \"FATAL\"\n\tEpanic   = \"PANIC\"\n\tEwarning = \"WARNING\"\n\tEnotice  = \"NOTICE\"\n\tEdebug   = \"DEBUG\"\n\tEinfo    = \"INFO\"\n\tElog     = \"LOG\"\n)\n\n// Error represents an error communicating with the server.\n//\n// See http://www.postgresql.org/docs/current/static/protocol-error-fields.html for details of the fields\ntype Error struct {\n\tSeverity         string\n\tCode             ErrorCode\n\tMessage          string\n\tDetail           string\n\tHint             string\n\tPosition         string\n\tInternalPosition string\n\tInternalQuery    string\n\tWhere            string\n\tSchema           string\n\tTable            string\n\tColumn           string\n\tDataTypeName     string\n\tConstraint       string\n\tFile             string\n\tLine             string\n\tRoutine          string\n}\n\n// ErrorCode is a five-character error code.\ntype ErrorCode string\n\n// Name returns a more human friendly rendering of the error code, namely the\n// \"condition name\".\n//\n// See http://www.postgresql.org/docs/9.3/static/errcodes-appendix.html for\n// details.\nfunc (ec ErrorCode) Name() string {\n\treturn errorCodeNames[ec]\n}\n\n// ErrorClass is only the class part of an error code.\ntype ErrorClass string\n\n// Name returns the condition name of an error class.  It is equivalent to the\n// condition name of the \"standard\" error code (i.e. the one having the last\n// three characters \"000\").\nfunc (ec ErrorClass) Name() string {\n\treturn errorCodeNames[ErrorCode(ec+\"000\")]\n}\n\n// Class returns the error class, e.g. 
\"28\".\n//\n// See http://www.postgresql.org/docs/9.3/static/errcodes-appendix.html for\n// details.\nfunc (ec ErrorCode) Class() ErrorClass {\n\treturn ErrorClass(ec[0:2])\n}\n\n// errorCodeNames is a mapping between the five-character error codes and the\n// human readable \"condition names\". It is derived from the list at\n// http://www.postgresql.org/docs/9.3/static/errcodes-appendix.html\nvar errorCodeNames = map[ErrorCode]string{\n\t// Class 00 - Successful Completion\n\t\"00000\": \"successful_completion\",\n\t// Class 01 - Warning\n\t\"01000\": \"warning\",\n\t\"0100C\": \"dynamic_result_sets_returned\",\n\t\"01008\": \"implicit_zero_bit_padding\",\n\t\"01003\": \"null_value_eliminated_in_set_function\",\n\t\"01007\": \"privilege_not_granted\",\n\t\"01006\": \"privilege_not_revoked\",\n\t\"01004\": \"string_data_right_truncation\",\n\t\"01P01\": \"deprecated_feature\",\n\t// Class 02 - No Data (this is also a warning class per the SQL standard)\n\t\"02000\": \"no_data\",\n\t\"02001\": \"no_additional_dynamic_result_sets_returned\",\n\t// Class 03 - SQL Statement Not Yet Complete\n\t\"03000\": \"sql_statement_not_yet_complete\",\n\t// Class 08 - Connection Exception\n\t\"08000\": \"connection_exception\",\n\t\"08003\": \"connection_does_not_exist\",\n\t\"08006\": \"connection_failure\",\n\t\"08001\": \"sqlclient_unable_to_establish_sqlconnection\",\n\t\"08004\": \"sqlserver_rejected_establishment_of_sqlconnection\",\n\t\"08007\": \"transaction_resolution_unknown\",\n\t\"08P01\": \"protocol_violation\",\n\t// Class 09 - Triggered Action Exception\n\t\"09000\": \"triggered_action_exception\",\n\t// Class 0A - Feature Not Supported\n\t\"0A000\": \"feature_not_supported\",\n\t// Class 0B - Invalid Transaction Initiation\n\t\"0B000\": \"invalid_transaction_initiation\",\n\t// Class 0F - Locator Exception\n\t\"0F000\": \"locator_exception\",\n\t\"0F001\": \"invalid_locator_specification\",\n\t// Class 0L - Invalid Grantor\n\t\"0L000\": 
\"invalid_grantor\",\n\t\"0LP01\": \"invalid_grant_operation\",\n\t// Class 0P - Invalid Role Specification\n\t\"0P000\": \"invalid_role_specification\",\n\t// Class 0Z - Diagnostics Exception\n\t\"0Z000\": \"diagnostics_exception\",\n\t\"0Z002\": \"stacked_diagnostics_accessed_without_active_handler\",\n\t// Class 20 - Case Not Found\n\t\"20000\": \"case_not_found\",\n\t// Class 21 - Cardinality Violation\n\t\"21000\": \"cardinality_violation\",\n\t// Class 22 - Data Exception\n\t\"22000\": \"data_exception\",\n\t\"2202E\": \"array_subscript_error\",\n\t\"22021\": \"character_not_in_repertoire\",\n\t\"22008\": \"datetime_field_overflow\",\n\t\"22012\": \"division_by_zero\",\n\t\"22005\": \"error_in_assignment\",\n\t\"2200B\": \"escape_character_conflict\",\n\t\"22022\": \"indicator_overflow\",\n\t\"22015\": \"interval_field_overflow\",\n\t\"2201E\": \"invalid_argument_for_logarithm\",\n\t\"22014\": \"invalid_argument_for_ntile_function\",\n\t\"22016\": \"invalid_argument_for_nth_value_function\",\n\t\"2201F\": \"invalid_argument_for_power_function\",\n\t\"2201G\": \"invalid_argument_for_width_bucket_function\",\n\t\"22018\": \"invalid_character_value_for_cast\",\n\t\"22007\": \"invalid_datetime_format\",\n\t\"22019\": \"invalid_escape_character\",\n\t\"2200D\": \"invalid_escape_octet\",\n\t\"22025\": \"invalid_escape_sequence\",\n\t\"22P06\": \"nonstandard_use_of_escape_character\",\n\t\"22010\": \"invalid_indicator_parameter_value\",\n\t\"22023\": \"invalid_parameter_value\",\n\t\"2201B\": \"invalid_regular_expression\",\n\t\"2201W\": \"invalid_row_count_in_limit_clause\",\n\t\"2201X\": \"invalid_row_count_in_result_offset_clause\",\n\t\"22009\": \"invalid_time_zone_displacement_value\",\n\t\"2200C\": \"invalid_use_of_escape_character\",\n\t\"2200G\": \"most_specific_type_mismatch\",\n\t\"22004\": \"null_value_not_allowed\",\n\t\"22002\": \"null_value_no_indicator_parameter\",\n\t\"22003\": \"numeric_value_out_of_range\",\n\t\"2200H\": 
\"sequence_generator_limit_exceeded\",\n\t\"22026\": \"string_data_length_mismatch\",\n\t\"22001\": \"string_data_right_truncation\",\n\t\"22011\": \"substring_error\",\n\t\"22027\": \"trim_error\",\n\t\"22024\": \"unterminated_c_string\",\n\t\"2200F\": \"zero_length_character_string\",\n\t\"22P01\": \"floating_point_exception\",\n\t\"22P02\": \"invalid_text_representation\",\n\t\"22P03\": \"invalid_binary_representation\",\n\t\"22P04\": \"bad_copy_file_format\",\n\t\"22P05\": \"untranslatable_character\",\n\t\"2200L\": \"not_an_xml_document\",\n\t\"2200M\": \"invalid_xml_document\",\n\t\"2200N\": \"invalid_xml_content\",\n\t\"2200S\": \"invalid_xml_comment\",\n\t\"2200T\": \"invalid_xml_processing_instruction\",\n\t// Class 23 - Integrity Constraint Violation\n\t\"23000\": \"integrity_constraint_violation\",\n\t\"23001\": \"restrict_violation\",\n\t\"23502\": \"not_null_violation\",\n\t\"23503\": \"foreign_key_violation\",\n\t\"23505\": \"unique_violation\",\n\t\"23514\": \"check_violation\",\n\t\"23P01\": \"exclusion_violation\",\n\t// Class 24 - Invalid Cursor State\n\t\"24000\": \"invalid_cursor_state\",\n\t// Class 25 - Invalid Transaction State\n\t\"25000\": \"invalid_transaction_state\",\n\t\"25001\": \"active_sql_transaction\",\n\t\"25002\": \"branch_transaction_already_active\",\n\t\"25008\": \"held_cursor_requires_same_isolation_level\",\n\t\"25003\": \"inappropriate_access_mode_for_branch_transaction\",\n\t\"25004\": \"inappropriate_isolation_level_for_branch_transaction\",\n\t\"25005\": \"no_active_sql_transaction_for_branch_transaction\",\n\t\"25006\": \"read_only_sql_transaction\",\n\t\"25007\": \"schema_and_data_statement_mixing_not_supported\",\n\t\"25P01\": \"no_active_sql_transaction\",\n\t\"25P02\": \"in_failed_sql_transaction\",\n\t// Class 26 - Invalid SQL Statement Name\n\t\"26000\": \"invalid_sql_statement_name\",\n\t// Class 27 - Triggered Data Change Violation\n\t\"27000\": \"triggered_data_change_violation\",\n\t// Class 28 - Invalid 
Authorization Specification\n\t\"28000\": \"invalid_authorization_specification\",\n\t\"28P01\": \"invalid_password\",\n\t// Class 2B - Dependent Privilege Descriptors Still Exist\n\t\"2B000\": \"dependent_privilege_descriptors_still_exist\",\n\t\"2BP01\": \"dependent_objects_still_exist\",\n\t// Class 2D - Invalid Transaction Termination\n\t\"2D000\": \"invalid_transaction_termination\",\n\t// Class 2F - SQL Routine Exception\n\t\"2F000\": \"sql_routine_exception\",\n\t\"2F005\": \"function_executed_no_return_statement\",\n\t\"2F002\": \"modifying_sql_data_not_permitted\",\n\t\"2F003\": \"prohibited_sql_statement_attempted\",\n\t\"2F004\": \"reading_sql_data_not_permitted\",\n\t// Class 34 - Invalid Cursor Name\n\t\"34000\": \"invalid_cursor_name\",\n\t// Class 38 - External Routine Exception\n\t\"38000\": \"external_routine_exception\",\n\t\"38001\": \"containing_sql_not_permitted\",\n\t\"38002\": \"modifying_sql_data_not_permitted\",\n\t\"38003\": \"prohibited_sql_statement_attempted\",\n\t\"38004\": \"reading_sql_data_not_permitted\",\n\t// Class 39 - External Routine Invocation Exception\n\t\"39000\": \"external_routine_invocation_exception\",\n\t\"39001\": \"invalid_sqlstate_returned\",\n\t\"39004\": \"null_value_not_allowed\",\n\t\"39P01\": \"trigger_protocol_violated\",\n\t\"39P02\": \"srf_protocol_violated\",\n\t// Class 3B - Savepoint Exception\n\t\"3B000\": \"savepoint_exception\",\n\t\"3B001\": \"invalid_savepoint_specification\",\n\t// Class 3D - Invalid Catalog Name\n\t\"3D000\": \"invalid_catalog_name\",\n\t// Class 3F - Invalid Schema Name\n\t\"3F000\": \"invalid_schema_name\",\n\t// Class 40 - Transaction Rollback\n\t\"40000\": \"transaction_rollback\",\n\t\"40002\": \"transaction_integrity_constraint_violation\",\n\t\"40001\": \"serialization_failure\",\n\t\"40003\": \"statement_completion_unknown\",\n\t\"40P01\": \"deadlock_detected\",\n\t// Class 42 - Syntax Error or Access Rule Violation\n\t\"42000\": 
\"syntax_error_or_access_rule_violation\",\n\t\"42601\": \"syntax_error\",\n\t\"42501\": \"insufficient_privilege\",\n\t\"42846\": \"cannot_coerce\",\n\t\"42803\": \"grouping_error\",\n\t\"42P20\": \"windowing_error\",\n\t\"42P19\": \"invalid_recursion\",\n\t\"42830\": \"invalid_foreign_key\",\n\t\"42602\": \"invalid_name\",\n\t\"42622\": \"name_too_long\",\n\t\"42939\": \"reserved_name\",\n\t\"42804\": \"datatype_mismatch\",\n\t\"42P18\": \"indeterminate_datatype\",\n\t\"42P21\": \"collation_mismatch\",\n\t\"42P22\": \"indeterminate_collation\",\n\t\"42809\": \"wrong_object_type\",\n\t\"42703\": \"undefined_column\",\n\t\"42883\": \"undefined_function\",\n\t\"42P01\": \"undefined_table\",\n\t\"42P02\": \"undefined_parameter\",\n\t\"42704\": \"undefined_object\",\n\t\"42701\": \"duplicate_column\",\n\t\"42P03\": \"duplicate_cursor\",\n\t\"42P04\": \"duplicate_database\",\n\t\"42723\": \"duplicate_function\",\n\t\"42P05\": \"duplicate_prepared_statement\",\n\t\"42P06\": \"duplicate_schema\",\n\t\"42P07\": \"duplicate_table\",\n\t\"42712\": \"duplicate_alias\",\n\t\"42710\": \"duplicate_object\",\n\t\"42702\": \"ambiguous_column\",\n\t\"42725\": \"ambiguous_function\",\n\t\"42P08\": \"ambiguous_parameter\",\n\t\"42P09\": \"ambiguous_alias\",\n\t\"42P10\": \"invalid_column_reference\",\n\t\"42611\": \"invalid_column_definition\",\n\t\"42P11\": \"invalid_cursor_definition\",\n\t\"42P12\": \"invalid_database_definition\",\n\t\"42P13\": \"invalid_function_definition\",\n\t\"42P14\": \"invalid_prepared_statement_definition\",\n\t\"42P15\": \"invalid_schema_definition\",\n\t\"42P16\": \"invalid_table_definition\",\n\t\"42P17\": \"invalid_object_definition\",\n\t// Class 44 - WITH CHECK OPTION Violation\n\t\"44000\": \"with_check_option_violation\",\n\t// Class 53 - Insufficient Resources\n\t\"53000\": \"insufficient_resources\",\n\t\"53100\": \"disk_full\",\n\t\"53200\": \"out_of_memory\",\n\t\"53300\": \"too_many_connections\",\n\t\"53400\": 
\"configuration_limit_exceeded\",\n\t// Class 54 - Program Limit Exceeded\n\t\"54000\": \"program_limit_exceeded\",\n\t\"54001\": \"statement_too_complex\",\n\t\"54011\": \"too_many_columns\",\n\t\"54023\": \"too_many_arguments\",\n\t// Class 55 - Object Not In Prerequisite State\n\t\"55000\": \"object_not_in_prerequisite_state\",\n\t\"55006\": \"object_in_use\",\n\t\"55P02\": \"cant_change_runtime_param\",\n\t\"55P03\": \"lock_not_available\",\n\t// Class 57 - Operator Intervention\n\t\"57000\": \"operator_intervention\",\n\t\"57014\": \"query_canceled\",\n\t\"57P01\": \"admin_shutdown\",\n\t\"57P02\": \"crash_shutdown\",\n\t\"57P03\": \"cannot_connect_now\",\n\t\"57P04\": \"database_dropped\",\n\t// Class 58 - System Error (errors external to PostgreSQL itself)\n\t\"58000\": \"system_error\",\n\t\"58030\": \"io_error\",\n\t\"58P01\": \"undefined_file\",\n\t\"58P02\": \"duplicate_file\",\n\t// Class F0 - Configuration File Error\n\t\"F0000\": \"config_file_error\",\n\t\"F0001\": \"lock_file_exists\",\n\t// Class HV - Foreign Data Wrapper Error (SQL/MED)\n\t\"HV000\": \"fdw_error\",\n\t\"HV005\": \"fdw_column_name_not_found\",\n\t\"HV002\": \"fdw_dynamic_parameter_value_needed\",\n\t\"HV010\": \"fdw_function_sequence_error\",\n\t\"HV021\": \"fdw_inconsistent_descriptor_information\",\n\t\"HV024\": \"fdw_invalid_attribute_value\",\n\t\"HV007\": \"fdw_invalid_column_name\",\n\t\"HV008\": \"fdw_invalid_column_number\",\n\t\"HV004\": \"fdw_invalid_data_type\",\n\t\"HV006\": \"fdw_invalid_data_type_descriptors\",\n\t\"HV091\": \"fdw_invalid_descriptor_field_identifier\",\n\t\"HV00B\": \"fdw_invalid_handle\",\n\t\"HV00C\": \"fdw_invalid_option_index\",\n\t\"HV00D\": \"fdw_invalid_option_name\",\n\t\"HV090\": \"fdw_invalid_string_length_or_buffer_length\",\n\t\"HV00A\": \"fdw_invalid_string_format\",\n\t\"HV009\": \"fdw_invalid_use_of_null_pointer\",\n\t\"HV014\": \"fdw_too_many_handles\",\n\t\"HV001\": \"fdw_out_of_memory\",\n\t\"HV00P\": 
\"fdw_no_schemas\",\n\t\"HV00J\": \"fdw_option_name_not_found\",\n\t\"HV00K\": \"fdw_reply_handle\",\n\t\"HV00Q\": \"fdw_schema_not_found\",\n\t\"HV00R\": \"fdw_table_not_found\",\n\t\"HV00L\": \"fdw_unable_to_create_execution\",\n\t\"HV00M\": \"fdw_unable_to_create_reply\",\n\t\"HV00N\": \"fdw_unable_to_establish_connection\",\n\t// Class P0 - PL/pgSQL Error\n\t\"P0000\": \"plpgsql_error\",\n\t\"P0001\": \"raise_exception\",\n\t\"P0002\": \"no_data_found\",\n\t\"P0003\": \"too_many_rows\",\n\t// Class XX - Internal Error\n\t\"XX000\": \"internal_error\",\n\t\"XX001\": \"data_corrupted\",\n\t\"XX002\": \"index_corrupted\",\n}\n\nfunc parseError(r *readBuf) *Error {\n\terr := new(Error)\n\tfor t := r.byte(); t != 0; t = r.byte() {\n\t\tmsg := r.string()\n\t\tswitch t {\n\t\tcase 'S':\n\t\t\terr.Severity = msg\n\t\tcase 'C':\n\t\t\terr.Code = ErrorCode(msg)\n\t\tcase 'M':\n\t\t\terr.Message = msg\n\t\tcase 'D':\n\t\t\terr.Detail = msg\n\t\tcase 'H':\n\t\t\terr.Hint = msg\n\t\tcase 'P':\n\t\t\terr.Position = msg\n\t\tcase 'p':\n\t\t\terr.InternalPosition = msg\n\t\tcase 'q':\n\t\t\terr.InternalQuery = msg\n\t\tcase 'W':\n\t\t\terr.Where = msg\n\t\tcase 's':\n\t\t\terr.Schema = msg\n\t\tcase 't':\n\t\t\terr.Table = msg\n\t\tcase 'c':\n\t\t\terr.Column = msg\n\t\tcase 'd':\n\t\t\terr.DataTypeName = msg\n\t\tcase 'n':\n\t\t\terr.Constraint = msg\n\t\tcase 'F':\n\t\t\terr.File = msg\n\t\tcase 'L':\n\t\t\terr.Line = msg\n\t\tcase 'R':\n\t\t\terr.Routine = msg\n\t\t}\n\t}\n\treturn err\n}\n\n// Fatal returns true if the Error Severity is fatal.\nfunc (err *Error) Fatal() bool {\n\treturn err.Severity == Efatal\n}\n\n// SQLState returns the SQLState of the error.\nfunc (err *Error) SQLState() string {\n\treturn string(err.Code)\n}\n\n// Get implements the legacy PGError interface. 
New code should use the fields\n// of the Error struct directly.\nfunc (err *Error) Get(k byte) (v string) {\n\tswitch k {\n\tcase 'S':\n\t\treturn err.Severity\n\tcase 'C':\n\t\treturn string(err.Code)\n\tcase 'M':\n\t\treturn err.Message\n\tcase 'D':\n\t\treturn err.Detail\n\tcase 'H':\n\t\treturn err.Hint\n\tcase 'P':\n\t\treturn err.Position\n\tcase 'p':\n\t\treturn err.InternalPosition\n\tcase 'q':\n\t\treturn err.InternalQuery\n\tcase 'W':\n\t\treturn err.Where\n\tcase 's':\n\t\treturn err.Schema\n\tcase 't':\n\t\treturn err.Table\n\tcase 'c':\n\t\treturn err.Column\n\tcase 'd':\n\t\treturn err.DataTypeName\n\tcase 'n':\n\t\treturn err.Constraint\n\tcase 'F':\n\t\treturn err.File\n\tcase 'L':\n\t\treturn err.Line\n\tcase 'R':\n\t\treturn err.Routine\n\t}\n\treturn \"\"\n}\n\nfunc (err *Error) Error() string {\n\treturn \"pq: \" + err.Message\n}\n\n// PGError is an interface used by previous versions of pq. It is provided\n// only to support legacy code. New code should use the Error type.\ntype PGError interface {\n\tError() string\n\tFatal() bool\n\tGet(k byte) (v string)\n}\n\nfunc errorf(s string, args ...interface{}) {\n\tpanic(fmt.Errorf(\"pq: %s\", fmt.Sprintf(s, args...)))\n}\n\n// TODO(ainar-g) Rename to errorf after removing panics.\nfunc fmterrorf(s string, args ...interface{}) error {\n\treturn fmt.Errorf(\"pq: %s\", fmt.Sprintf(s, args...))\n}\n\nfunc errRecoverNoErrBadConn(err *error) {\n\te := recover()\n\tif e == nil {\n\t\t// Do nothing\n\t\treturn\n\t}\n\tvar ok bool\n\t*err, ok = e.(error)\n\tif !ok {\n\t\t*err = fmt.Errorf(\"pq: unexpected error: %#v\", e)\n\t}\n}\n\nfunc (cn *conn) errRecover(err *error) {\n\te := recover()\n\tswitch v := e.(type) {\n\tcase nil:\n\t\t// Do nothing\n\tcase runtime.Error:\n\t\tcn.err.set(driver.ErrBadConn)\n\t\tpanic(v)\n\tcase *Error:\n\t\tif v.Fatal() {\n\t\t\t*err = driver.ErrBadConn\n\t\t} else {\n\t\t\t*err = v\n\t\t}\n\tcase *net.OpError:\n\t\tcn.err.set(driver.ErrBadConn)\n\t\t*err = v\n\tcase 
*safeRetryError:\n\t\tcn.err.set(driver.ErrBadConn)\n\t\t*err = driver.ErrBadConn\n\tcase error:\n\t\tif v == io.EOF || v.Error() == \"remote error: handshake failure\" {\n\t\t\t*err = driver.ErrBadConn\n\t\t} else {\n\t\t\t*err = v\n\t\t}\n\n\tdefault:\n\t\tcn.err.set(driver.ErrBadConn)\n\t\tpanic(fmt.Sprintf(\"unknown error: %#v\", e))\n\t}\n\n\t// Any time we return ErrBadConn, we need to remember it since *Tx doesn't\n\t// mark the connection bad in database/sql.\n\tif *err == driver.ErrBadConn {\n\t\tcn.err.set(driver.ErrBadConn)\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/krb.go",
    "content": "package pq\n\n// NewGSSFunc creates a GSS authentication provider, for use with\n// RegisterGSSProvider.\ntype NewGSSFunc func() (GSS, error)\n\nvar newGss NewGSSFunc\n\n// RegisterGSSProvider registers a GSS authentication provider. For example, if\n// you need to use Kerberos to authenticate with your server, add this to your\n// main package:\n//\n//\timport \"github.com/lib/pq/auth/kerberos\"\n//\n//\tfunc init() {\n//\t\tpq.RegisterGSSProvider(func() (pq.GSS, error) { return kerberos.NewGSS() })\n//\t}\nfunc RegisterGSSProvider(newGssArg NewGSSFunc) {\n\tnewGss = newGssArg\n}\n\n// GSS provides GSSAPI authentication (e.g., Kerberos).\ntype GSS interface {\n\tGetInitToken(host string, service string) ([]byte, error)\n\tGetInitTokenFromSpn(spn string) ([]byte, error)\n\tContinue(inToken []byte) (done bool, outToken []byte, err error)\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/notice.go",
    "content": "//go:build go1.10\n// +build go1.10\n\npackage pq\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n)\n\n// NoticeHandler returns the notice handler on the given connection, if any. A\n// runtime panic occurs if c is not a pq connection. This is rarely used\n// directly, use ConnectorNoticeHandler and ConnectorWithNoticeHandler instead.\nfunc NoticeHandler(c driver.Conn) func(*Error) {\n\treturn c.(*conn).noticeHandler\n}\n\n// SetNoticeHandler sets the given notice handler on the given connection. A\n// runtime panic occurs if c is not a pq connection. A nil handler may be used\n// to unset it. This is rarely used directly, use ConnectorNoticeHandler and\n// ConnectorWithNoticeHandler instead.\n//\n// Note: Notice handlers are executed synchronously by pq meaning commands\n// won't continue to be processed until the handler returns.\nfunc SetNoticeHandler(c driver.Conn, handler func(*Error)) {\n\tc.(*conn).noticeHandler = handler\n}\n\n// NoticeHandlerConnector wraps a regular connector and sets a notice handler\n// on it.\ntype NoticeHandlerConnector struct {\n\tdriver.Connector\n\tnoticeHandler func(*Error)\n}\n\n// Connect calls the underlying connector's connect method and then sets the\n// notice handler.\nfunc (n *NoticeHandlerConnector) Connect(ctx context.Context) (driver.Conn, error) {\n\tc, err := n.Connector.Connect(ctx)\n\tif err == nil {\n\t\tSetNoticeHandler(c, n.noticeHandler)\n\t}\n\treturn c, err\n}\n\n// ConnectorNoticeHandler returns the currently set notice handler, if any. If\n// the given connector is not a result of ConnectorWithNoticeHandler, nil is\n// returned.\nfunc ConnectorNoticeHandler(c driver.Connector) func(*Error) {\n\tif c, ok := c.(*NoticeHandlerConnector); ok {\n\t\treturn c.noticeHandler\n\t}\n\treturn nil\n}\n\n// ConnectorWithNoticeHandler creates or sets the given handler for the given\n// connector. 
If the given connector is a result of calling this function\n// previously, it is simply set on the given connector and returned. Otherwise,\n// this returns a new connector wrapping the given one and setting the notice\n// handler. A nil notice handler may be used to unset it.\n//\n// The returned connector is intended to be used with database/sql.OpenDB.\n//\n// Note: Notice handlers are executed synchronously by pq meaning commands\n// won't continue to be processed until the handler returns.\nfunc ConnectorWithNoticeHandler(c driver.Connector, handler func(*Error)) *NoticeHandlerConnector {\n\tif c, ok := c.(*NoticeHandlerConnector); ok {\n\t\tc.noticeHandler = handler\n\t\treturn c\n\t}\n\treturn &NoticeHandlerConnector{Connector: c, noticeHandler: handler}\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/notify.go",
    "content": "package pq\n\n// Package pq is a pure Go Postgres driver for the database/sql package.\n// This module contains support for Postgres LISTEN/NOTIFY.\n\nimport (\n\t\"context\"\n\t\"database/sql/driver\"\n\t\"errors\"\n\t\"fmt\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"time\"\n)\n\n// Notification represents a single notification from the database.\ntype Notification struct {\n\t// Process ID (PID) of the notifying postgres backend.\n\tBePid int\n\t// Name of the channel the notification was sent on.\n\tChannel string\n\t// Payload, or the empty string if unspecified.\n\tExtra string\n}\n\nfunc recvNotification(r *readBuf) *Notification {\n\tbePid := r.int32()\n\tchannel := r.string()\n\textra := r.string()\n\n\treturn &Notification{bePid, channel, extra}\n}\n\n// SetNotificationHandler sets the given notification handler on the given\n// connection. A runtime panic occurs if c is not a pq connection. A nil handler\n// may be used to unset it.\n//\n// Note: Notification handlers are executed synchronously by pq meaning commands\n// won't continue to be processed until the handler returns.\nfunc SetNotificationHandler(c driver.Conn, handler func(*Notification)) {\n\tc.(*conn).notificationHandler = handler\n}\n\n// NotificationHandlerConnector wraps a regular connector and sets a notification handler\n// on it.\ntype NotificationHandlerConnector struct {\n\tdriver.Connector\n\tnotificationHandler func(*Notification)\n}\n\n// Connect calls the underlying connector's connect method and then sets the\n// notification handler.\nfunc (n *NotificationHandlerConnector) Connect(ctx context.Context) (driver.Conn, error) {\n\tc, err := n.Connector.Connect(ctx)\n\tif err == nil {\n\t\tSetNotificationHandler(c, n.notificationHandler)\n\t}\n\treturn c, err\n}\n\n// ConnectorNotificationHandler returns the currently set notification handler, if any. 
If\n// the given connector is not a result of ConnectorWithNotificationHandler, nil is\n// returned.\nfunc ConnectorNotificationHandler(c driver.Connector) func(*Notification) {\n\tif c, ok := c.(*NotificationHandlerConnector); ok {\n\t\treturn c.notificationHandler\n\t}\n\treturn nil\n}\n\n// ConnectorWithNotificationHandler creates or sets the given handler for the given\n// connector. If the given connector is a result of calling this function\n// previously, it is simply set on the given connector and returned. Otherwise,\n// this returns a new connector wrapping the given one and setting the notification\n// handler. A nil notification handler may be used to unset it.\n//\n// The returned connector is intended to be used with database/sql.OpenDB.\n//\n// Note: Notification handlers are executed synchronously by pq meaning commands\n// won't continue to be processed until the handler returns.\nfunc ConnectorWithNotificationHandler(c driver.Connector, handler func(*Notification)) *NotificationHandlerConnector {\n\tif c, ok := c.(*NotificationHandlerConnector); ok {\n\t\tc.notificationHandler = handler\n\t\treturn c\n\t}\n\treturn &NotificationHandlerConnector{Connector: c, notificationHandler: handler}\n}\n\nconst (\n\tconnStateIdle int32 = iota\n\tconnStateExpectResponse\n\tconnStateExpectReadyForQuery\n)\n\ntype message struct {\n\ttyp byte\n\terr error\n}\n\nvar errListenerConnClosed = errors.New(\"pq: ListenerConn has been closed\")\n\n// ListenerConn is a low-level interface for waiting for notifications.  You\n// should use Listener instead.\ntype ListenerConn struct {\n\t// guards cn and err\n\tconnectionLock sync.Mutex\n\tcn             *conn\n\terr            error\n\n\tconnState int32\n\n\t// the sending goroutine will be holding this lock\n\tsenderLock sync.Mutex\n\n\tnotificationChan chan<- *Notification\n\n\treplyChan chan message\n}\n\n// NewListenerConn creates a new ListenerConn. 
Use NewListener instead.\nfunc NewListenerConn(name string, notificationChan chan<- *Notification) (*ListenerConn, error) {\n\treturn newDialListenerConn(defaultDialer{}, name, notificationChan)\n}\n\nfunc newDialListenerConn(d Dialer, name string, c chan<- *Notification) (*ListenerConn, error) {\n\tcn, err := DialOpen(d, name)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tl := &ListenerConn{\n\t\tcn:               cn.(*conn),\n\t\tnotificationChan: c,\n\t\tconnState:        connStateIdle,\n\t\treplyChan:        make(chan message, 2),\n\t}\n\n\tgo l.listenerConnMain()\n\n\treturn l, nil\n}\n\n// We can only allow one goroutine at a time to be running a query on the\n// connection for various reasons, so the goroutine sending on the connection\n// must be holding senderLock.\n//\n// Returns an error if an unrecoverable error has occurred and the ListenerConn\n// should be abandoned.\nfunc (l *ListenerConn) acquireSenderLock() error {\n\t// we must acquire senderLock first to avoid deadlocks; see ExecSimpleQuery\n\tl.senderLock.Lock()\n\n\tl.connectionLock.Lock()\n\terr := l.err\n\tl.connectionLock.Unlock()\n\tif err != nil {\n\t\tl.senderLock.Unlock()\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc (l *ListenerConn) releaseSenderLock() {\n\tl.senderLock.Unlock()\n}\n\n// setState advances the protocol state to newState.  
Returns false if moving\n// to that state from the current state is not allowed.\nfunc (l *ListenerConn) setState(newState int32) bool {\n\tvar expectedState int32\n\n\tswitch newState {\n\tcase connStateIdle:\n\t\texpectedState = connStateExpectReadyForQuery\n\tcase connStateExpectResponse:\n\t\texpectedState = connStateIdle\n\tcase connStateExpectReadyForQuery:\n\t\texpectedState = connStateExpectResponse\n\tdefault:\n\t\tpanic(fmt.Sprintf(\"unexpected listenerConnState %d\", newState))\n\t}\n\n\treturn atomic.CompareAndSwapInt32(&l.connState, expectedState, newState)\n}\n\n// Main logic is here: receive messages from the postgres backend, forward\n// notifications and query replies and keep the internal state in sync with the\n// protocol state.  Returns when the connection has been lost, is about to go\n// away or should be discarded because we couldn't agree on the state with the\n// server backend.\nfunc (l *ListenerConn) listenerConnLoop() (err error) {\n\tdefer errRecoverNoErrBadConn(&err)\n\n\tr := &readBuf{}\n\tfor {\n\t\tt, err := l.cn.recvMessage(r)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tswitch t {\n\t\tcase 'A':\n\t\t\t// recvNotification copies all the data so we don't need to worry\n\t\t\t// about the scratch buffer being overwritten.\n\t\t\tl.notificationChan <- recvNotification(r)\n\n\t\tcase 'T', 'D':\n\t\t\t// only used by tests; ignore\n\n\t\tcase 'E':\n\t\t\t// We might receive an ErrorResponse even when not in a query; it\n\t\t\t// is expected that the server will close the connection after\n\t\t\t// that, but we should make sure that the error we display is the\n\t\t\t// one from the stray ErrorResponse, not io.ErrUnexpectedEOF.\n\t\t\tif !l.setState(connStateExpectReadyForQuery) {\n\t\t\t\treturn parseError(r)\n\t\t\t}\n\t\t\tl.replyChan <- message{t, parseError(r)}\n\n\t\tcase 'C', 'I':\n\t\t\tif !l.setState(connStateExpectReadyForQuery) {\n\t\t\t\t// protocol out of sync\n\t\t\t\treturn fmt.Errorf(\"unexpected 
CommandComplete\")\n\t\t\t}\n\t\t\t// ExecSimpleQuery doesn't need to know about this message\n\n\t\tcase 'Z':\n\t\t\tif !l.setState(connStateIdle) {\n\t\t\t\t// protocol out of sync\n\t\t\t\treturn fmt.Errorf(\"unexpected ReadyForQuery\")\n\t\t\t}\n\t\t\tl.replyChan <- message{t, nil}\n\n\t\tcase 'S':\n\t\t\t// ignore\n\t\tcase 'N':\n\t\t\tif n := l.cn.noticeHandler; n != nil {\n\t\t\t\tn(parseError(r))\n\t\t\t}\n\t\tdefault:\n\t\t\treturn fmt.Errorf(\"unexpected message %q from server in listenerConnLoop\", t)\n\t\t}\n\t}\n}\n\n// This is the main routine for the goroutine receiving on the database\n// connection.  Most of the main logic is in listenerConnLoop.\nfunc (l *ListenerConn) listenerConnMain() {\n\terr := l.listenerConnLoop()\n\n\t// listenerConnLoop terminated; we're done, but we still have to clean up.\n\t// Make sure nobody tries to start any new queries by making sure the err\n\t// pointer is set.  It is important that we do not overwrite its value; a\n\t// connection could be closed by either this goroutine or one sending on\n\t// the connection -- whoever closes the connection is assumed to have the\n\t// more meaningful error message (as the other one will probably get\n\t// net.errClosed), so that goroutine sets the error we expose while the\n\t// other error is discarded.  If the connection is lost while two\n\t// goroutines are operating on the socket, it probably doesn't matter which\n\t// error we expose so we don't try to do anything more complex.\n\tl.connectionLock.Lock()\n\tif l.err == nil {\n\t\tl.err = err\n\t}\n\tl.cn.Close()\n\tl.connectionLock.Unlock()\n\n\t// There might be a query in-flight; make sure nobody's waiting for a\n\t// response to it, since there's not going to be one.\n\tclose(l.replyChan)\n\n\t// let the listener know we're done\n\tclose(l.notificationChan)\n\n\t// this ListenerConn is done\n}\n\n// Listen sends a LISTEN query to the server. 
See ExecSimpleQuery.\nfunc (l *ListenerConn) Listen(channel string) (bool, error) {\n\treturn l.ExecSimpleQuery(\"LISTEN \" + QuoteIdentifier(channel))\n}\n\n// Unlisten sends an UNLISTEN query to the server. See ExecSimpleQuery.\nfunc (l *ListenerConn) Unlisten(channel string) (bool, error) {\n\treturn l.ExecSimpleQuery(\"UNLISTEN \" + QuoteIdentifier(channel))\n}\n\n// UnlistenAll sends an `UNLISTEN *` query to the server. See ExecSimpleQuery.\nfunc (l *ListenerConn) UnlistenAll() (bool, error) {\n\treturn l.ExecSimpleQuery(\"UNLISTEN *\")\n}\n\n// Ping the remote server to make sure it's alive.  Non-nil error means the\n// connection has failed and should be abandoned.\nfunc (l *ListenerConn) Ping() error {\n\tsent, err := l.ExecSimpleQuery(\"\")\n\tif !sent {\n\t\treturn err\n\t}\n\tif err != nil {\n\t\t// shouldn't happen\n\t\tpanic(err)\n\t}\n\treturn nil\n}\n\n// Attempt to send a query on the connection.  Returns an error if sending the\n// query failed, and the caller should initiate closure of this connection.\n// The caller must be holding senderLock (see acquireSenderLock and\n// releaseSenderLock).\nfunc (l *ListenerConn) sendSimpleQuery(q string) (err error) {\n\tdefer errRecoverNoErrBadConn(&err)\n\n\t// must set connection state before sending the query\n\tif !l.setState(connStateExpectResponse) {\n\t\tpanic(\"two queries running at the same time\")\n\t}\n\n\t// Can't use l.cn.writeBuf here because it uses the scratch buffer which\n\t// might get overwritten by listenerConnLoop.\n\tb := &writeBuf{\n\t\tbuf: []byte(\"Q\\x00\\x00\\x00\\x00\"),\n\t\tpos: 1,\n\t}\n\tb.string(q)\n\tl.cn.send(b)\n\n\treturn nil\n}\n\n// ExecSimpleQuery executes a \"simple query\" (i.e. one with no bindable\n// parameters) on the connection. The possible return values are:\n//   1) \"executed\" is true; the query was executed to completion on the\n//      database server.  
If the query failed, err will be set to the error\n//      returned by the database, otherwise err will be nil.\n//   2) If \"executed\" is false, the query could not be executed on the remote\n//      server.  err will be non-nil.\n//\n// After a call to ExecSimpleQuery has returned an executed=false value, the\n// connection has either been closed or will be closed shortly thereafter, and\n// all subsequently executed queries will return an error.\nfunc (l *ListenerConn) ExecSimpleQuery(q string) (executed bool, err error) {\n\tif err = l.acquireSenderLock(); err != nil {\n\t\treturn false, err\n\t}\n\tdefer l.releaseSenderLock()\n\n\terr = l.sendSimpleQuery(q)\n\tif err != nil {\n\t\t// We can't know what state the protocol is in, so we need to abandon\n\t\t// this connection.\n\t\tl.connectionLock.Lock()\n\t\t// Set the error pointer if it hasn't been set already; see\n\t\t// listenerConnMain.\n\t\tif l.err == nil {\n\t\t\tl.err = err\n\t\t}\n\t\tl.connectionLock.Unlock()\n\t\tl.cn.c.Close()\n\t\treturn false, err\n\t}\n\n\t// now we just wait for a reply..\n\tfor {\n\t\tm, ok := <-l.replyChan\n\t\tif !ok {\n\t\t\t// We lost the connection to server, don't bother waiting for a\n\t\t\t// a response.  
err should have been set already.\n\t\t\tl.connectionLock.Lock()\n\t\t\terr := l.err\n\t\t\tl.connectionLock.Unlock()\n\t\t\treturn false, err\n\t\t}\n\t\tswitch m.typ {\n\t\tcase 'Z':\n\t\t\t// sanity check\n\t\t\tif m.err != nil {\n\t\t\t\tpanic(\"m.err != nil\")\n\t\t\t}\n\t\t\t// done; err might or might not be set\n\t\t\treturn true, err\n\n\t\tcase 'E':\n\t\t\t// sanity check\n\t\t\tif m.err == nil {\n\t\t\t\tpanic(\"m.err == nil\")\n\t\t\t}\n\t\t\t// server responded with an error; ReadyForQuery to follow\n\t\t\terr = m.err\n\n\t\tdefault:\n\t\t\treturn false, fmt.Errorf(\"unknown response for simple query: %q\", m.typ)\n\t\t}\n\t}\n}\n\n// Close closes the connection.\nfunc (l *ListenerConn) Close() error {\n\tl.connectionLock.Lock()\n\tif l.err != nil {\n\t\tl.connectionLock.Unlock()\n\t\treturn errListenerConnClosed\n\t}\n\tl.err = errListenerConnClosed\n\tl.connectionLock.Unlock()\n\t// We can't send anything on the connection without holding senderLock.\n\t// Simply close the net.Conn to wake up everyone operating on it.\n\treturn l.cn.c.Close()\n}\n\n// Err returns the reason the connection was closed. It is not safe to call\n// this function until l.Notify has been closed.\nfunc (l *ListenerConn) Err() error {\n\treturn l.err\n}\n\nvar errListenerClosed = errors.New(\"pq: Listener has been closed\")\n\n// ErrChannelAlreadyOpen is returned from Listen when a channel is already\n// open.\nvar ErrChannelAlreadyOpen = errors.New(\"pq: channel is already open\")\n\n// ErrChannelNotOpen is returned from Unlisten when a channel is not open.\nvar ErrChannelNotOpen = errors.New(\"pq: channel is not open\")\n\n// ListenerEventType is an enumeration of listener event types.\ntype ListenerEventType int\n\nconst (\n\t// ListenerEventConnected is emitted only when the database connection\n\t// has been initially initialized. 
The err argument of the callback\n\t// will always be nil.\n\tListenerEventConnected ListenerEventType = iota\n\n\t// ListenerEventDisconnected is emitted after a database connection has\n\t// been lost, either because of an error or because Close has been\n\t// called. The err argument will be set to the reason the database\n\t// connection was lost.\n\tListenerEventDisconnected\n\n\t// ListenerEventReconnected is emitted after a database connection has\n\t// been re-established after connection loss. The err argument of the\n\t// callback will always be nil. After this event has been emitted, a\n\t// nil pq.Notification is sent on the Listener.Notify channel.\n\tListenerEventReconnected\n\n\t// ListenerEventConnectionAttemptFailed is emitted after a connection\n\t// to the database was attempted, but failed. The err argument will be\n\t// set to an error describing why the connection attempt did not\n\t// succeed.\n\tListenerEventConnectionAttemptFailed\n)\n\n// EventCallbackType is the event callback type. See also ListenerEventType\n// constants' documentation.\ntype EventCallbackType func(event ListenerEventType, err error)\n\n// Listener provides an interface for listening to notifications from a\n// PostgreSQL database.  For general usage information, see section\n// \"Notifications\".\n//\n// Listener can safely be used from concurrently running goroutines.\ntype Listener struct {\n\t// Channel for receiving notifications from the database.  In some cases a\n\t// nil value will be sent.  
See section \"Notifications\" above.\n\tNotify chan *Notification\n\n\tname                 string\n\tminReconnectInterval time.Duration\n\tmaxReconnectInterval time.Duration\n\tdialer               Dialer\n\teventCallback        EventCallbackType\n\n\tlock                 sync.Mutex\n\tisClosed             bool\n\treconnectCond        *sync.Cond\n\tcn                   *ListenerConn\n\tconnNotificationChan <-chan *Notification\n\tchannels             map[string]struct{}\n}\n\n// NewListener creates a new database connection dedicated to LISTEN / NOTIFY.\n//\n// name should be set to a connection string to be used to establish the\n// database connection (see section \"Connection String Parameters\" above).\n//\n// minReconnectInterval controls the duration to wait before trying to\n// re-establish the database connection after connection loss.  After each\n// consecutive failure this interval is doubled, until maxReconnectInterval is\n// reached.  Successfully completing the connection establishment procedure\n// resets the interval back to minReconnectInterval.\n//\n// The last parameter eventCallback can be set to a function which will be\n// called by the Listener when the state of the underlying database connection\n// changes.  
This callback will be called by the goroutine which dispatches the\n// notifications over the Notify channel, so you should try to avoid doing\n// potentially time-consuming operations from the callback.\nfunc NewListener(name string,\n\tminReconnectInterval time.Duration,\n\tmaxReconnectInterval time.Duration,\n\teventCallback EventCallbackType) *Listener {\n\treturn NewDialListener(defaultDialer{}, name, minReconnectInterval, maxReconnectInterval, eventCallback)\n}\n\n// NewDialListener is like NewListener but it takes a Dialer.\nfunc NewDialListener(d Dialer,\n\tname string,\n\tminReconnectInterval time.Duration,\n\tmaxReconnectInterval time.Duration,\n\teventCallback EventCallbackType) *Listener {\n\n\tl := &Listener{\n\t\tname:                 name,\n\t\tminReconnectInterval: minReconnectInterval,\n\t\tmaxReconnectInterval: maxReconnectInterval,\n\t\tdialer:               d,\n\t\teventCallback:        eventCallback,\n\n\t\tchannels: make(map[string]struct{}),\n\n\t\tNotify: make(chan *Notification, 32),\n\t}\n\tl.reconnectCond = sync.NewCond(&l.lock)\n\n\tgo l.listenerMain()\n\n\treturn l\n}\n\n// NotificationChannel returns the notification channel for this listener.\n// This is the same channel as Notify, and will not be recreated during the\n// lifetime of the Listener.\nfunc (l *Listener) NotificationChannel() <-chan *Notification {\n\treturn l.Notify\n}\n\n// Listen starts listening for notifications on a channel.  Calls to this\n// function will block until an acknowledgement has been received from the\n// server.  Note that Listener automatically re-establishes the connection\n// after connection loss, so this function may block indefinitely if the\n// connection cannot be re-established.\n//\n// Listen will only fail in three conditions:\n//   1) The channel is already open.  
The returned error will be\n//      ErrChannelAlreadyOpen.\n//   2) The query was executed on the remote server, but PostgreSQL returned an\n//      error message in response to the query.  The returned error will be a\n//      pq.Error containing the information the server supplied.\n//   3) Close is called on the Listener before the request could be completed.\n//\n// The channel name is case-sensitive.\nfunc (l *Listener) Listen(channel string) error {\n\tl.lock.Lock()\n\tdefer l.lock.Unlock()\n\n\tif l.isClosed {\n\t\treturn errListenerClosed\n\t}\n\n\t// The server allows you to issue a LISTEN on a channel which is already\n\t// open, but it seems useful to be able to detect this case to spot\n\t// mistakes in application logic.  If the application genuinely doesn't\n\t// care, it can check the exported error and ignore it.\n\t_, exists := l.channels[channel]\n\tif exists {\n\t\treturn ErrChannelAlreadyOpen\n\t}\n\n\tif l.cn != nil {\n\t\t// If gotResponse is true but error is set, the query was executed on\n\t\t// the remote server, but resulted in an error.  This should be\n\t\t// relatively rare, so it's fine if we just pass the error to our\n\t\t// caller.  However, if gotResponse is false, we could not complete the\n\t\t// query on the remote server and our underlying connection is about\n\t\t// to go away, so we only add relname to l.channels, and wait for\n\t\t// resync() to take care of the rest.\n\t\tgotResponse, err := l.cn.Listen(channel)\n\t\tif gotResponse && err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tl.channels[channel] = struct{}{}\n\tfor l.cn == nil {\n\t\tl.reconnectCond.Wait()\n\t\t// we let go of the mutex for a while\n\t\tif l.isClosed {\n\t\t\treturn errListenerClosed\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Unlisten removes a channel from the Listener's channel list.  Returns\n// ErrChannelNotOpen if the Listener is not listening on the specified channel.\n// Returns immediately with no error if there is no connection.  
Note that you\n// might still get notifications for this channel even after Unlisten has\n// returned.\n//\n// The channel name is case-sensitive.\nfunc (l *Listener) Unlisten(channel string) error {\n\tl.lock.Lock()\n\tdefer l.lock.Unlock()\n\n\tif l.isClosed {\n\t\treturn errListenerClosed\n\t}\n\n\t// Similarly to LISTEN, this is not an error in Postgres, but it seems\n\t// useful to distinguish from the normal conditions.\n\t_, exists := l.channels[channel]\n\tif !exists {\n\t\treturn ErrChannelNotOpen\n\t}\n\n\tif l.cn != nil {\n\t\t// Similarly to Listen (see comment in that function), the caller\n\t\t// should only be bothered with an error if it came from the backend as\n\t\t// a response to our query.\n\t\tgotResponse, err := l.cn.Unlisten(channel)\n\t\tif gotResponse && err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Don't bother waiting for resync if there's no connection.\n\tdelete(l.channels, channel)\n\treturn nil\n}\n\n// UnlistenAll removes all channels from the Listener's channel list.  Returns\n// immediately with no error if there is no connection.  Note that you might\n// still get notifications for any of the deleted channels even after\n// UnlistenAll has returned.\nfunc (l *Listener) UnlistenAll() error {\n\tl.lock.Lock()\n\tdefer l.lock.Unlock()\n\n\tif l.isClosed {\n\t\treturn errListenerClosed\n\t}\n\n\tif l.cn != nil {\n\t\t// Similarly to Listen (see comment in that function), the caller\n\t\t// should only be bothered with an error if it came from the backend as\n\t\t// a response to our query.\n\t\tgotResponse, err := l.cn.UnlistenAll()\n\t\tif gotResponse && err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Don't bother waiting for resync if there's no connection.\n\tl.channels = make(map[string]struct{})\n\treturn nil\n}\n\n// Ping the remote server to make sure it's alive.  
Non-nil return value means\n// that there is no active connection.\nfunc (l *Listener) Ping() error {\n\tl.lock.Lock()\n\tdefer l.lock.Unlock()\n\n\tif l.isClosed {\n\t\treturn errListenerClosed\n\t}\n\tif l.cn == nil {\n\t\treturn errors.New(\"no connection\")\n\t}\n\n\treturn l.cn.Ping()\n}\n\n// Clean up after losing the server connection.  Returns l.cn.Err(), which\n// should have the reason the connection was lost.\nfunc (l *Listener) disconnectCleanup() error {\n\tl.lock.Lock()\n\tdefer l.lock.Unlock()\n\n\t// sanity check; can't look at Err() until the channel has been closed\n\tselect {\n\tcase _, ok := <-l.connNotificationChan:\n\t\tif ok {\n\t\t\tpanic(\"connNotificationChan not closed\")\n\t\t}\n\tdefault:\n\t\tpanic(\"connNotificationChan not closed\")\n\t}\n\n\terr := l.cn.Err()\n\tl.cn.Close()\n\tl.cn = nil\n\treturn err\n}\n\n// Synchronize the list of channels we want to be listening on with the server\n// after the connection has been established.\nfunc (l *Listener) resync(cn *ListenerConn, notificationChan <-chan *Notification) error {\n\tdoneChan := make(chan error)\n\tgo func(notificationChan <-chan *Notification) {\n\t\tfor channel := range l.channels {\n\t\t\t// If we got a response, return that error to our caller as it's\n\t\t\t// going to be more descriptive than cn.Err().\n\t\t\tgotResponse, err := cn.Listen(channel)\n\t\t\tif gotResponse && err != nil {\n\t\t\t\tdoneChan <- err\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\t// If we couldn't reach the server, wait for notificationChan to\n\t\t\t// close and then return the error message from the connection, as\n\t\t\t// per ListenerConn's interface.\n\t\t\tif err != nil {\n\t\t\t\tfor range notificationChan {\n\t\t\t\t}\n\t\t\t\tdoneChan <- cn.Err()\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\tdoneChan <- nil\n\t}(notificationChan)\n\n\t// Ignore notifications while synchronization is going on to avoid\n\t// deadlocks.  
We have to send a nil notification over Notify anyway as\n\t// we can't possibly know which notifications (if any) were lost while\n\t// the connection was down, so there's no reason to try and process\n\t// these messages at all.\n\tfor {\n\t\tselect {\n\t\tcase _, ok := <-notificationChan:\n\t\t\tif !ok {\n\t\t\t\tnotificationChan = nil\n\t\t\t}\n\n\t\tcase err := <-doneChan:\n\t\t\treturn err\n\t\t}\n\t}\n}\n\n// caller should NOT be holding l.lock\nfunc (l *Listener) closed() bool {\n\tl.lock.Lock()\n\tdefer l.lock.Unlock()\n\n\treturn l.isClosed\n}\n\nfunc (l *Listener) connect() error {\n\tnotificationChan := make(chan *Notification, 32)\n\tcn, err := newDialListenerConn(l.dialer, l.name, notificationChan)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tl.lock.Lock()\n\tdefer l.lock.Unlock()\n\n\terr = l.resync(cn, notificationChan)\n\tif err != nil {\n\t\tcn.Close()\n\t\treturn err\n\t}\n\n\tl.cn = cn\n\tl.connNotificationChan = notificationChan\n\tl.reconnectCond.Broadcast()\n\n\treturn nil\n}\n\n// Close disconnects the Listener from the database and shuts it down.\n// Subsequent calls to its methods will return an error.  
Close returns an\n// error if the connection has already been closed.\nfunc (l *Listener) Close() error {\n\tl.lock.Lock()\n\tdefer l.lock.Unlock()\n\n\tif l.isClosed {\n\t\treturn errListenerClosed\n\t}\n\n\tif l.cn != nil {\n\t\tl.cn.Close()\n\t}\n\tl.isClosed = true\n\n\t// Unblock calls to Listen()\n\tl.reconnectCond.Broadcast()\n\n\treturn nil\n}\n\nfunc (l *Listener) emitEvent(event ListenerEventType, err error) {\n\tif l.eventCallback != nil {\n\t\tl.eventCallback(event, err)\n\t}\n}\n\n// Main logic here: maintain a connection to the server when possible, wait\n// for notifications and emit events.\nfunc (l *Listener) listenerConnLoop() {\n\tvar nextReconnect time.Time\n\n\treconnectInterval := l.minReconnectInterval\n\tfor {\n\t\tfor {\n\t\t\terr := l.connect()\n\t\t\tif err == nil {\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\tif l.closed() {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tl.emitEvent(ListenerEventConnectionAttemptFailed, err)\n\n\t\t\ttime.Sleep(reconnectInterval)\n\t\t\treconnectInterval *= 2\n\t\t\tif reconnectInterval > l.maxReconnectInterval {\n\t\t\t\treconnectInterval = l.maxReconnectInterval\n\t\t\t}\n\t\t}\n\n\t\tif nextReconnect.IsZero() {\n\t\t\tl.emitEvent(ListenerEventConnected, nil)\n\t\t} else {\n\t\t\tl.emitEvent(ListenerEventReconnected, nil)\n\t\t\tl.Notify <- nil\n\t\t}\n\n\t\treconnectInterval = l.minReconnectInterval\n\t\tnextReconnect = time.Now().Add(reconnectInterval)\n\n\t\tfor {\n\t\t\tnotification, ok := <-l.connNotificationChan\n\t\t\tif !ok {\n\t\t\t\t// lost connection, loop again\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tl.Notify <- notification\n\t\t}\n\n\t\terr := l.disconnectCleanup()\n\t\tif l.closed() {\n\t\t\treturn\n\t\t}\n\t\tl.emitEvent(ListenerEventDisconnected, err)\n\n\t\ttime.Sleep(time.Until(nextReconnect))\n\t}\n}\n\nfunc (l *Listener) listenerMain() {\n\tl.listenerConnLoop()\n\tclose(l.Notify)\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/oid/doc.go",
    "content": "// Package oid contains OID constants\n// as defined by the Postgres server.\npackage oid\n\n// Oid is a Postgres Object ID.\ntype Oid uint32\n"
  },
  {
    "path": "vendor/github.com/lib/pq/oid/types.go",
    "content": "// Code generated by gen.go. DO NOT EDIT.\n\npackage oid\n\nconst (\n\tT_bool             Oid = 16\n\tT_bytea            Oid = 17\n\tT_char             Oid = 18\n\tT_name             Oid = 19\n\tT_int8             Oid = 20\n\tT_int2             Oid = 21\n\tT_int2vector       Oid = 22\n\tT_int4             Oid = 23\n\tT_regproc          Oid = 24\n\tT_text             Oid = 25\n\tT_oid              Oid = 26\n\tT_tid              Oid = 27\n\tT_xid              Oid = 28\n\tT_cid              Oid = 29\n\tT_oidvector        Oid = 30\n\tT_pg_ddl_command   Oid = 32\n\tT_pg_type          Oid = 71\n\tT_pg_attribute     Oid = 75\n\tT_pg_proc          Oid = 81\n\tT_pg_class         Oid = 83\n\tT_json             Oid = 114\n\tT_xml              Oid = 142\n\tT__xml             Oid = 143\n\tT_pg_node_tree     Oid = 194\n\tT__json            Oid = 199\n\tT_smgr             Oid = 210\n\tT_index_am_handler Oid = 325\n\tT_point            Oid = 600\n\tT_lseg             Oid = 601\n\tT_path             Oid = 602\n\tT_box              Oid = 603\n\tT_polygon          Oid = 604\n\tT_line             Oid = 628\n\tT__line            Oid = 629\n\tT_cidr             Oid = 650\n\tT__cidr            Oid = 651\n\tT_float4           Oid = 700\n\tT_float8           Oid = 701\n\tT_abstime          Oid = 702\n\tT_reltime          Oid = 703\n\tT_tinterval        Oid = 704\n\tT_unknown          Oid = 705\n\tT_circle           Oid = 718\n\tT__circle          Oid = 719\n\tT_money            Oid = 790\n\tT__money           Oid = 791\n\tT_macaddr          Oid = 829\n\tT_inet             Oid = 869\n\tT__bool            Oid = 1000\n\tT__bytea           Oid = 1001\n\tT__char            Oid = 1002\n\tT__name            Oid = 1003\n\tT__int2            Oid = 1005\n\tT__int2vector      Oid = 1006\n\tT__int4            Oid = 1007\n\tT__regproc         Oid = 1008\n\tT__text            Oid = 1009\n\tT__tid             Oid = 1010\n\tT__xid             Oid = 1011\n\tT__cid             Oid = 
1012\n\tT__oidvector       Oid = 1013\n\tT__bpchar          Oid = 1014\n\tT__varchar         Oid = 1015\n\tT__int8            Oid = 1016\n\tT__point           Oid = 1017\n\tT__lseg            Oid = 1018\n\tT__path            Oid = 1019\n\tT__box             Oid = 1020\n\tT__float4          Oid = 1021\n\tT__float8          Oid = 1022\n\tT__abstime         Oid = 1023\n\tT__reltime         Oid = 1024\n\tT__tinterval       Oid = 1025\n\tT__polygon         Oid = 1027\n\tT__oid             Oid = 1028\n\tT_aclitem          Oid = 1033\n\tT__aclitem         Oid = 1034\n\tT__macaddr         Oid = 1040\n\tT__inet            Oid = 1041\n\tT_bpchar           Oid = 1042\n\tT_varchar          Oid = 1043\n\tT_date             Oid = 1082\n\tT_time             Oid = 1083\n\tT_timestamp        Oid = 1114\n\tT__timestamp       Oid = 1115\n\tT__date            Oid = 1182\n\tT__time            Oid = 1183\n\tT_timestamptz      Oid = 1184\n\tT__timestamptz     Oid = 1185\n\tT_interval         Oid = 1186\n\tT__interval        Oid = 1187\n\tT__numeric         Oid = 1231\n\tT_pg_database      Oid = 1248\n\tT__cstring         Oid = 1263\n\tT_timetz           Oid = 1266\n\tT__timetz          Oid = 1270\n\tT_bit              Oid = 1560\n\tT__bit             Oid = 1561\n\tT_varbit           Oid = 1562\n\tT__varbit          Oid = 1563\n\tT_numeric          Oid = 1700\n\tT_refcursor        Oid = 1790\n\tT__refcursor       Oid = 2201\n\tT_regprocedure     Oid = 2202\n\tT_regoper          Oid = 2203\n\tT_regoperator      Oid = 2204\n\tT_regclass         Oid = 2205\n\tT_regtype          Oid = 2206\n\tT__regprocedure    Oid = 2207\n\tT__regoper         Oid = 2208\n\tT__regoperator     Oid = 2209\n\tT__regclass        Oid = 2210\n\tT__regtype         Oid = 2211\n\tT_record           Oid = 2249\n\tT_cstring          Oid = 2275\n\tT_any              Oid = 2276\n\tT_anyarray         Oid = 2277\n\tT_void             Oid = 2278\n\tT_trigger          Oid = 2279\n\tT_language_handler Oid = 2280\n\tT_internal  
       Oid = 2281\n\tT_opaque           Oid = 2282\n\tT_anyelement       Oid = 2283\n\tT__record          Oid = 2287\n\tT_anynonarray      Oid = 2776\n\tT_pg_authid        Oid = 2842\n\tT_pg_auth_members  Oid = 2843\n\tT__txid_snapshot   Oid = 2949\n\tT_uuid             Oid = 2950\n\tT__uuid            Oid = 2951\n\tT_txid_snapshot    Oid = 2970\n\tT_fdw_handler      Oid = 3115\n\tT_pg_lsn           Oid = 3220\n\tT__pg_lsn          Oid = 3221\n\tT_tsm_handler      Oid = 3310\n\tT_anyenum          Oid = 3500\n\tT_tsvector         Oid = 3614\n\tT_tsquery          Oid = 3615\n\tT_gtsvector        Oid = 3642\n\tT__tsvector        Oid = 3643\n\tT__gtsvector       Oid = 3644\n\tT__tsquery         Oid = 3645\n\tT_regconfig        Oid = 3734\n\tT__regconfig       Oid = 3735\n\tT_regdictionary    Oid = 3769\n\tT__regdictionary   Oid = 3770\n\tT_jsonb            Oid = 3802\n\tT__jsonb           Oid = 3807\n\tT_anyrange         Oid = 3831\n\tT_event_trigger    Oid = 3838\n\tT_int4range        Oid = 3904\n\tT__int4range       Oid = 3905\n\tT_numrange         Oid = 3906\n\tT__numrange        Oid = 3907\n\tT_tsrange          Oid = 3908\n\tT__tsrange         Oid = 3909\n\tT_tstzrange        Oid = 3910\n\tT__tstzrange       Oid = 3911\n\tT_daterange        Oid = 3912\n\tT__daterange       Oid = 3913\n\tT_int8range        Oid = 3926\n\tT__int8range       Oid = 3927\n\tT_pg_shseclabel    Oid = 4066\n\tT_regnamespace     Oid = 4089\n\tT__regnamespace    Oid = 4090\n\tT_regrole          Oid = 4096\n\tT__regrole         Oid = 4097\n)\n\nvar TypeName = map[Oid]string{\n\tT_bool:             \"BOOL\",\n\tT_bytea:            \"BYTEA\",\n\tT_char:             \"CHAR\",\n\tT_name:             \"NAME\",\n\tT_int8:             \"INT8\",\n\tT_int2:             \"INT2\",\n\tT_int2vector:       \"INT2VECTOR\",\n\tT_int4:             \"INT4\",\n\tT_regproc:          \"REGPROC\",\n\tT_text:             \"TEXT\",\n\tT_oid:              \"OID\",\n\tT_tid:              \"TID\",\n\tT_xid:              
\"XID\",\n\tT_cid:              \"CID\",\n\tT_oidvector:        \"OIDVECTOR\",\n\tT_pg_ddl_command:   \"PG_DDL_COMMAND\",\n\tT_pg_type:          \"PG_TYPE\",\n\tT_pg_attribute:     \"PG_ATTRIBUTE\",\n\tT_pg_proc:          \"PG_PROC\",\n\tT_pg_class:         \"PG_CLASS\",\n\tT_json:             \"JSON\",\n\tT_xml:              \"XML\",\n\tT__xml:             \"_XML\",\n\tT_pg_node_tree:     \"PG_NODE_TREE\",\n\tT__json:            \"_JSON\",\n\tT_smgr:             \"SMGR\",\n\tT_index_am_handler: \"INDEX_AM_HANDLER\",\n\tT_point:            \"POINT\",\n\tT_lseg:             \"LSEG\",\n\tT_path:             \"PATH\",\n\tT_box:              \"BOX\",\n\tT_polygon:          \"POLYGON\",\n\tT_line:             \"LINE\",\n\tT__line:            \"_LINE\",\n\tT_cidr:             \"CIDR\",\n\tT__cidr:            \"_CIDR\",\n\tT_float4:           \"FLOAT4\",\n\tT_float8:           \"FLOAT8\",\n\tT_abstime:          \"ABSTIME\",\n\tT_reltime:          \"RELTIME\",\n\tT_tinterval:        \"TINTERVAL\",\n\tT_unknown:          \"UNKNOWN\",\n\tT_circle:           \"CIRCLE\",\n\tT__circle:          \"_CIRCLE\",\n\tT_money:            \"MONEY\",\n\tT__money:           \"_MONEY\",\n\tT_macaddr:          \"MACADDR\",\n\tT_inet:             \"INET\",\n\tT__bool:            \"_BOOL\",\n\tT__bytea:           \"_BYTEA\",\n\tT__char:            \"_CHAR\",\n\tT__name:            \"_NAME\",\n\tT__int2:            \"_INT2\",\n\tT__int2vector:      \"_INT2VECTOR\",\n\tT__int4:            \"_INT4\",\n\tT__regproc:         \"_REGPROC\",\n\tT__text:            \"_TEXT\",\n\tT__tid:             \"_TID\",\n\tT__xid:             \"_XID\",\n\tT__cid:             \"_CID\",\n\tT__oidvector:       \"_OIDVECTOR\",\n\tT__bpchar:          \"_BPCHAR\",\n\tT__varchar:         \"_VARCHAR\",\n\tT__int8:            \"_INT8\",\n\tT__point:           \"_POINT\",\n\tT__lseg:            \"_LSEG\",\n\tT__path:            \"_PATH\",\n\tT__box:             \"_BOX\",\n\tT__float4:          \"_FLOAT4\",\n\tT__float8:    
      \"_FLOAT8\",\n\tT__abstime:         \"_ABSTIME\",\n\tT__reltime:         \"_RELTIME\",\n\tT__tinterval:       \"_TINTERVAL\",\n\tT__polygon:         \"_POLYGON\",\n\tT__oid:             \"_OID\",\n\tT_aclitem:          \"ACLITEM\",\n\tT__aclitem:         \"_ACLITEM\",\n\tT__macaddr:         \"_MACADDR\",\n\tT__inet:            \"_INET\",\n\tT_bpchar:           \"BPCHAR\",\n\tT_varchar:          \"VARCHAR\",\n\tT_date:             \"DATE\",\n\tT_time:             \"TIME\",\n\tT_timestamp:        \"TIMESTAMP\",\n\tT__timestamp:       \"_TIMESTAMP\",\n\tT__date:            \"_DATE\",\n\tT__time:            \"_TIME\",\n\tT_timestamptz:      \"TIMESTAMPTZ\",\n\tT__timestamptz:     \"_TIMESTAMPTZ\",\n\tT_interval:         \"INTERVAL\",\n\tT__interval:        \"_INTERVAL\",\n\tT__numeric:         \"_NUMERIC\",\n\tT_pg_database:      \"PG_DATABASE\",\n\tT__cstring:         \"_CSTRING\",\n\tT_timetz:           \"TIMETZ\",\n\tT__timetz:          \"_TIMETZ\",\n\tT_bit:              \"BIT\",\n\tT__bit:             \"_BIT\",\n\tT_varbit:           \"VARBIT\",\n\tT__varbit:          \"_VARBIT\",\n\tT_numeric:          \"NUMERIC\",\n\tT_refcursor:        \"REFCURSOR\",\n\tT__refcursor:       \"_REFCURSOR\",\n\tT_regprocedure:     \"REGPROCEDURE\",\n\tT_regoper:          \"REGOPER\",\n\tT_regoperator:      \"REGOPERATOR\",\n\tT_regclass:         \"REGCLASS\",\n\tT_regtype:          \"REGTYPE\",\n\tT__regprocedure:    \"_REGPROCEDURE\",\n\tT__regoper:         \"_REGOPER\",\n\tT__regoperator:     \"_REGOPERATOR\",\n\tT__regclass:        \"_REGCLASS\",\n\tT__regtype:         \"_REGTYPE\",\n\tT_record:           \"RECORD\",\n\tT_cstring:          \"CSTRING\",\n\tT_any:              \"ANY\",\n\tT_anyarray:         \"ANYARRAY\",\n\tT_void:             \"VOID\",\n\tT_trigger:          \"TRIGGER\",\n\tT_language_handler: \"LANGUAGE_HANDLER\",\n\tT_internal:         \"INTERNAL\",\n\tT_opaque:           \"OPAQUE\",\n\tT_anyelement:       \"ANYELEMENT\",\n\tT__record:          
\"_RECORD\",\n\tT_anynonarray:      \"ANYNONARRAY\",\n\tT_pg_authid:        \"PG_AUTHID\",\n\tT_pg_auth_members:  \"PG_AUTH_MEMBERS\",\n\tT__txid_snapshot:   \"_TXID_SNAPSHOT\",\n\tT_uuid:             \"UUID\",\n\tT__uuid:            \"_UUID\",\n\tT_txid_snapshot:    \"TXID_SNAPSHOT\",\n\tT_fdw_handler:      \"FDW_HANDLER\",\n\tT_pg_lsn:           \"PG_LSN\",\n\tT__pg_lsn:          \"_PG_LSN\",\n\tT_tsm_handler:      \"TSM_HANDLER\",\n\tT_anyenum:          \"ANYENUM\",\n\tT_tsvector:         \"TSVECTOR\",\n\tT_tsquery:          \"TSQUERY\",\n\tT_gtsvector:        \"GTSVECTOR\",\n\tT__tsvector:        \"_TSVECTOR\",\n\tT__gtsvector:       \"_GTSVECTOR\",\n\tT__tsquery:         \"_TSQUERY\",\n\tT_regconfig:        \"REGCONFIG\",\n\tT__regconfig:       \"_REGCONFIG\",\n\tT_regdictionary:    \"REGDICTIONARY\",\n\tT__regdictionary:   \"_REGDICTIONARY\",\n\tT_jsonb:            \"JSONB\",\n\tT__jsonb:           \"_JSONB\",\n\tT_anyrange:         \"ANYRANGE\",\n\tT_event_trigger:    \"EVENT_TRIGGER\",\n\tT_int4range:        \"INT4RANGE\",\n\tT__int4range:       \"_INT4RANGE\",\n\tT_numrange:         \"NUMRANGE\",\n\tT__numrange:        \"_NUMRANGE\",\n\tT_tsrange:          \"TSRANGE\",\n\tT__tsrange:         \"_TSRANGE\",\n\tT_tstzrange:        \"TSTZRANGE\",\n\tT__tstzrange:       \"_TSTZRANGE\",\n\tT_daterange:        \"DATERANGE\",\n\tT__daterange:       \"_DATERANGE\",\n\tT_int8range:        \"INT8RANGE\",\n\tT__int8range:       \"_INT8RANGE\",\n\tT_pg_shseclabel:    \"PG_SHSECLABEL\",\n\tT_regnamespace:     \"REGNAMESPACE\",\n\tT__regnamespace:    \"_REGNAMESPACE\",\n\tT_regrole:          \"REGROLE\",\n\tT__regrole:         \"_REGROLE\",\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/rows.go",
    "content": "package pq\n\nimport (\n\t\"math\"\n\t\"reflect\"\n\t\"time\"\n\n\t\"github.com/lib/pq/oid\"\n)\n\nconst headerSize = 4\n\ntype fieldDesc struct {\n\t// The object ID of the data type.\n\tOID oid.Oid\n\t// The data type size (see pg_type.typlen).\n\t// Note that negative values denote variable-width types.\n\tLen int\n\t// The type modifier (see pg_attribute.atttypmod).\n\t// The meaning of the modifier is type-specific.\n\tMod int\n}\n\nfunc (fd fieldDesc) Type() reflect.Type {\n\tswitch fd.OID {\n\tcase oid.T_int8:\n\t\treturn reflect.TypeOf(int64(0))\n\tcase oid.T_int4:\n\t\treturn reflect.TypeOf(int32(0))\n\tcase oid.T_int2:\n\t\treturn reflect.TypeOf(int16(0))\n\tcase oid.T_varchar, oid.T_text:\n\t\treturn reflect.TypeOf(\"\")\n\tcase oid.T_bool:\n\t\treturn reflect.TypeOf(false)\n\tcase oid.T_date, oid.T_time, oid.T_timetz, oid.T_timestamp, oid.T_timestamptz:\n\t\treturn reflect.TypeOf(time.Time{})\n\tcase oid.T_bytea:\n\t\treturn reflect.TypeOf([]byte(nil))\n\tdefault:\n\t\treturn reflect.TypeOf(new(interface{})).Elem()\n\t}\n}\n\nfunc (fd fieldDesc) Name() string {\n\treturn oid.TypeName[fd.OID]\n}\n\nfunc (fd fieldDesc) Length() (length int64, ok bool) {\n\tswitch fd.OID {\n\tcase oid.T_text, oid.T_bytea:\n\t\treturn math.MaxInt64, true\n\tcase oid.T_varchar, oid.T_bpchar:\n\t\treturn int64(fd.Mod - headerSize), true\n\tdefault:\n\t\treturn 0, false\n\t}\n}\n\nfunc (fd fieldDesc) PrecisionScale() (precision, scale int64, ok bool) {\n\tswitch fd.OID {\n\tcase oid.T_numeric, oid.T__numeric:\n\t\tmod := fd.Mod - headerSize\n\t\tprecision = int64((mod >> 16) & 0xffff)\n\t\tscale = int64(mod & 0xffff)\n\t\treturn precision, scale, true\n\tdefault:\n\t\treturn 0, 0, false\n\t}\n}\n\n// ColumnTypeScanType returns the value type that can be used to scan types into.\nfunc (rs *rows) ColumnTypeScanType(index int) reflect.Type {\n\treturn rs.colTyps[index].Type()\n}\n\n// ColumnTypeDatabaseTypeName return the database system type name.\nfunc (rs 
*rows) ColumnTypeDatabaseTypeName(index int) string {\n\treturn rs.colTyps[index].Name()\n}\n\n// ColumnTypeLength returns the length of the column type if the column is a\n// variable length type. If the column is not a variable length type ok\n// should return false.\nfunc (rs *rows) ColumnTypeLength(index int) (length int64, ok bool) {\n\treturn rs.colTyps[index].Length()\n}\n\n// ColumnTypePrecisionScale should return the precision and scale for decimal\n// types. If not applicable, ok should be false.\nfunc (rs *rows) ColumnTypePrecisionScale(index int) (precision, scale int64, ok bool) {\n\treturn rs.colTyps[index].PrecisionScale()\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/scram/scram.go",
    "content": "// Copyright (c) 2014 - Gustavo Niemeyer <gustavo@niemeyer.net>\n//\n// All rights reserved.\n//\n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are met:\n//\n// 1. Redistributions of source code must retain the above copyright notice, this\n//    list of conditions and the following disclaimer.\n// 2. Redistributions in binary form must reproduce the above copyright notice,\n//    this list of conditions and the following disclaimer in the documentation\n//    and/or other materials provided with the distribution.\n//\n// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR\n// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n// ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n// Package scram implements a SCRAM-{SHA-1,etc} client per RFC5802.\n//\n// http://tools.ietf.org/html/rfc5802\n//\npackage scram\n\nimport (\n\t\"bytes\"\n\t\"crypto/hmac\"\n\t\"crypto/rand\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"hash\"\n\t\"strconv\"\n\t\"strings\"\n)\n\n// Client implements a SCRAM-* client (SCRAM-SHA-1, SCRAM-SHA-256, etc).\n//\n// A Client may be used within a SASL conversation with logic resembling:\n//\n//    var in []byte\n//    var client = scram.NewClient(sha1.New, user, pass)\n//    for client.Step(in) {\n//            out := 
client.Out()\n//            // send out to server\n//            in := serverOut\n//    }\n//    if client.Err() != nil {\n//            // auth failed\n//    }\n//\ntype Client struct {\n\tnewHash func() hash.Hash\n\n\tuser string\n\tpass string\n\tstep int\n\tout  bytes.Buffer\n\terr  error\n\n\tclientNonce []byte\n\tserverNonce []byte\n\tsaltedPass  []byte\n\tauthMsg     bytes.Buffer\n}\n\n// NewClient returns a new SCRAM-* client with the provided hash algorithm.\n//\n// For SCRAM-SHA-256, for example, use:\n//\n//    client := scram.NewClient(sha256.New, user, pass)\n//\nfunc NewClient(newHash func() hash.Hash, user, pass string) *Client {\n\tc := &Client{\n\t\tnewHash: newHash,\n\t\tuser:    user,\n\t\tpass:    pass,\n\t}\n\tc.out.Grow(256)\n\tc.authMsg.Grow(256)\n\treturn c\n}\n\n// Out returns the data to be sent to the server in the current step.\nfunc (c *Client) Out() []byte {\n\tif c.out.Len() == 0 {\n\t\treturn nil\n\t}\n\treturn c.out.Bytes()\n}\n\n// Err returns the error that occurred, or nil if there were no errors.\nfunc (c *Client) Err() error {\n\treturn c.err\n}\n\n// SetNonce sets the client nonce to the provided value.\n// If not set, the nonce is generated automatically out of crypto/rand on the first step.\nfunc (c *Client) SetNonce(nonce []byte) {\n\tc.clientNonce = nonce\n}\n\nvar escaper = strings.NewReplacer(\"=\", \"=3D\", \",\", \"=2C\")\n\n// Step processes the incoming data from the server and makes the\n// next round of data for the server available via Client.Out.\n// Step returns false if there are no errors and more data is\n// still expected.\nfunc (c *Client) Step(in []byte) bool {\n\tc.out.Reset()\n\tif c.step > 2 || c.err != nil {\n\t\treturn false\n\t}\n\tc.step++\n\tswitch c.step {\n\tcase 1:\n\t\tc.err = c.step1(in)\n\tcase 2:\n\t\tc.err = c.step2(in)\n\tcase 3:\n\t\tc.err = c.step3(in)\n\t}\n\treturn c.step > 2 || c.err != nil\n}\n\nfunc (c *Client) step1(in []byte) error {\n\tif len(c.clientNonce) == 0 {\n\t\tconst 
nonceLen = 16\n\t\tbuf := make([]byte, nonceLen+b64.EncodedLen(nonceLen))\n\t\tif _, err := rand.Read(buf[:nonceLen]); err != nil {\n\t\t\treturn fmt.Errorf(\"cannot read random SCRAM-SHA-256 nonce from operating system: %v\", err)\n\t\t}\n\t\tc.clientNonce = buf[nonceLen:]\n\t\tb64.Encode(c.clientNonce, buf[:nonceLen])\n\t}\n\tc.authMsg.WriteString(\"n=\")\n\tescaper.WriteString(&c.authMsg, c.user)\n\tc.authMsg.WriteString(\",r=\")\n\tc.authMsg.Write(c.clientNonce)\n\n\tc.out.WriteString(\"n,,\")\n\tc.out.Write(c.authMsg.Bytes())\n\treturn nil\n}\n\nvar b64 = base64.StdEncoding\n\nfunc (c *Client) step2(in []byte) error {\n\tc.authMsg.WriteByte(',')\n\tc.authMsg.Write(in)\n\n\tfields := bytes.Split(in, []byte(\",\"))\n\tif len(fields) != 3 {\n\t\treturn fmt.Errorf(\"expected 3 fields in first SCRAM-SHA-256 server message, got %d: %q\", len(fields), in)\n\t}\n\tif !bytes.HasPrefix(fields[0], []byte(\"r=\")) || len(fields[0]) < 2 {\n\t\treturn fmt.Errorf(\"server sent an invalid SCRAM-SHA-256 nonce: %q\", fields[0])\n\t}\n\tif !bytes.HasPrefix(fields[1], []byte(\"s=\")) || len(fields[1]) < 6 {\n\t\treturn fmt.Errorf(\"server sent an invalid SCRAM-SHA-256 salt: %q\", fields[1])\n\t}\n\tif !bytes.HasPrefix(fields[2], []byte(\"i=\")) || len(fields[2]) < 6 {\n\t\treturn fmt.Errorf(\"server sent an invalid SCRAM-SHA-256 iteration count: %q\", fields[2])\n\t}\n\n\tc.serverNonce = fields[0][2:]\n\tif !bytes.HasPrefix(c.serverNonce, c.clientNonce) {\n\t\treturn fmt.Errorf(\"server SCRAM-SHA-256 nonce is not prefixed by client nonce: got %q, want %q+\\\"...\\\"\", c.serverNonce, c.clientNonce)\n\t}\n\n\tsalt := make([]byte, b64.DecodedLen(len(fields[1][2:])))\n\tn, err := b64.Decode(salt, fields[1][2:])\n\tif err != nil {\n\t\treturn fmt.Errorf(\"cannot decode SCRAM-SHA-256 salt sent by server: %q\", fields[1])\n\t}\n\tsalt = salt[:n]\n\titerCount, err := strconv.Atoi(string(fields[2][2:]))\n\tif err != nil {\n\t\treturn fmt.Errorf(\"server sent an invalid SCRAM-SHA-256 
iteration count: %q\", fields[2])\n\t}\n\tc.saltPassword(salt, iterCount)\n\n\tc.authMsg.WriteString(\",c=biws,r=\")\n\tc.authMsg.Write(c.serverNonce)\n\n\tc.out.WriteString(\"c=biws,r=\")\n\tc.out.Write(c.serverNonce)\n\tc.out.WriteString(\",p=\")\n\tc.out.Write(c.clientProof())\n\treturn nil\n}\n\nfunc (c *Client) step3(in []byte) error {\n\tvar isv, ise bool\n\tvar fields = bytes.Split(in, []byte(\",\"))\n\tif len(fields) == 1 {\n\t\tisv = bytes.HasPrefix(fields[0], []byte(\"v=\"))\n\t\tise = bytes.HasPrefix(fields[0], []byte(\"e=\"))\n\t}\n\tif ise {\n\t\treturn fmt.Errorf(\"SCRAM-SHA-256 authentication error: %s\", fields[0][2:])\n\t} else if !isv {\n\t\treturn fmt.Errorf(\"unsupported SCRAM-SHA-256 final message from server: %q\", in)\n\t}\n\tif !bytes.Equal(c.serverSignature(), fields[0][2:]) {\n\t\treturn fmt.Errorf(\"cannot authenticate SCRAM-SHA-256 server signature: %q\", fields[0][2:])\n\t}\n\treturn nil\n}\n\nfunc (c *Client) saltPassword(salt []byte, iterCount int) {\n\tmac := hmac.New(c.newHash, []byte(c.pass))\n\tmac.Write(salt)\n\tmac.Write([]byte{0, 0, 0, 1})\n\tui := mac.Sum(nil)\n\thi := make([]byte, len(ui))\n\tcopy(hi, ui)\n\tfor i := 1; i < iterCount; i++ {\n\t\tmac.Reset()\n\t\tmac.Write(ui)\n\t\tmac.Sum(ui[:0])\n\t\tfor j, b := range ui {\n\t\t\thi[j] ^= b\n\t\t}\n\t}\n\tc.saltedPass = hi\n}\n\nfunc (c *Client) clientProof() []byte {\n\tmac := hmac.New(c.newHash, c.saltedPass)\n\tmac.Write([]byte(\"Client Key\"))\n\tclientKey := mac.Sum(nil)\n\thash := c.newHash()\n\thash.Write(clientKey)\n\tstoredKey := hash.Sum(nil)\n\tmac = hmac.New(c.newHash, storedKey)\n\tmac.Write(c.authMsg.Bytes())\n\tclientProof := mac.Sum(nil)\n\tfor i, b := range clientKey {\n\t\tclientProof[i] ^= b\n\t}\n\tclientProof64 := make([]byte, b64.EncodedLen(len(clientProof)))\n\tb64.Encode(clientProof64, clientProof)\n\treturn clientProof64\n}\n\nfunc (c *Client) serverSignature() []byte {\n\tmac := hmac.New(c.newHash, c.saltedPass)\n\tmac.Write([]byte(\"Server 
Key\"))\n\tserverKey := mac.Sum(nil)\n\n\tmac = hmac.New(c.newHash, serverKey)\n\tmac.Write(c.authMsg.Bytes())\n\tserverSignature := mac.Sum(nil)\n\n\tencoded := make([]byte, b64.EncodedLen(len(serverSignature)))\n\tb64.Encode(encoded, serverSignature)\n\treturn encoded\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/ssl.go",
    "content": "package pq\n\nimport (\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"io/ioutil\"\n\t\"net\"\n\t\"os\"\n\t\"os/user\"\n\t\"path/filepath\"\n\t\"strings\"\n)\n\n// ssl generates a function to upgrade a net.Conn based on the \"sslmode\" and\n// related settings. The function is nil when no upgrade should take place.\nfunc ssl(o values) (func(net.Conn) (net.Conn, error), error) {\n\tverifyCaOnly := false\n\ttlsConf := tls.Config{}\n\tswitch mode := o[\"sslmode\"]; mode {\n\t// \"require\" is the default.\n\tcase \"\", \"require\":\n\t\t// We must skip TLS's own verification since it requires full\n\t\t// verification since Go 1.3.\n\t\ttlsConf.InsecureSkipVerify = true\n\n\t\t// From http://www.postgresql.org/docs/current/static/libpq-ssl.html:\n\t\t//\n\t\t// Note: For backwards compatibility with earlier versions of\n\t\t// PostgreSQL, if a root CA file exists, the behavior of\n\t\t// sslmode=require will be the same as that of verify-ca, meaning the\n\t\t// server certificate is validated against the CA. 
Relying on this\n\t\t// behavior is discouraged, and applications that need certificate\n\t\t// validation should always use verify-ca or verify-full.\n\t\tif sslrootcert, ok := o[\"sslrootcert\"]; ok {\n\t\t\tif _, err := os.Stat(sslrootcert); err == nil {\n\t\t\t\tverifyCaOnly = true\n\t\t\t} else {\n\t\t\t\tdelete(o, \"sslrootcert\")\n\t\t\t}\n\t\t}\n\tcase \"verify-ca\":\n\t\t// We must skip TLS's own verification since it requires full\n\t\t// verification since Go 1.3.\n\t\ttlsConf.InsecureSkipVerify = true\n\t\tverifyCaOnly = true\n\tcase \"verify-full\":\n\t\ttlsConf.ServerName = o[\"host\"]\n\tcase \"disable\":\n\t\treturn nil, nil\n\tdefault:\n\t\treturn nil, fmterrorf(`unsupported sslmode %q; only \"require\" (default), \"verify-full\", \"verify-ca\", and \"disable\" supported`, mode)\n\t}\n\n\t// Set Server Name Indication (SNI), if enabled by connection parameters.\n\t// By default SNI is on, any value which is not starting with \"1\" disables\n\t// SNI -- that is the same check vanilla libpq uses.\n\tif sslsni := o[\"sslsni\"]; sslsni == \"\" || strings.HasPrefix(sslsni, \"1\") {\n\t\t// RFC 6066 asks to not set SNI if the host is a literal IP address (IPv4\n\t\t// or IPv6). This check is coded already crypto.tls.hostnameInSNI, so\n\t\t// just always set ServerName here and let crypto/tls do the filtering.\n\t\ttlsConf.ServerName = o[\"host\"]\n\t}\n\n\terr := sslClientCertificates(&tlsConf, o)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\terr = sslCertificateAuthority(&tlsConf, o)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Accept renegotiation requests initiated by the backend.\n\t//\n\t// Renegotiation was deprecated then removed from PostgreSQL 9.5, but\n\t// the default configuration of older versions has it enabled. 
Redshift\n\t// also initiates renegotiations and cannot be reconfigured.\n\ttlsConf.Renegotiation = tls.RenegotiateFreelyAsClient\n\n\treturn func(conn net.Conn) (net.Conn, error) {\n\t\tclient := tls.Client(conn, &tlsConf)\n\t\tif verifyCaOnly {\n\t\t\terr := sslVerifyCertificateAuthority(client, &tlsConf)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t\treturn client, nil\n\t}, nil\n}\n\n// sslClientCertificates adds the certificate specified in the \"sslcert\" and\n// \"sslkey\" settings, or if they aren't set, from the .postgresql directory\n// in the user's home directory. The configured files must exist and have\n// the correct permissions.\nfunc sslClientCertificates(tlsConf *tls.Config, o values) error {\n\tsslinline := o[\"sslinline\"]\n\tif sslinline == \"true\" {\n\t\tcert, err := tls.X509KeyPair([]byte(o[\"sslcert\"]), []byte(o[\"sslkey\"]))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\ttlsConf.Certificates = []tls.Certificate{cert}\n\t\treturn nil\n\t}\n\n\t// user.Current() might fail when cross-compiling. 
We have to ignore the\n\t// error and continue without home directory defaults, since we wouldn't\n\t// know from where to load them.\n\tuser, _ := user.Current()\n\n\t// In libpq, the client certificate is only loaded if the setting is not blank.\n\t//\n\t// https://github.com/postgres/postgres/blob/REL9_6_2/src/interfaces/libpq/fe-secure-openssl.c#L1036-L1037\n\tsslcert := o[\"sslcert\"]\n\tif len(sslcert) == 0 && user != nil {\n\t\tsslcert = filepath.Join(user.HomeDir, \".postgresql\", \"postgresql.crt\")\n\t}\n\t// https://github.com/postgres/postgres/blob/REL9_6_2/src/interfaces/libpq/fe-secure-openssl.c#L1045\n\tif len(sslcert) == 0 {\n\t\treturn nil\n\t}\n\t// https://github.com/postgres/postgres/blob/REL9_6_2/src/interfaces/libpq/fe-secure-openssl.c#L1050:L1054\n\tif _, err := os.Stat(sslcert); os.IsNotExist(err) {\n\t\treturn nil\n\t} else if err != nil {\n\t\treturn err\n\t}\n\n\t// In libpq, the ssl key is only loaded if the setting is not blank.\n\t//\n\t// https://github.com/postgres/postgres/blob/REL9_6_2/src/interfaces/libpq/fe-secure-openssl.c#L1123-L1222\n\tsslkey := o[\"sslkey\"]\n\tif len(sslkey) == 0 && user != nil {\n\t\tsslkey = filepath.Join(user.HomeDir, \".postgresql\", \"postgresql.key\")\n\t}\n\n\tif len(sslkey) > 0 {\n\t\tif err := sslKeyPermissions(sslkey); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tcert, err := tls.LoadX509KeyPair(sslcert, sslkey)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\ttlsConf.Certificates = []tls.Certificate{cert}\n\treturn nil\n}\n\n// sslCertificateAuthority adds the RootCA specified in the \"sslrootcert\" setting.\nfunc sslCertificateAuthority(tlsConf *tls.Config, o values) error {\n\t// In libpq, the root certificate is only loaded if the setting is not blank.\n\t//\n\t// https://github.com/postgres/postgres/blob/REL9_6_2/src/interfaces/libpq/fe-secure-openssl.c#L950-L951\n\tif sslrootcert := o[\"sslrootcert\"]; len(sslrootcert) > 0 {\n\t\ttlsConf.RootCAs = x509.NewCertPool()\n\n\t\tsslinline := 
o[\"sslinline\"]\n\n\t\tvar cert []byte\n\t\tif sslinline == \"true\" {\n\t\t\tcert = []byte(sslrootcert)\n\t\t} else {\n\t\t\tvar err error\n\t\t\tcert, err = ioutil.ReadFile(sslrootcert)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\n\t\tif !tlsConf.RootCAs.AppendCertsFromPEM(cert) {\n\t\t\treturn fmterrorf(\"couldn't parse pem in sslrootcert\")\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// sslVerifyCertificateAuthority carries out a TLS handshake to the server and\n// verifies the presented certificate against the CA, i.e. the one specified in\n// sslrootcert or the system CA if sslrootcert was not specified.\nfunc sslVerifyCertificateAuthority(client *tls.Conn, tlsConf *tls.Config) error {\n\terr := client.Handshake()\n\tif err != nil {\n\t\treturn err\n\t}\n\tcerts := client.ConnectionState().PeerCertificates\n\topts := x509.VerifyOptions{\n\t\tDNSName:       client.ConnectionState().ServerName,\n\t\tIntermediates: x509.NewCertPool(),\n\t\tRoots:         tlsConf.RootCAs,\n\t}\n\tfor i, cert := range certs {\n\t\tif i == 0 {\n\t\t\tcontinue\n\t\t}\n\t\topts.Intermediates.AddCert(cert)\n\t}\n\t_, err = certs[0].Verify(opts)\n\treturn err\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/ssl_permissions.go",
    "content": "//go:build !windows\n// +build !windows\n\npackage pq\n\nimport (\n\t\"errors\"\n\t\"os\"\n\t\"syscall\"\n)\n\nconst (\n\trootUserID = uint32(0)\n\n\t// The maximum permissions that a private key file owned by a regular user\n\t// is allowed to have. This translates to u=rw.\n\tmaxUserOwnedKeyPermissions os.FileMode = 0600\n\n\t// The maximum permissions that a private key file owned by root is allowed\n\t// to have. This translates to u=rw,g=r.\n\tmaxRootOwnedKeyPermissions os.FileMode = 0640\n)\n\nvar (\n\terrSSLKeyHasUnacceptableUserPermissions = errors.New(\"permissions for files not owned by root should be u=rw (0600) or less\")\n\terrSSLKeyHasUnacceptableRootPermissions = errors.New(\"permissions for root owned files should be u=rw,g=r (0640) or less\")\n)\n\n// sslKeyPermissions checks the permissions on user-supplied ssl key files.\n// The key file should have very little access.\n//\n// libpq does not check key file permissions on Windows.\nfunc sslKeyPermissions(sslkey string) error {\n\tinfo, err := os.Stat(sslkey)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = hasCorrectPermissions(info)\n\n\t// return ErrSSLKeyHasWorldPermissions for backwards compatability with\n\t// existing code.\n\tif err == errSSLKeyHasUnacceptableUserPermissions || err == errSSLKeyHasUnacceptableRootPermissions {\n\t\terr = ErrSSLKeyHasWorldPermissions\n\t}\n\treturn err\n}\n\n// hasCorrectPermissions checks the file info (and the unix-specific stat_t\n// output) to verify that the permissions on the file are correct.\n//\n// If the file is owned by the same user the process is running as,\n// the file should only have 0600 (u=rw). If the file is owned by root,\n// and the group matches the group that the process is running in, the\n// permissions cannot be more than 0640 (u=rw,g=r). 
The file should\n// never have world permissions.\n//\n// Returns an error when the permission check fails.\nfunc hasCorrectPermissions(info os.FileInfo) error {\n\t// if file's permission matches 0600, allow access.\n\tuserPermissionMask := (os.FileMode(0777) ^ maxUserOwnedKeyPermissions)\n\n\t// regardless of if we're running as root or not, 0600 is acceptable,\n\t// so we return if we match the regular user permission mask.\n\tif info.Mode().Perm()&userPermissionMask == 0 {\n\t\treturn nil\n\t}\n\n\t// We need to pull the Unix file information to get the file's owner.\n\t// If we can't access it, there's some sort of operating system level error\n\t// and we should fail rather than attempting to use faulty information.\n\tsysInfo := info.Sys()\n\tif sysInfo == nil {\n\t\treturn ErrSSLKeyUnknownOwnership\n\t}\n\n\tunixStat, ok := sysInfo.(*syscall.Stat_t)\n\tif !ok {\n\t\treturn ErrSSLKeyUnknownOwnership\n\t}\n\n\t// if the file is owned by root, we allow 0640 (u=rw,g=r) to match what\n\t// Postgres does.\n\tif unixStat.Uid == rootUserID {\n\t\trootPermissionMask := (os.FileMode(0777) ^ maxRootOwnedKeyPermissions)\n\t\tif info.Mode().Perm()&rootPermissionMask != 0 {\n\t\t\treturn errSSLKeyHasUnacceptableRootPermissions\n\t\t}\n\t\treturn nil\n\t}\n\n\treturn errSSLKeyHasUnacceptableUserPermissions\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/ssl_windows.go",
    "content": "//go:build windows\n// +build windows\n\npackage pq\n\n// sslKeyPermissions checks the permissions on user-supplied ssl key files.\n// The key file should have very little access.\n//\n// libpq does not check key file permissions on Windows.\nfunc sslKeyPermissions(string) error { return nil }\n"
  },
  {
    "path": "vendor/github.com/lib/pq/url.go",
    "content": "package pq\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\tnurl \"net/url\"\n\t\"sort\"\n\t\"strings\"\n)\n\n// ParseURL no longer needs to be used by clients of this library since supplying a URL as a\n// connection string to sql.Open() is now supported:\n//\n//\tsql.Open(\"postgres\", \"postgres://bob:secret@1.2.3.4:5432/mydb?sslmode=verify-full\")\n//\n// It remains exported here for backwards-compatibility.\n//\n// ParseURL converts a url to a connection string for driver.Open.\n// Example:\n//\n//\t\"postgres://bob:secret@1.2.3.4:5432/mydb?sslmode=verify-full\"\n//\n// converts to:\n//\n//\t\"user=bob password=secret host=1.2.3.4 port=5432 dbname=mydb sslmode=verify-full\"\n//\n// A minimal example:\n//\n//\t\"postgres://\"\n//\n// This will be blank, causing driver.Open to use all of the defaults\nfunc ParseURL(url string) (string, error) {\n\tu, err := nurl.Parse(url)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tif u.Scheme != \"postgres\" && u.Scheme != \"postgresql\" {\n\t\treturn \"\", fmt.Errorf(\"invalid connection protocol: %s\", u.Scheme)\n\t}\n\n\tvar kvs []string\n\tescaper := strings.NewReplacer(`'`, `\\'`, `\\`, `\\\\`)\n\taccrue := func(k, v string) {\n\t\tif v != \"\" {\n\t\t\tkvs = append(kvs, k+\"='\"+escaper.Replace(v)+\"'\")\n\t\t}\n\t}\n\n\tif u.User != nil {\n\t\tv := u.User.Username()\n\t\taccrue(\"user\", v)\n\n\t\tv, _ = u.User.Password()\n\t\taccrue(\"password\", v)\n\t}\n\n\tif host, port, err := net.SplitHostPort(u.Host); err != nil {\n\t\taccrue(\"host\", u.Host)\n\t} else {\n\t\taccrue(\"host\", host)\n\t\taccrue(\"port\", port)\n\t}\n\n\tif u.Path != \"\" {\n\t\taccrue(\"dbname\", u.Path[1:])\n\t}\n\n\tq := u.Query()\n\tfor k := range q {\n\t\taccrue(k, q.Get(k))\n\t}\n\n\tsort.Strings(kvs) // Makes testing easier (not a performance concern)\n\treturn strings.Join(kvs, \" \"), nil\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/user_other.go",
    "content": "// Package pq is a pure Go Postgres driver for the database/sql package.\n\n//go:build js || android || hurd || zos\n// +build js android hurd zos\n\npackage pq\n\nfunc userCurrent() (string, error) {\n\treturn \"\", ErrCouldNotDetectUsername\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/user_posix.go",
    "content": "// Package pq is a pure Go Postgres driver for the database/sql package.\n\n//go:build aix || darwin || dragonfly || freebsd || (linux && !android) || nacl || netbsd || openbsd || plan9 || solaris || rumprun || illumos\n// +build aix darwin dragonfly freebsd linux,!android nacl netbsd openbsd plan9 solaris rumprun illumos\n\npackage pq\n\nimport (\n\t\"os\"\n\t\"os/user\"\n)\n\nfunc userCurrent() (string, error) {\n\tu, err := user.Current()\n\tif err == nil {\n\t\treturn u.Username, nil\n\t}\n\n\tname := os.Getenv(\"USER\")\n\tif name != \"\" {\n\t\treturn name, nil\n\t}\n\n\treturn \"\", ErrCouldNotDetectUsername\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/user_windows.go",
    "content": "// Package pq is a pure Go Postgres driver for the database/sql package.\npackage pq\n\nimport (\n\t\"path/filepath\"\n\t\"syscall\"\n)\n\n// Perform Windows user name lookup identically to libpq.\n//\n// The PostgreSQL code makes use of the legacy Win32 function\n// GetUserName, and that function has not been imported into stock Go.\n// GetUserNameEx is available though, the difference being that a\n// wider range of names are available.  To get the output to be the\n// same as GetUserName, only the base (or last) component of the\n// result is returned.\nfunc userCurrent() (string, error) {\n\tpw_name := make([]uint16, 128)\n\tpwname_size := uint32(len(pw_name)) - 1\n\terr := syscall.GetUserNameEx(syscall.NameSamCompatible, &pw_name[0], &pwname_size)\n\tif err != nil {\n\t\treturn \"\", ErrCouldNotDetectUsername\n\t}\n\ts := syscall.UTF16ToString(pw_name)\n\tu := filepath.Base(s)\n\treturn u, nil\n}\n"
  },
  {
    "path": "vendor/github.com/lib/pq/uuid.go",
    "content": "package pq\n\nimport (\n\t\"encoding/hex\"\n\t\"fmt\"\n)\n\n// decodeUUIDBinary interprets the binary format of a uuid, returning it in text format.\nfunc decodeUUIDBinary(src []byte) ([]byte, error) {\n\tif len(src) != 16 {\n\t\treturn nil, fmt.Errorf(\"pq: unable to decode uuid; bad length: %d\", len(src))\n\t}\n\n\tdst := make([]byte, 36)\n\tdst[8], dst[13], dst[18], dst[23] = '-', '-', '-', '-'\n\thex.Encode(dst[0:], src[0:4])\n\thex.Encode(dst[9:], src[4:6])\n\thex.Encode(dst[14:], src[6:8])\n\thex.Encode(dst[19:], src[8:10])\n\thex.Encode(dst[24:], src[10:16])\n\n\treturn dst, nil\n}\n"
  },
  {
    "path": "vendor/github.com/pmezard/go-difflib/LICENSE",
    "content": "Copyright (c) 2013, Patrick Mezard\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n    Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n    Redistributions in binary form must reproduce the above copyright\nnotice, this list of conditions and the following disclaimer in the\ndocumentation and/or other materials provided with the distribution.\n    The names of its contributors may not be used to endorse or promote\nproducts derived from this software without specific prior written\npermission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS\nIS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED\nTO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\nPARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nHOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED\nTO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\nPROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\nLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\nNEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "vendor/github.com/pmezard/go-difflib/difflib/difflib.go",
    "content": "// Package difflib is a partial port of Python difflib module.\n//\n// It provides tools to compare sequences of strings and generate textual diffs.\n//\n// The following class and functions have been ported:\n//\n// - SequenceMatcher\n//\n// - unified_diff\n//\n// - context_diff\n//\n// Getting unified diffs was the main goal of the port. Keep in mind this code\n// is mostly suitable to output text differences in a human friendly way, there\n// are no guarantees generated diffs are consumable by patch(1).\npackage difflib\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n)\n\nfunc min(a, b int) int {\n\tif a < b {\n\t\treturn a\n\t}\n\treturn b\n}\n\nfunc max(a, b int) int {\n\tif a > b {\n\t\treturn a\n\t}\n\treturn b\n}\n\nfunc calculateRatio(matches, length int) float64 {\n\tif length > 0 {\n\t\treturn 2.0 * float64(matches) / float64(length)\n\t}\n\treturn 1.0\n}\n\ntype Match struct {\n\tA    int\n\tB    int\n\tSize int\n}\n\ntype OpCode struct {\n\tTag byte\n\tI1  int\n\tI2  int\n\tJ1  int\n\tJ2  int\n}\n\n// SequenceMatcher compares sequence of strings. The basic\n// algorithm predates, and is a little fancier than, an algorithm\n// published in the late 1980's by Ratcliff and Obershelp under the\n// hyperbolic name \"gestalt pattern matching\".  The basic idea is to find\n// the longest contiguous matching subsequence that contains no \"junk\"\n// elements (R-O doesn't address junk).  The same idea is then applied\n// recursively to the pieces of the sequences to the left and to the right\n// of the matching subsequence.  This does not yield minimal edit\n// sequences, but does tend to yield matches that \"look right\" to people.\n//\n// SequenceMatcher tries to compute a \"human-friendly diff\" between two\n// sequences.  Unlike e.g. UNIX(tm) diff, the fundamental notion is the\n// longest *contiguous* & junk-free matching subsequence.  That's what\n// catches peoples' eyes.  
The Windows(tm) windiff has another interesting\n// notion, pairing up elements that appear uniquely in each sequence.\n// That, and the method here, appear to yield more intuitive difference\n// reports than does diff.  This method appears to be the least vulnerable\n// to synching up on blocks of \"junk lines\", though (like blank lines in\n// ordinary text files, or maybe \"<P>\" lines in HTML files).  That may be\n// because this is the only method of the 3 that has a *concept* of\n// \"junk\" <wink>.\n//\n// Timing:  Basic R-O is cubic time worst case and quadratic time expected\n// case.  SequenceMatcher is quadratic time for the worst case and has\n// expected-case behavior dependent in a complicated way on how many\n// elements the sequences have in common; best case time is linear.\ntype SequenceMatcher struct {\n\ta              []string\n\tb              []string\n\tb2j            map[string][]int\n\tIsJunk         func(string) bool\n\tautoJunk       bool\n\tbJunk          map[string]struct{}\n\tmatchingBlocks []Match\n\tfullBCount     map[string]int\n\tbPopular       map[string]struct{}\n\topCodes        []OpCode\n}\n\nfunc NewMatcher(a, b []string) *SequenceMatcher {\n\tm := SequenceMatcher{autoJunk: true}\n\tm.SetSeqs(a, b)\n\treturn &m\n}\n\nfunc NewMatcherWithJunk(a, b []string, autoJunk bool,\n\tisJunk func(string) bool) *SequenceMatcher {\n\n\tm := SequenceMatcher{IsJunk: isJunk, autoJunk: autoJunk}\n\tm.SetSeqs(a, b)\n\treturn &m\n}\n\n// Set two sequences to be compared.\nfunc (m *SequenceMatcher) SetSeqs(a, b []string) {\n\tm.SetSeq1(a)\n\tm.SetSeq2(b)\n}\n\n// Set the first sequence to be compared. 
The second sequence to be compared is\n// not changed.\n//\n// SequenceMatcher computes and caches detailed information about the second\n// sequence, so if you want to compare one sequence S against many sequences,\n// use .SetSeq2(s) once and call .SetSeq1(x) repeatedly for each of the other\n// sequences.\n//\n// See also SetSeqs() and SetSeq2().\nfunc (m *SequenceMatcher) SetSeq1(a []string) {\n\tif &a == &m.a {\n\t\treturn\n\t}\n\tm.a = a\n\tm.matchingBlocks = nil\n\tm.opCodes = nil\n}\n\n// Set the second sequence to be compared. The first sequence to be compared is\n// not changed.\nfunc (m *SequenceMatcher) SetSeq2(b []string) {\n\tif &b == &m.b {\n\t\treturn\n\t}\n\tm.b = b\n\tm.matchingBlocks = nil\n\tm.opCodes = nil\n\tm.fullBCount = nil\n\tm.chainB()\n}\n\nfunc (m *SequenceMatcher) chainB() {\n\t// Populate line -> index mapping\n\tb2j := map[string][]int{}\n\tfor i, s := range m.b {\n\t\tindices := b2j[s]\n\t\tindices = append(indices, i)\n\t\tb2j[s] = indices\n\t}\n\n\t// Purge junk elements\n\tm.bJunk = map[string]struct{}{}\n\tif m.IsJunk != nil {\n\t\tjunk := m.bJunk\n\t\tfor s, _ := range b2j {\n\t\t\tif m.IsJunk(s) {\n\t\t\t\tjunk[s] = struct{}{}\n\t\t\t}\n\t\t}\n\t\tfor s, _ := range junk {\n\t\t\tdelete(b2j, s)\n\t\t}\n\t}\n\n\t// Purge remaining popular elements\n\tpopular := map[string]struct{}{}\n\tn := len(m.b)\n\tif m.autoJunk && n >= 200 {\n\t\tntest := n/100 + 1\n\t\tfor s, indices := range b2j {\n\t\t\tif len(indices) > ntest {\n\t\t\t\tpopular[s] = struct{}{}\n\t\t\t}\n\t\t}\n\t\tfor s, _ := range popular {\n\t\t\tdelete(b2j, s)\n\t\t}\n\t}\n\tm.bPopular = popular\n\tm.b2j = b2j\n}\n\nfunc (m *SequenceMatcher) isBJunk(s string) bool {\n\t_, ok := m.bJunk[s]\n\treturn ok\n}\n\n// Find longest matching block in a[alo:ahi] and b[blo:bhi].\n//\n// If IsJunk is not defined:\n//\n// Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where\n//     alo <= i <= i+k <= ahi\n//     blo <= j <= j+k <= bhi\n// and for all (i',j',k') meeting 
those conditions,\n//     k >= k'\n//     i <= i'\n//     and if i == i', j <= j'\n//\n// In other words, of all maximal matching blocks, return one that\n// starts earliest in a, and of all those maximal matching blocks that\n// start earliest in a, return the one that starts earliest in b.\n//\n// If IsJunk is defined, first the longest matching block is\n// determined as above, but with the additional restriction that no\n// junk element appears in the block.  Then that block is extended as\n// far as possible by matching (only) junk elements on both sides.  So\n// the resulting block never matches on junk except as identical junk\n// happens to be adjacent to an \"interesting\" match.\n//\n// If no blocks match, return (alo, blo, 0).\nfunc (m *SequenceMatcher) findLongestMatch(alo, ahi, blo, bhi int) Match {\n\t// CAUTION:  stripping common prefix or suffix would be incorrect.\n\t// E.g.,\n\t//    ab\n\t//    acab\n\t// Longest matching block is \"ab\", but if common prefix is\n\t// stripped, it's \"a\" (tied with \"b\").  UNIX(tm) diff does so\n\t// strip, so ends up claiming that ab is changed to acab by\n\t// inserting \"ca\" in the middle.  
That's minimal but unintuitive:\n\t// \"it's obvious\" that someone inserted \"ac\" at the front.\n\t// Windiff ends up at the same place as diff, but by pairing up\n\t// the unique 'b's and then matching the first two 'a's.\n\tbesti, bestj, bestsize := alo, blo, 0\n\n\t// find longest junk-free match\n\t// during an iteration of the loop, j2len[j] = length of longest\n\t// junk-free match ending with a[i-1] and b[j]\n\tj2len := map[int]int{}\n\tfor i := alo; i != ahi; i++ {\n\t\t// look at all instances of a[i] in b; note that because\n\t\t// b2j has no junk keys, the loop is skipped if a[i] is junk\n\t\tnewj2len := map[int]int{}\n\t\tfor _, j := range m.b2j[m.a[i]] {\n\t\t\t// a[i] matches b[j]\n\t\t\tif j < blo {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif j >= bhi {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tk := j2len[j-1] + 1\n\t\t\tnewj2len[j] = k\n\t\t\tif k > bestsize {\n\t\t\t\tbesti, bestj, bestsize = i-k+1, j-k+1, k\n\t\t\t}\n\t\t}\n\t\tj2len = newj2len\n\t}\n\n\t// Extend the best by non-junk elements on each end.  In particular,\n\t// \"popular\" non-junk elements aren't in b2j, which greatly speeds\n\t// the inner loop above, but also means \"the best\" match so far\n\t// doesn't contain any junk *or* popular non-junk elements.\n\tfor besti > alo && bestj > blo && !m.isBJunk(m.b[bestj-1]) &&\n\t\tm.a[besti-1] == m.b[bestj-1] {\n\t\tbesti, bestj, bestsize = besti-1, bestj-1, bestsize+1\n\t}\n\tfor besti+bestsize < ahi && bestj+bestsize < bhi &&\n\t\t!m.isBJunk(m.b[bestj+bestsize]) &&\n\t\tm.a[besti+bestsize] == m.b[bestj+bestsize] {\n\t\tbestsize += 1\n\t}\n\n\t// Now that we have a wholly interesting match (albeit possibly\n\t// empty!), we may as well suck up the matching junk on each\n\t// side of it too.  Can't think of a good reason not to, and it\n\t// saves post-processing the (possibly considerable) expense of\n\t// figuring out what to do with it.  
In the case of an empty\n\t// interesting match, this is clearly the right thing to do,\n\t// because no other kind of match is possible in the regions.\n\tfor besti > alo && bestj > blo && m.isBJunk(m.b[bestj-1]) &&\n\t\tm.a[besti-1] == m.b[bestj-1] {\n\t\tbesti, bestj, bestsize = besti-1, bestj-1, bestsize+1\n\t}\n\tfor besti+bestsize < ahi && bestj+bestsize < bhi &&\n\t\tm.isBJunk(m.b[bestj+bestsize]) &&\n\t\tm.a[besti+bestsize] == m.b[bestj+bestsize] {\n\t\tbestsize += 1\n\t}\n\n\treturn Match{A: besti, B: bestj, Size: bestsize}\n}\n\n// Return list of triples describing matching subsequences.\n//\n// Each triple is of the form (i, j, n), and means that\n// a[i:i+n] == b[j:j+n].  The triples are monotonically increasing in\n// i and in j. It's also guaranteed that if (i, j, n) and (i', j', n') are\n// adjacent triples in the list, and the second is not the last triple in the\n// list, then i+n != i' or j+n != j'. IOW, adjacent triples never describe\n// adjacent equal blocks.\n//\n// The last triple is a dummy, (len(a), len(b), 0), and is the only\n// triple with n==0.\nfunc (m *SequenceMatcher) GetMatchingBlocks() []Match {\n\tif m.matchingBlocks != nil {\n\t\treturn m.matchingBlocks\n\t}\n\n\tvar matchBlocks func(alo, ahi, blo, bhi int, matched []Match) []Match\n\tmatchBlocks = func(alo, ahi, blo, bhi int, matched []Match) []Match {\n\t\tmatch := m.findLongestMatch(alo, ahi, blo, bhi)\n\t\ti, j, k := match.A, match.B, match.Size\n\t\tif match.Size > 0 {\n\t\t\tif alo < i && blo < j {\n\t\t\t\tmatched = matchBlocks(alo, i, blo, j, matched)\n\t\t\t}\n\t\t\tmatched = append(matched, match)\n\t\t\tif i+k < ahi && j+k < bhi {\n\t\t\t\tmatched = matchBlocks(i+k, ahi, j+k, bhi, matched)\n\t\t\t}\n\t\t}\n\t\treturn matched\n\t}\n\tmatched := matchBlocks(0, len(m.a), 0, len(m.b), nil)\n\n\t// It's possible that we have adjacent equal blocks in the\n\t// matching_blocks list now.\n\tnonAdjacent := []Match{}\n\ti1, j1, k1 := 0, 0, 0\n\tfor _, b := range matched 
{\n\t\t// Is this block adjacent to i1, j1, k1?\n\t\ti2, j2, k2 := b.A, b.B, b.Size\n\t\tif i1+k1 == i2 && j1+k1 == j2 {\n\t\t\t// Yes, so collapse them -- this just increases the length of\n\t\t\t// the first block by the length of the second, and the first\n\t\t\t// block so lengthened remains the block to compare against.\n\t\t\tk1 += k2\n\t\t} else {\n\t\t\t// Not adjacent.  Remember the first block (k1==0 means it's\n\t\t\t// the dummy we started with), and make the second block the\n\t\t\t// new block to compare against.\n\t\t\tif k1 > 0 {\n\t\t\t\tnonAdjacent = append(nonAdjacent, Match{i1, j1, k1})\n\t\t\t}\n\t\t\ti1, j1, k1 = i2, j2, k2\n\t\t}\n\t}\n\tif k1 > 0 {\n\t\tnonAdjacent = append(nonAdjacent, Match{i1, j1, k1})\n\t}\n\n\tnonAdjacent = append(nonAdjacent, Match{len(m.a), len(m.b), 0})\n\tm.matchingBlocks = nonAdjacent\n\treturn m.matchingBlocks\n}\n\n// Return list of 5-tuples describing how to turn a into b.\n//\n// Each tuple is of the form (tag, i1, i2, j1, j2).  The first tuple\n// has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the\n// tuple preceding it, and likewise for j1 == the previous j2.\n//\n// The tags are characters, with these meanings:\n//\n// 'r' (replace):  a[i1:i2] should be replaced by b[j1:j2]\n//\n// 'd' (delete):   a[i1:i2] should be deleted, j1==j2 in this case.\n//\n// 'i' (insert):   b[j1:j2] should be inserted at a[i1:i1], i1==i2 in this case.\n//\n// 'e' (equal):    a[i1:i2] == b[j1:j2]\nfunc (m *SequenceMatcher) GetOpCodes() []OpCode {\n\tif m.opCodes != nil {\n\t\treturn m.opCodes\n\t}\n\ti, j := 0, 0\n\tmatching := m.GetMatchingBlocks()\n\topCodes := make([]OpCode, 0, len(matching))\n\tfor _, m := range matching {\n\t\t//  invariant:  we've pumped out correct diffs to change\n\t\t//  a[:i] into b[:j], and the next matching block is\n\t\t//  a[ai:ai+size] == b[bj:bj+size]. 
So we need to pump\n\t\t//  out a diff to change a[i:ai] into b[j:bj], pump out\n\t\t//  the matching block, and move (i,j) beyond the match\n\t\tai, bj, size := m.A, m.B, m.Size\n\t\ttag := byte(0)\n\t\tif i < ai && j < bj {\n\t\t\ttag = 'r'\n\t\t} else if i < ai {\n\t\t\ttag = 'd'\n\t\t} else if j < bj {\n\t\t\ttag = 'i'\n\t\t}\n\t\tif tag > 0 {\n\t\t\topCodes = append(opCodes, OpCode{tag, i, ai, j, bj})\n\t\t}\n\t\ti, j = ai+size, bj+size\n\t\t// the list of matching blocks is terminated by a\n\t\t// sentinel with size 0\n\t\tif size > 0 {\n\t\t\topCodes = append(opCodes, OpCode{'e', ai, i, bj, j})\n\t\t}\n\t}\n\tm.opCodes = opCodes\n\treturn m.opCodes\n}\n\n// Isolate change clusters by eliminating ranges with no changes.\n//\n// Return a generator of groups with up to n lines of context.\n// Each group is in the same format as returned by GetOpCodes().\nfunc (m *SequenceMatcher) GetGroupedOpCodes(n int) [][]OpCode {\n\tif n < 0 {\n\t\tn = 3\n\t}\n\tcodes := m.GetOpCodes()\n\tif len(codes) == 0 {\n\t\tcodes = []OpCode{OpCode{'e', 0, 1, 0, 1}}\n\t}\n\t// Fixup leading and trailing groups if they show no changes.\n\tif codes[0].Tag == 'e' {\n\t\tc := codes[0]\n\t\ti1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2\n\t\tcodes[0] = OpCode{c.Tag, max(i1, i2-n), i2, max(j1, j2-n), j2}\n\t}\n\tif codes[len(codes)-1].Tag == 'e' {\n\t\tc := codes[len(codes)-1]\n\t\ti1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2\n\t\tcodes[len(codes)-1] = OpCode{c.Tag, i1, min(i2, i1+n), j1, min(j2, j1+n)}\n\t}\n\tnn := n + n\n\tgroups := [][]OpCode{}\n\tgroup := []OpCode{}\n\tfor _, c := range codes {\n\t\ti1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2\n\t\t// End the current group and start a new one whenever\n\t\t// there is a large range with no changes.\n\t\tif c.Tag == 'e' && i2-i1 > nn {\n\t\t\tgroup = append(group, OpCode{c.Tag, i1, min(i2, i1+n),\n\t\t\t\tj1, min(j2, j1+n)})\n\t\t\tgroups = append(groups, group)\n\t\t\tgroup = []OpCode{}\n\t\t\ti1, j1 = max(i1, i2-n), max(j1, j2-n)\n\t\t}\n\t\tgroup 
= append(group, OpCode{c.Tag, i1, i2, j1, j2})\n\t}\n\tif len(group) > 0 && !(len(group) == 1 && group[0].Tag == 'e') {\n\t\tgroups = append(groups, group)\n\t}\n\treturn groups\n}\n\n// Return a measure of the sequences' similarity (float in [0,1]).\n//\n// Where T is the total number of elements in both sequences, and\n// M is the number of matches, this is 2.0*M / T.\n// Note that this is 1 if the sequences are identical, and 0 if\n// they have nothing in common.\n//\n// .Ratio() is expensive to compute if you haven't already computed\n// .GetMatchingBlocks() or .GetOpCodes(), in which case you may\n// want to try .QuickRatio() or .RealQuickRatio() first to get an\n// upper bound.\nfunc (m *SequenceMatcher) Ratio() float64 {\n\tmatches := 0\n\tfor _, m := range m.GetMatchingBlocks() {\n\t\tmatches += m.Size\n\t}\n\treturn calculateRatio(matches, len(m.a)+len(m.b))\n}\n\n// Return an upper bound on ratio() relatively quickly.\n//\n// This isn't defined beyond that it is an upper bound on .Ratio(), and\n// is faster to compute.\nfunc (m *SequenceMatcher) QuickRatio() float64 {\n\t// viewing a and b as multisets, set matches to the cardinality\n\t// of their intersection; this counts the number of matches\n\t// without regard to order, so is clearly an upper bound\n\tif m.fullBCount == nil {\n\t\tm.fullBCount = map[string]int{}\n\t\tfor _, s := range m.b {\n\t\t\tm.fullBCount[s] = m.fullBCount[s] + 1\n\t\t}\n\t}\n\n\t// avail[x] is the number of times x appears in 'b' less the\n\t// number of times we've seen it in 'a' so far ... 
kinda\n\tavail := map[string]int{}\n\tmatches := 0\n\tfor _, s := range m.a {\n\t\tn, ok := avail[s]\n\t\tif !ok {\n\t\t\tn = m.fullBCount[s]\n\t\t}\n\t\tavail[s] = n - 1\n\t\tif n > 0 {\n\t\t\tmatches += 1\n\t\t}\n\t}\n\treturn calculateRatio(matches, len(m.a)+len(m.b))\n}\n\n// Return an upper bound on ratio() very quickly.\n//\n// This isn't defined beyond that it is an upper bound on .Ratio(), and\n// is faster to compute than either .Ratio() or .QuickRatio().\nfunc (m *SequenceMatcher) RealQuickRatio() float64 {\n\tla, lb := len(m.a), len(m.b)\n\treturn calculateRatio(min(la, lb), la+lb)\n}\n\n// Convert range to the \"ed\" format\nfunc formatRangeUnified(start, stop int) string {\n\t// Per the diff spec at http://www.unix.org/single_unix_specification/\n\tbeginning := start + 1 // lines start numbering with one\n\tlength := stop - start\n\tif length == 1 {\n\t\treturn fmt.Sprintf(\"%d\", beginning)\n\t}\n\tif length == 0 {\n\t\tbeginning -= 1 // empty ranges begin at line just before the range\n\t}\n\treturn fmt.Sprintf(\"%d,%d\", beginning, length)\n}\n\n// Unified diff parameters\ntype UnifiedDiff struct {\n\tA        []string // First sequence lines\n\tFromFile string   // First file name\n\tFromDate string   // First file time\n\tB        []string // Second sequence lines\n\tToFile   string   // Second file name\n\tToDate   string   // Second file time\n\tEol      string   // Headers end of line, defaults to LF\n\tContext  int      // Number of context lines\n}\n\n// Compare two sequences of lines; generate the delta as a unified diff.\n//\n// Unified diffs are a compact way of showing line changes and a few\n// lines of context.  The number of context lines is set by 'n' which\n// defaults to three.\n//\n// By default, the diff control lines (those with ---, +++, or @@) are\n// created with a trailing newline.  
This is helpful so that inputs\n// created from file.readlines() result in diffs that are suitable for\n// file.writelines() since both the inputs and outputs have trailing\n// newlines.\n//\n// For inputs that do not have trailing newlines, set the lineterm\n// argument to \"\" so that the output will be uniformly newline free.\n//\n// The unidiff format normally has a header for filenames and modification\n// times.  Any or all of these may be specified using strings for\n// 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.\n// The modification times are normally expressed in the ISO 8601 format.\nfunc WriteUnifiedDiff(writer io.Writer, diff UnifiedDiff) error {\n\tbuf := bufio.NewWriter(writer)\n\tdefer buf.Flush()\n\twf := func(format string, args ...interface{}) error {\n\t\t_, err := buf.WriteString(fmt.Sprintf(format, args...))\n\t\treturn err\n\t}\n\tws := func(s string) error {\n\t\t_, err := buf.WriteString(s)\n\t\treturn err\n\t}\n\n\tif len(diff.Eol) == 0 {\n\t\tdiff.Eol = \"\\n\"\n\t}\n\n\tstarted := false\n\tm := NewMatcher(diff.A, diff.B)\n\tfor _, g := range m.GetGroupedOpCodes(diff.Context) {\n\t\tif !started {\n\t\t\tstarted = true\n\t\t\tfromDate := \"\"\n\t\t\tif len(diff.FromDate) > 0 {\n\t\t\t\tfromDate = \"\\t\" + diff.FromDate\n\t\t\t}\n\t\t\ttoDate := \"\"\n\t\t\tif len(diff.ToDate) > 0 {\n\t\t\t\ttoDate = \"\\t\" + diff.ToDate\n\t\t\t}\n\t\t\tif diff.FromFile != \"\" || diff.ToFile != \"\" {\n\t\t\t\terr := wf(\"--- %s%s%s\", diff.FromFile, fromDate, diff.Eol)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\terr = wf(\"+++ %s%s%s\", diff.ToFile, toDate, diff.Eol)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tfirst, last := g[0], g[len(g)-1]\n\t\trange1 := formatRangeUnified(first.I1, last.I2)\n\t\trange2 := formatRangeUnified(first.J1, last.J2)\n\t\tif err := wf(\"@@ -%s +%s @@%s\", range1, range2, diff.Eol); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tfor _, c := range g {\n\t\t\ti1, 
i2, j1, j2 := c.I1, c.I2, c.J1, c.J2\n\t\t\tif c.Tag == 'e' {\n\t\t\t\tfor _, line := range diff.A[i1:i2] {\n\t\t\t\t\tif err := ws(\" \" + line); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif c.Tag == 'r' || c.Tag == 'd' {\n\t\t\t\tfor _, line := range diff.A[i1:i2] {\n\t\t\t\t\tif err := ws(\"-\" + line); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif c.Tag == 'r' || c.Tag == 'i' {\n\t\t\t\tfor _, line := range diff.B[j1:j2] {\n\t\t\t\t\tif err := ws(\"+\" + line); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// Like WriteUnifiedDiff but returns the diff as a string.\nfunc GetUnifiedDiffString(diff UnifiedDiff) (string, error) {\n\tw := &bytes.Buffer{}\n\terr := WriteUnifiedDiff(w, diff)\n\treturn string(w.Bytes()), err\n}\n\n// Convert range to the \"ed\" format.\nfunc formatRangeContext(start, stop int) string {\n\t// Per the diff spec at http://www.unix.org/single_unix_specification/\n\tbeginning := start + 1 // lines start numbering with one\n\tlength := stop - start\n\tif length == 0 {\n\t\tbeginning -= 1 // empty ranges begin at line just before the range\n\t}\n\tif length <= 1 {\n\t\treturn fmt.Sprintf(\"%d\", beginning)\n\t}\n\treturn fmt.Sprintf(\"%d,%d\", beginning, beginning+length-1)\n}\n\ntype ContextDiff UnifiedDiff\n\n// Compare two sequences of lines; generate the delta as a context diff.\n//\n// Context diffs are a compact way of showing line changes and a few\n// lines of context. The number of context lines is set by diff.Context\n// which defaults to three.\n//\n// By default, the diff control lines (those with *** or ---) are\n// created with a trailing newline.\n//\n// For inputs that do not have trailing newlines, set the diff.Eol\n// argument to \"\" so that the output will be uniformly newline free.\n//\n// The context diff format normally has a header for filenames and\n// modification times.  
Any or all of these may be specified using\n// strings for diff.FromFile, diff.ToFile, diff.FromDate, diff.ToDate.\n// The modification times are normally expressed in the ISO 8601 format.\n// If not specified, the strings default to blanks.\nfunc WriteContextDiff(writer io.Writer, diff ContextDiff) error {\n\tbuf := bufio.NewWriter(writer)\n\tdefer buf.Flush()\n\tvar diffErr error\n\twf := func(format string, args ...interface{}) {\n\t\t_, err := buf.WriteString(fmt.Sprintf(format, args...))\n\t\tif diffErr == nil && err != nil {\n\t\t\tdiffErr = err\n\t\t}\n\t}\n\tws := func(s string) {\n\t\t_, err := buf.WriteString(s)\n\t\tif diffErr == nil && err != nil {\n\t\t\tdiffErr = err\n\t\t}\n\t}\n\n\tif len(diff.Eol) == 0 {\n\t\tdiff.Eol = \"\\n\"\n\t}\n\n\tprefix := map[byte]string{\n\t\t'i': \"+ \",\n\t\t'd': \"- \",\n\t\t'r': \"! \",\n\t\t'e': \"  \",\n\t}\n\n\tstarted := false\n\tm := NewMatcher(diff.A, diff.B)\n\tfor _, g := range m.GetGroupedOpCodes(diff.Context) {\n\t\tif !started {\n\t\t\tstarted = true\n\t\t\tfromDate := \"\"\n\t\t\tif len(diff.FromDate) > 0 {\n\t\t\t\tfromDate = \"\\t\" + diff.FromDate\n\t\t\t}\n\t\t\ttoDate := \"\"\n\t\t\tif len(diff.ToDate) > 0 {\n\t\t\t\ttoDate = \"\\t\" + diff.ToDate\n\t\t\t}\n\t\t\tif diff.FromFile != \"\" || diff.ToFile != \"\" {\n\t\t\t\twf(\"*** %s%s%s\", diff.FromFile, fromDate, diff.Eol)\n\t\t\t\twf(\"--- %s%s%s\", diff.ToFile, toDate, diff.Eol)\n\t\t\t}\n\t\t}\n\n\t\tfirst, last := g[0], g[len(g)-1]\n\t\tws(\"***************\" + diff.Eol)\n\n\t\trange1 := formatRangeContext(first.I1, last.I2)\n\t\twf(\"*** %s ****%s\", range1, diff.Eol)\n\t\tfor _, c := range g {\n\t\t\tif c.Tag == 'r' || c.Tag == 'd' {\n\t\t\t\tfor _, cc := range g {\n\t\t\t\t\tif cc.Tag == 'i' {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\tfor _, line := range diff.A[cc.I1:cc.I2] {\n\t\t\t\t\t\tws(prefix[cc.Tag] + line)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\trange2 := formatRangeContext(first.J1, last.J2)\n\t\twf(\"--- %s 
----%s\", range2, diff.Eol)\n\t\tfor _, c := range g {\n\t\t\tif c.Tag == 'r' || c.Tag == 'i' {\n\t\t\t\tfor _, cc := range g {\n\t\t\t\t\tif cc.Tag == 'd' {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\tfor _, line := range diff.B[cc.J1:cc.J2] {\n\t\t\t\t\t\tws(prefix[cc.Tag] + line)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\treturn diffErr\n}\n\n// Like WriteContextDiff but returns the diff as a string.\nfunc GetContextDiffString(diff ContextDiff) (string, error) {\n\tw := &bytes.Buffer{}\n\terr := WriteContextDiff(w, diff)\n\treturn string(w.Bytes()), err\n}\n\n// Split a string on \"\\n\" while preserving them. The output can be used\n// as input for UnifiedDiff and ContextDiff structures.\nfunc SplitLines(s string) []string {\n\tlines := strings.SplitAfter(s, \"\\n\")\n\tlines[len(lines)-1] += \"\\n\"\n\treturn lines\n}\n"
  },
  {
    "path": "vendor/github.com/serenize/snaker/.travis.yml",
    "content": "language: go\narch:\n  - amd64\n  - ppc64le\ngo:\n  - 1.8\n  - 1.9\n  - tip\n# Disable version go:1.8\njobs:\n  exclude:\n    - arch: amd64\n      go: 1.8\n    - arch: ppc64le\n      go: 1.8\n\ninstall: go get -t -d -v ./... && go build -v ./...\n"
  },
  {
    "path": "vendor/github.com/serenize/snaker/LICENSE.txt",
    "content": "Copyright (c) 2015 Serenize UG (haftungsbeschränkt)\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n"
  },
  {
    "path": "vendor/github.com/serenize/snaker/README.md",
    "content": "# snaker\n\n[![Build Status](https://travis-ci.org/serenize/snaker.svg?branch=master)](https://travis-ci.org/serenize/snaker)\n[![GoDoc](https://godoc.org/github.com/serenize/snaker?status.svg)](https://godoc.org/github.com/serenize/snaker)\n\nThis is a small utility to convert camel-cased strings to snake case and back, except for some defined words.\n\n## QBS Usage\n\nTo replace the original toSnake and back algorithms for [https://github.com/coocood/qbs](https://github.com/coocood/qbs)\nyou can easily use snaker:\n\nImport snaker\n```go\nimport (\n  \"github.com/coocood/qbs\"\n  \"github.com/serenize/snaker\"\n)\n```\n\nRegister the snaker methods to qbs\n```go\nqbs.ColumnNameToFieldName = snaker.SnakeToCamel\nqbs.FieldNameToColumnName = snaker.CamelToSnake\n```\n"
  },
  {
    "path": "vendor/github.com/serenize/snaker/snaker.go",
    "content": "// Package snaker provides methods to convert CamelCase names to snake_case and back.\n// It considers the list of allowed initialisms used by github.com/golang/lint/golint (e.g. ID or HTTP)\npackage snaker\n\nimport (\n\t\"strings\"\n\t\"unicode\"\n)\n\n// CamelToSnake converts a given string to snake case\nfunc CamelToSnake(s string) string {\n\tvar result string\n\tvar words []string\n\tvar lastPos int\n\trs := []rune(s)\n\n\tfor i := 0; i < len(rs); i++ {\n\t\tif i > 0 && unicode.IsUpper(rs[i]) {\n\t\t\tif initialism := startsWithInitialism(s[lastPos:]); initialism != \"\" {\n\t\t\t\twords = append(words, initialism)\n\n\t\t\t\ti += len(initialism) - 1\n\t\t\t\tlastPos = i\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\twords = append(words, s[lastPos:i])\n\t\t\tlastPos = i\n\t\t}\n\t}\n\n\t// append the last word\n\tif s[lastPos:] != \"\" {\n\t\twords = append(words, s[lastPos:])\n\t}\n\n\tfor k, word := range words {\n\t\tif k > 0 {\n\t\t\tresult += \"_\"\n\t\t}\n\n\t\tresult += strings.ToLower(word)\n\t}\n\n\treturn result\n}\n\nfunc snakeToCamel(s string, upperCase bool) string {\n\tvar result string\n\n\twords := strings.Split(s, \"_\")\n\n\tfor i, word := range words {\n\t\tif exception := snakeToCamelExceptions[word]; len(exception) > 0 {\n\t\t\tresult += exception\n\t\t\tcontinue\n\t\t}\n\n\t\tif upperCase || i > 0 {\n\t\t\tif upper := strings.ToUpper(word); commonInitialisms[upper] {\n\t\t\t\tresult += upper\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tif (upperCase || i > 0) && len(word) > 0 {\n\t\t\tw := []rune(word)\n\t\t\tw[0] = unicode.ToUpper(w[0])\n\t\t\tresult += string(w)\n\t\t} else {\n\t\t\tresult += word\n\t\t}\n\t}\n\n\treturn result\n}\n\n// SnakeToCamel returns a string converted from snake case to upper camel case\nfunc SnakeToCamel(s string) string {\n\treturn snakeToCamel(s, true)\n}\n\n// SnakeToCamelLower returns a string converted from snake case to lower camel case\nfunc SnakeToCamelLower(s string) string {\n\treturn snakeToCamel(s, 
false)\n}\n\n// startsWithInitialism returns the initialism if the given string begins with it\nfunc startsWithInitialism(s string) string {\n\tvar initialism string\n\t// the longest initialism is 5 chars, the shortest 2\n\tfor i := 1; i <= 5; i++ {\n\t\tif len(s) > i-1 && commonInitialisms[s[:i]] {\n\t\t\tinitialism = s[:i]\n\t\t}\n\t}\n\treturn initialism\n}\n\n// commonInitialisms, taken from\n// https://github.com/golang/lint/blob/206c0f020eba0f7fbcfbc467a5eb808037df2ed6/lint.go#L731\nvar commonInitialisms = map[string]bool{\n\t\"ACL\":   true,\n\t\"API\":   true,\n\t\"ASCII\": true,\n\t\"CPU\":   true,\n\t\"CSS\":   true,\n\t\"DNS\":   true,\n\t\"EOF\":   true,\n\t\"ETA\":   true,\n\t\"GPU\":   true,\n\t\"GUID\":  true,\n\t\"HTML\":  true,\n\t\"HTTP\":  true,\n\t\"HTTPS\": true,\n\t\"ID\":    true,\n\t\"IP\":    true,\n\t\"JSON\":  true,\n\t\"LHS\":   true,\n\t\"OS\":    true,\n\t\"QPS\":   true,\n\t\"RAM\":   true,\n\t\"RHS\":   true,\n\t\"RPC\":   true,\n\t\"SLA\":   true,\n\t\"SMTP\":  true,\n\t\"SQL\":   true,\n\t\"SSH\":   true,\n\t\"TCP\":   true,\n\t\"TLS\":   true,\n\t\"TTL\":   true,\n\t\"UDP\":   true,\n\t\"UI\":    true,\n\t\"UID\":   true,\n\t\"UUID\":  true,\n\t\"URI\":   true,\n\t\"URL\":   true,\n\t\"UTF8\":  true,\n\t\"VM\":    true,\n\t\"XML\":   true,\n\t\"XMPP\":  true,\n\t\"XSRF\":  true,\n\t\"XSS\":   true,\n\t\"OAuth\": true,\n}\n\n// add exceptions here for things that are not automatically convertible\nvar snakeToCamelExceptions = map[string]string{\n\t\"oauth\": \"OAuth\",\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/.codeclimate.yml",
    "content": "engines:\n  gofmt:\n    enabled: true\n  golint:\n    enabled: true\n  govet:\n    enabled: true\n\nexclude_patterns:\n- \".github/\"\n- \"vendor/\"\n- \"codegen/\"\n- \"*.yml\"\n- \".*.yml\"\n- \"*.md\"\n- \"Gopkg.*\"\n- \"doc.go\"\n- \"type_specific_codegen_test.go\"\n- \"type_specific_codegen.go\"\n- \".gitignore\"\n- \"LICENSE\"\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/.gitignore",
    "content": "# Binaries for programs and plugins\n*.exe\n*.dll\n*.so\n*.dylib\n\n# Test binary, build with `go test -c`\n*.test\n\n# Output of the go coverage tool, specifically when used with LiteIDE\n*.out\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/LICENSE",
    "content": "The MIT License\n\nCopyright (c) 2014 Stretchr, Inc.\nCopyright (c) 2017-2018 objx contributors\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/README.md",
    "content": "# Objx\n[![Build Status](https://travis-ci.org/stretchr/objx.svg?branch=master)](https://travis-ci.org/stretchr/objx)\n[![Go Report Card](https://goreportcard.com/badge/github.com/stretchr/objx)](https://goreportcard.com/report/github.com/stretchr/objx)\n[![Maintainability](https://api.codeclimate.com/v1/badges/1d64bc6c8474c2074f2b/maintainability)](https://codeclimate.com/github/stretchr/objx/maintainability)\n[![Test Coverage](https://api.codeclimate.com/v1/badges/1d64bc6c8474c2074f2b/test_coverage)](https://codeclimate.com/github/stretchr/objx/test_coverage)\n[![Sourcegraph](https://sourcegraph.com/github.com/stretchr/objx/-/badge.svg)](https://sourcegraph.com/github.com/stretchr/objx)\n[![GoDoc](https://godoc.org/github.com/stretchr/objx?status.svg)](https://godoc.org/github.com/stretchr/objx)\n\nObjx - Go package for dealing with maps, slices, JSON and other data.\n\nGet started:\n\n- Install Objx with [one line of code](#installation), or [update it with another](#staying-up-to-date)\n- Check out the API Documentation http://godoc.org/github.com/stretchr/objx\n\n## Overview\nObjx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes a powerful `Get` method (among others) that allows you to easily and quickly get access to data within the map, without having to worry too much about type assertions, missing data, default values etc.\n\n### Pattern\nObjx uses a predictable pattern to make accessing data from within `map[string]interface{}` easy. Call one of the `objx.` functions to create your `objx.Map` to get going:\n\n    m, err := objx.FromJSON(json)\n\nNOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong; the rest will be optimistic and try to figure things out without panicking.\n\nUse `Get` to access the value you're interested in.  
You can use dot and array\nnotation too:\n\n     m.Get(\"places[0].latlng\")\n\nOnce you have sought the `Value` you're interested in, you can use the `Is*` methods to determine its type.\n\n     if m.Get(\"code\").IsStr() { // Your code... }\n\nOr you can just assume the type, and use one of the strong type methods to extract the real value:\n\n    m.Get(\"code\").Int()\n\nIf there's no value there (or if it's the wrong type) then a default value will be returned, or you can be explicit about the default value.\n\n     Get(\"code\").Int(-1)\n\nIf you're dealing with a slice of data as a value, Objx provides many useful methods for iterating, manipulating and selecting that data.  You can find out more by exploring the index below.\n\n### Reading data\nA simple example of how to use Objx:\n\n    // Use MustFromJSON to make an objx.Map from some JSON\n    m := objx.MustFromJSON(`{\"name\": \"Mat\", \"age\": 30}`)\n\n    // Get the details\n    name := m.Get(\"name\").Str()\n    age := m.Get(\"age\").Int()\n\n    // Get their nickname (or use their name if they don't have one)\n    nickname := m.Get(\"nickname\").Str(name)\n\n### Ranging\nSince `objx.Map` is a `map[string]interface{}` you can treat it as such.  For example, to `range` the data, do what you would expect:\n\n    m := objx.MustFromJSON(json)\n    for key, value := range m {\n      // Your code...\n    }\n\n## Installation\nTo install Objx, use go get:\n\n    go get github.com/stretchr/objx\n\n### Staying up to date\nTo update Objx to the latest version, run:\n\n    go get -u github.com/stretchr/objx\n\n### Supported go versions\nWe support the latest three major Go versions, which are 1.10, 1.11 and 1.12 at the moment.\n\n## Contributing\nPlease feel free to submit issues, fork the repository and send pull requests!\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/Taskfile.yml",
    "content": "version: '2'\n\nenv:\n  GOFLAGS: -mod=vendor\n\ntasks:\n  default:\n    deps: [test]\n\n  lint:\n    desc: Checks code style\n    cmds:\n      - gofmt -d -s *.go\n      - go vet ./...\n    silent: true\n\n  lint-fix:\n    desc: Fixes code style\n    cmds:\n      - gofmt -w -s *.go\n\n  test:\n    desc: Runs go tests\n    cmds:\n      - go test -race  ./...\n\n  test-coverage:\n    desc: Runs go tests and calculates test coverage\n    cmds:\n      - go test -race -coverprofile=c.out ./...\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/accessors.go",
    "content": "package objx\n\nimport (\n\t\"reflect\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n)\n\nconst (\n\t// PathSeparator is the character used to separate the elements\n\t// of the keypath.\n\t//\n\t// For example, `location.address.city`\n\tPathSeparator string = \".\"\n\n\t// arrayAccesRegexString is the regex used to extract the array number\n\t// from the access path\n\tarrayAccesRegexString = `^(.+)\\[([0-9]+)\\]$`\n\n\t// mapAccessRegexString is the regex used to extract the map key\n\t// from the access path\n\tmapAccessRegexString = `^([^\\[]*)\\[([^\\]]+)\\](.*)$`\n)\n\n// arrayAccesRegex is the compiled arrayAccesRegexString\nvar arrayAccesRegex = regexp.MustCompile(arrayAccesRegexString)\n\n// mapAccessRegex is the compiled mapAccessRegexString\nvar mapAccessRegex = regexp.MustCompile(mapAccessRegexString)\n\n// Get gets the value using the specified selector and\n// returns it inside a new Obj object.\n//\n// If it cannot find the value, Get will return a nil\n// value inside an instance of Obj.\n//\n// Get can only operate directly on map[string]interface{} and []interface{}.\n//\n// Example\n//\n// To access the title of the third chapter of the second book, do:\n//\n//    o.Get(\"books[1].chapters[2].title\")\nfunc (m Map) Get(selector string) *Value {\n\trawObj := access(m, selector, nil, false)\n\treturn &Value{data: rawObj}\n}\n\n// Set sets the value using the specified selector and\n// returns the object on which Set was called.\n//\n// Set can only operate directly on map[string]interface{} and []interface{}\n//\n// Example\n//\n// To set the title of the third chapter of the second book, do:\n//\n//    o.Set(\"books[1].chapters[2].title\",\"Time to Go\")\nfunc (m Map) Set(selector string, value interface{}) Map {\n\taccess(m, selector, value, true)\n\treturn m\n}\n\n// getIndex returns the index, which is held in s by two brackets.\n// It also returns s without the index part, e.g. 
name[1] will return (1, name).\n// If no index is found, -1 is returned\nfunc getIndex(s string) (int, string) {\n\tarrayMatches := arrayAccesRegex.FindStringSubmatch(s)\n\tif len(arrayMatches) > 0 {\n\t\t// Get the key into the map\n\t\tselector := arrayMatches[1]\n\t\t// Get the index into the array at the key\n\t\t// We know this cannot fail because arrayMatches[2] is an int for sure\n\t\tindex, _ := strconv.Atoi(arrayMatches[2])\n\t\treturn index, selector\n\t}\n\treturn -1, s\n}\n\n// getKey returns the key which is held in s by two brackets.\n// It also returns the next selector.\nfunc getKey(s string) (string, string) {\n\tselSegs := strings.SplitN(s, PathSeparator, 2)\n\tthisSel := selSegs[0]\n\tnextSel := \"\"\n\n\tif len(selSegs) > 1 {\n\t\tnextSel = selSegs[1]\n\t}\n\n\tmapMatches := mapAccessRegex.FindStringSubmatch(s)\n\tif len(mapMatches) > 0 {\n\t\tif _, err := strconv.Atoi(mapMatches[2]); err != nil {\n\t\t\tthisSel = mapMatches[1]\n\t\t\tnextSel = \"[\" + mapMatches[2] + \"]\" + mapMatches[3]\n\n\t\t\tif thisSel == \"\" {\n\t\t\t\tthisSel = mapMatches[2]\n\t\t\t\tnextSel = mapMatches[3]\n\t\t\t}\n\n\t\t\tif nextSel == \"\" {\n\t\t\t\tselSegs = []string{\"\", \"\"}\n\t\t\t} else if nextSel[0] == '.' 
{\n\t\t\t\tnextSel = nextSel[1:]\n\t\t\t}\n\t\t}\n\t}\n\n\treturn thisSel, nextSel\n}\n\n// access accesses the object using the selector and performs the\n// appropriate action.\nfunc access(current interface{}, selector string, value interface{}, isSet bool) interface{} {\n\tthisSel, nextSel := getKey(selector)\n\n\tindexes := []int{}\n\tfor strings.Contains(thisSel, \"[\") {\n\t\tprevSel := thisSel\n\t\tindex := -1\n\t\tindex, thisSel = getIndex(thisSel)\n\t\tindexes = append(indexes, index)\n\t\tif prevSel == thisSel {\n\t\t\tbreak\n\t\t}\n\t}\n\n\tif curMap, ok := current.(Map); ok {\n\t\tcurrent = map[string]interface{}(curMap)\n\t}\n\t// get the object in question\n\tswitch current.(type) {\n\tcase map[string]interface{}:\n\t\tcurMSI := current.(map[string]interface{})\n\t\tif nextSel == \"\" && isSet {\n\t\t\tcurMSI[thisSel] = value\n\t\t\treturn nil\n\t\t}\n\n\t\t_, ok := curMSI[thisSel].(map[string]interface{})\n\t\tif !ok {\n\t\t\t_, ok = curMSI[thisSel].(Map)\n\t\t}\n\n\t\tif (curMSI[thisSel] == nil || !ok) && len(indexes) == 0 && isSet {\n\t\t\tcurMSI[thisSel] = map[string]interface{}{}\n\t\t}\n\n\t\tcurrent = curMSI[thisSel]\n\tdefault:\n\t\tcurrent = nil\n\t}\n\n\t// do we need to access the item of an array?\n\tif len(indexes) > 0 {\n\t\tnum := len(indexes)\n\t\tfor num > 0 {\n\t\t\tnum--\n\t\t\tindex := indexes[num]\n\t\t\tindexes = indexes[:num]\n\t\t\tif array, ok := interSlice(current); ok {\n\t\t\t\tif index < len(array) {\n\t\t\t\t\tcurrent = array[index]\n\t\t\t\t} else {\n\t\t\t\t\tcurrent = nil\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif nextSel != \"\" {\n\t\tcurrent = access(current, nextSel, value, isSet)\n\t}\n\treturn current\n}\n\nfunc interSlice(slice interface{}) ([]interface{}, bool) {\n\tif array, ok := slice.([]interface{}); ok {\n\t\treturn array, ok\n\t}\n\n\ts := reflect.ValueOf(slice)\n\tif s.Kind() != reflect.Slice {\n\t\treturn nil, false\n\t}\n\n\tret := make([]interface{}, s.Len())\n\n\tfor i := 0; i < 
s.Len(); i++ {\n\t\tret[i] = s.Index(i).Interface()\n\t}\n\n\treturn ret, true\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/conversions.go",
    "content": "package objx\n\nimport (\n\t\"bytes\"\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"strconv\"\n)\n\n// SignatureSeparator is the character that is used to\n// separate the Base64 string from the security signature.\nconst SignatureSeparator = \"_\"\n\n// URLValuesSliceKeySuffix is the character that is used to\n// specify a suffix for slices parsed by URLValues.\n// If the suffix is set to \"[i]\", then the index of the slice\n// is used in place of i\n// Ex: Suffix \"[]\" would have the form a[]=b&a[]=c\n// OR Suffix \"[i]\" would have the form a[0]=b&a[1]=c\n// OR Suffix \"\" would have the form a=b&a=c\nvar urlValuesSliceKeySuffix = \"[]\"\n\nconst (\n\tURLValuesSliceKeySuffixEmpty = \"\"\n\tURLValuesSliceKeySuffixArray = \"[]\"\n\tURLValuesSliceKeySuffixIndex = \"[i]\"\n)\n\n// SetURLValuesSliceKeySuffix sets the character that is used to\n// specify a suffix for slices parsed by URLValues.\n// If the suffix is set to \"[i]\", then the index of the slice\n// is used in place of i\n// Ex: Suffix \"[]\" would have the form a[]=b&a[]=c\n// OR Suffix \"[i]\" would have the form a[0]=b&a[1]=c\n// OR Suffix \"\" would have the form a=b&a=c\nfunc SetURLValuesSliceKeySuffix(s string) error {\n\tif s == URLValuesSliceKeySuffixEmpty || s == URLValuesSliceKeySuffixArray || s == URLValuesSliceKeySuffixIndex {\n\t\turlValuesSliceKeySuffix = s\n\t\treturn nil\n\t}\n\n\treturn errors.New(\"objx: Invalid URLValuesSliceKeySuffix provided.\")\n}\n\n// JSON converts the contained object to a JSON string\n// representation\nfunc (m Map) JSON() (string, error) {\n\tfor k, v := range m {\n\t\tm[k] = cleanUp(v)\n\t}\n\n\tresult, err := json.Marshal(m)\n\tif err != nil {\n\t\terr = errors.New(\"objx: JSON encode failed with: \" + err.Error())\n\t}\n\treturn string(result), err\n}\n\nfunc cleanUpInterfaceArray(in []interface{}) []interface{} {\n\tresult := make([]interface{}, len(in))\n\tfor i, v := range in {\n\t\tresult[i] = 
cleanUp(v)\n\t}\n\treturn result\n}\n\nfunc cleanUpInterfaceMap(in map[interface{}]interface{}) Map {\n\tresult := Map{}\n\tfor k, v := range in {\n\t\tresult[fmt.Sprintf(\"%v\", k)] = cleanUp(v)\n\t}\n\treturn result\n}\n\nfunc cleanUpStringMap(in map[string]interface{}) Map {\n\tresult := Map{}\n\tfor k, v := range in {\n\t\tresult[k] = cleanUp(v)\n\t}\n\treturn result\n}\n\nfunc cleanUpMSIArray(in []map[string]interface{}) []Map {\n\tresult := make([]Map, len(in))\n\tfor i, v := range in {\n\t\tresult[i] = cleanUpStringMap(v)\n\t}\n\treturn result\n}\n\nfunc cleanUpMapArray(in []Map) []Map {\n\tresult := make([]Map, len(in))\n\tfor i, v := range in {\n\t\tresult[i] = cleanUpStringMap(v)\n\t}\n\treturn result\n}\n\nfunc cleanUp(v interface{}) interface{} {\n\tswitch v := v.(type) {\n\tcase []interface{}:\n\t\treturn cleanUpInterfaceArray(v)\n\tcase []map[string]interface{}:\n\t\treturn cleanUpMSIArray(v)\n\tcase map[interface{}]interface{}:\n\t\treturn cleanUpInterfaceMap(v)\n\tcase Map:\n\t\treturn cleanUpStringMap(v)\n\tcase []Map:\n\t\treturn cleanUpMapArray(v)\n\tdefault:\n\t\treturn v\n\t}\n}\n\n// MustJSON converts the contained object to a JSON string\n// representation and panics if there is an error\nfunc (m Map) MustJSON() string {\n\tresult, err := m.JSON()\n\tif err != nil {\n\t\tpanic(err.Error())\n\t}\n\treturn result\n}\n\n// Base64 converts the contained object to a Base64 string\n// representation of the JSON string representation\nfunc (m Map) Base64() (string, error) {\n\tvar buf bytes.Buffer\n\n\tjsonData, err := m.JSON()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tencoder := base64.NewEncoder(base64.StdEncoding, &buf)\n\t_, _ = encoder.Write([]byte(jsonData))\n\t_ = encoder.Close()\n\n\treturn buf.String(), nil\n}\n\n// MustBase64 converts the contained object to a Base64 string\n// representation of the JSON string representation and panics\n// if there is an error\nfunc (m Map) MustBase64() string {\n\tresult, err := m.Base64()\n\tif 
err != nil {\n\t\tpanic(err.Error())\n\t}\n\treturn result\n}\n\n// SignedBase64 converts the contained object to a Base64 string\n// representation of the JSON string representation and signs it\n// using the provided key.\nfunc (m Map) SignedBase64(key string) (string, error) {\n\tbase64, err := m.Base64()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tsig := HashWithKey(base64, key)\n\treturn base64 + SignatureSeparator + sig, nil\n}\n\n// MustSignedBase64 converts the contained object to a Base64 string\n// representation of the JSON string representation and signs it\n// using the provided key and panics if there is an error\nfunc (m Map) MustSignedBase64(key string) string {\n\tresult, err := m.SignedBase64(key)\n\tif err != nil {\n\t\tpanic(err.Error())\n\t}\n\treturn result\n}\n\n/*\n\tURL Query\n\t------------------------------------------------\n*/\n\n// URLValues creates a url.Values object from an Obj. This\n// function requires that the wrapped object be a map[string]interface{}\nfunc (m Map) URLValues() url.Values {\n\tvals := make(url.Values)\n\n\tm.parseURLValues(m, vals, \"\")\n\n\treturn vals\n}\n\nfunc (m Map) parseURLValues(queryMap Map, vals url.Values, key string) {\n\tuseSliceIndex := false\n\tif urlValuesSliceKeySuffix == \"[i]\" {\n\t\tuseSliceIndex = true\n\t}\n\n\tfor k, v := range queryMap {\n\t\tval := &Value{data: v}\n\t\tswitch {\n\t\tcase val.IsObjxMap():\n\t\t\tif key == \"\" {\n\t\t\t\tm.parseURLValues(val.ObjxMap(), vals, k)\n\t\t\t} else {\n\t\t\t\tm.parseURLValues(val.ObjxMap(), vals, key+\"[\"+k+\"]\")\n\t\t\t}\n\t\tcase val.IsObjxMapSlice():\n\t\t\tsliceKey := k\n\t\t\tif key != \"\" {\n\t\t\t\tsliceKey = key + \"[\" + k + \"]\"\n\t\t\t}\n\n\t\t\tif useSliceIndex {\n\t\t\t\tfor i, sv := range val.MustObjxMapSlice() {\n\t\t\t\t\tsk := sliceKey + \"[\" + strconv.FormatInt(int64(i), 10) + \"]\"\n\t\t\t\t\tm.parseURLValues(sv, vals, sk)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsliceKey = sliceKey + 
urlValuesSliceKeySuffix\n\t\t\t\tfor _, sv := range val.MustObjxMapSlice() {\n\t\t\t\t\tm.parseURLValues(sv, vals, sliceKey)\n\t\t\t\t}\n\t\t\t}\n\t\tcase val.IsMSISlice():\n\t\t\tsliceKey := k\n\t\t\tif key != \"\" {\n\t\t\t\tsliceKey = key + \"[\" + k + \"]\"\n\t\t\t}\n\n\t\t\tif useSliceIndex {\n\t\t\t\tfor i, sv := range val.MustMSISlice() {\n\t\t\t\t\tsk := sliceKey + \"[\" + strconv.FormatInt(int64(i), 10) + \"]\"\n\t\t\t\t\tm.parseURLValues(New(sv), vals, sk)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsliceKey = sliceKey + urlValuesSliceKeySuffix\n\t\t\t\tfor _, sv := range val.MustMSISlice() {\n\t\t\t\t\tm.parseURLValues(New(sv), vals, sliceKey)\n\t\t\t\t}\n\t\t\t}\n\t\tcase val.IsStrSlice(), val.IsBoolSlice(),\n\t\t\tval.IsFloat32Slice(), val.IsFloat64Slice(),\n\t\t\tval.IsIntSlice(), val.IsInt8Slice(), val.IsInt16Slice(), val.IsInt32Slice(), val.IsInt64Slice(),\n\t\t\tval.IsUintSlice(), val.IsUint8Slice(), val.IsUint16Slice(), val.IsUint32Slice(), val.IsUint64Slice():\n\n\t\t\tsliceKey := k\n\t\t\tif key != \"\" {\n\t\t\t\tsliceKey = key + \"[\" + k + \"]\"\n\t\t\t}\n\n\t\t\tif useSliceIndex {\n\t\t\t\tfor i, sv := range val.StringSlice() {\n\t\t\t\t\tsk := sliceKey + \"[\" + strconv.FormatInt(int64(i), 10) + \"]\"\n\t\t\t\t\tvals.Set(sk, sv)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsliceKey = sliceKey + urlValuesSliceKeySuffix\n\t\t\t\tvals[sliceKey] = val.StringSlice()\n\t\t\t}\n\n\t\tdefault:\n\t\t\tif key == \"\" {\n\t\t\t\tvals.Set(k, val.String())\n\t\t\t} else {\n\t\t\t\tvals.Set(key+\"[\"+k+\"]\", val.String())\n\t\t\t}\n\t\t}\n\t}\n}\n\n// URLQuery gets an encoded URL query representing the given\n// Obj. This function requires that the wrapped object be a\n// map[string]interface{}\nfunc (m Map) URLQuery() (string, error) {\n\treturn m.URLValues().Encode(), nil\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/doc.go",
    "content": "/*\nObjx - Go package for dealing with maps, slices, JSON and other data.\n\nOverview\n\nObjx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes\na powerful `Get` method (among others) that allows you to easily and quickly get\naccess to data within the map, without having to worry too much about type assertions,\nmissing data, default values etc.\n\nPattern\n\nObjx uses a preditable pattern to make access data from within `map[string]interface{}` easy.\nCall one of the `objx.` functions to create your `objx.Map` to get going:\n\n    m, err := objx.FromJSON(json)\n\nNOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong,\nthe rest will be optimistic and try to figure things out without panicking.\n\nUse `Get` to access the value you're interested in.  You can use dot and array\nnotation too:\n\n     m.Get(\"places[0].latlng\")\n\nOnce you have sought the `Value` you're interested in, you can use the `Is*` methods to determine its type.\n\n     if m.Get(\"code\").IsStr() { // Your code... }\n\nOr you can just assume the type, and use one of the strong type methods to extract the real value:\n\n   m.Get(\"code\").Int()\n\nIf there's no value there (or if it's the wrong type) then a default value will be returned,\nor you can be explicit about the default value.\n\n     Get(\"code\").Int(-1)\n\nIf you're dealing with a slice of data as a value, Objx provides many useful methods for iterating,\nmanipulating and selecting that data.  
You can find out more by exploring the index below.\n\nReading data\n\nA simple example of how to use Objx:\n\n   // Use MustFromJSON to make an objx.Map from some JSON\n   m := objx.MustFromJSON(`{\"name\": \"Mat\", \"age\": 30}`)\n\n   // Get the details\n   name := m.Get(\"name\").Str()\n   age := m.Get(\"age\").Int()\n\n   // Get their nickname (or use their name if they don't have one)\n   nickname := m.Get(\"nickname\").Str(name)\n\nRanging\n\nSince `objx.Map` is a `map[string]interface{}` you can treat it as such.\nFor example, to `range` the data, do what you would expect:\n\n    m := objx.MustFromJSON(json)\n    for key, value := range m {\n      // Your code...\n    }\n*/\npackage objx\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/map.go",
    "content": "package objx\n\nimport (\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"io/ioutil\"\n\t\"net/url\"\n\t\"strings\"\n)\n\n// MSIConvertable is an interface that defines methods for converting your\n// custom types to a map[string]interface{} representation.\ntype MSIConvertable interface {\n\t// MSI gets a map[string]interface{} (msi) representing the\n\t// object.\n\tMSI() map[string]interface{}\n}\n\n// Map provides extended functionality for working with\n// untyped data, in particular map[string]interface (msi).\ntype Map map[string]interface{}\n\n// Value returns the internal value instance\nfunc (m Map) Value() *Value {\n\treturn &Value{data: m}\n}\n\n// Nil represents a nil Map.\nvar Nil = New(nil)\n\n// New creates a new Map containing the map[string]interface{} in the data argument.\n// If the data argument is not a map[string]interface, New attempts to call the\n// MSI() method on the MSIConvertable interface to create one.\nfunc New(data interface{}) Map {\n\tif _, ok := data.(map[string]interface{}); !ok {\n\t\tif converter, ok := data.(MSIConvertable); ok {\n\t\t\tdata = converter.MSI()\n\t\t} else {\n\t\t\treturn nil\n\t\t}\n\t}\n\treturn Map(data.(map[string]interface{}))\n}\n\n// MSI creates a map[string]interface{} and puts it inside a new Map.\n//\n// The arguments follow a key, value pattern.\n//\n//\n// Returns nil if any key argument is non-string or if there are an odd number of arguments.\n//\n// Example\n//\n// To easily create Maps:\n//\n//     m := objx.MSI(\"name\", \"Mat\", \"age\", 29, \"subobj\", objx.MSI(\"active\", true))\n//\n//     // creates an Map equivalent to\n//     m := objx.Map{\"name\": \"Mat\", \"age\": 29, \"subobj\": objx.Map{\"active\": true}}\nfunc MSI(keyAndValuePairs ...interface{}) Map {\n\tnewMap := Map{}\n\tkeyAndValuePairsLen := len(keyAndValuePairs)\n\tif keyAndValuePairsLen%2 != 0 {\n\t\treturn nil\n\t}\n\tfor i := 0; i < keyAndValuePairsLen; i = i + 2 {\n\t\tkey := 
keyAndValuePairs[i]\n\t\tvalue := keyAndValuePairs[i+1]\n\n\t\t// make sure the key is a string\n\t\tkeyString, keyStringOK := key.(string)\n\t\tif !keyStringOK {\n\t\t\treturn nil\n\t\t}\n\t\tnewMap[keyString] = value\n\t}\n\treturn newMap\n}\n\n// ****** Conversion Constructors\n\n// MustFromJSON creates a new Map containing the data specified in the\n// jsonString.\n//\n// Panics if the JSON is invalid.\nfunc MustFromJSON(jsonString string) Map {\n\to, err := FromJSON(jsonString)\n\tif err != nil {\n\t\tpanic(\"objx: MustFromJSON failed with error: \" + err.Error())\n\t}\n\treturn o\n}\n\n// MustFromJSONSlice creates a new slice of Map containing the data specified in the\n// jsonString. Works with jsons with a top level array\n//\n// Panics if the JSON is invalid.\nfunc MustFromJSONSlice(jsonString string) []Map {\n\tslice, err := FromJSONSlice(jsonString)\n\tif err != nil {\n\t\tpanic(\"objx: MustFromJSONSlice failed with error: \" + err.Error())\n\t}\n\treturn slice\n}\n\n// FromJSON creates a new Map containing the data specified in the\n// jsonString.\n//\n// Returns an error if the JSON is invalid.\nfunc FromJSON(jsonString string) (Map, error) {\n\tvar m Map\n\terr := json.Unmarshal([]byte(jsonString), &m)\n\tif err != nil {\n\t\treturn Nil, err\n\t}\n\treturn m, nil\n}\n\n// FromJSONSlice creates a new slice of Map containing the data specified in the\n// jsonString. 
Works with jsons with a top level array\n//\n// Returns an error if the JSON is invalid.\nfunc FromJSONSlice(jsonString string) ([]Map, error) {\n\tvar slice []Map\n\terr := json.Unmarshal([]byte(jsonString), &slice)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn slice, nil\n}\n\n// FromBase64 creates a new Obj containing the data specified\n// in the Base64 string.\n//\n// The string is an encoded JSON string returned by Base64\nfunc FromBase64(base64String string) (Map, error) {\n\tdecoder := base64.NewDecoder(base64.StdEncoding, strings.NewReader(base64String))\n\tdecoded, err := ioutil.ReadAll(decoder)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn FromJSON(string(decoded))\n}\n\n// MustFromBase64 creates a new Obj containing the data specified\n// in the Base64 string and panics if there is an error.\n//\n// The string is an encoded JSON string returned by Base64\nfunc MustFromBase64(base64String string) Map {\n\tresult, err := FromBase64(base64String)\n\tif err != nil {\n\t\tpanic(\"objx: MustFromBase64 failed with error: \" + err.Error())\n\t}\n\treturn result\n}\n\n// FromSignedBase64 creates a new Obj containing the data specified\n// in the Base64 string.\n//\n// The string is an encoded JSON string returned by SignedBase64\nfunc FromSignedBase64(base64String, key string) (Map, error) {\n\tparts := strings.Split(base64String, SignatureSeparator)\n\tif len(parts) != 2 {\n\t\treturn nil, errors.New(\"objx: Signed base64 string is malformed\")\n\t}\n\n\tsig := HashWithKey(parts[0], key)\n\tif parts[1] != sig {\n\t\treturn nil, errors.New(\"objx: Signature for base64 data does not match\")\n\t}\n\treturn FromBase64(parts[0])\n}\n\n// MustFromSignedBase64 creates a new Obj containing the data specified\n// in the Base64 string and panics if there is an error.\n//\n// The string is an encoded JSON string returned by Base64\nfunc MustFromSignedBase64(base64String, key string) Map {\n\tresult, err := FromSignedBase64(base64String, key)\n\tif err 
!= nil {\n\t\tpanic(\"objx: MustFromSignedBase64 failed with error: \" + err.Error())\n\t}\n\treturn result\n}\n\n// FromURLQuery generates a new Obj by parsing the specified\n// query.\n//\n// For queries with multiple values, the first value is selected.\nfunc FromURLQuery(query string) (Map, error) {\n\tvals, err := url.ParseQuery(query)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tm := Map{}\n\tfor k, vals := range vals {\n\t\tm[k] = vals[0]\n\t}\n\treturn m, nil\n}\n\n// MustFromURLQuery generates a new Obj by parsing the specified\n// query.\n//\n// For queries with multiple values, the first value is selected.\n//\n// Panics if it encounters an error\nfunc MustFromURLQuery(query string) Map {\n\to, err := FromURLQuery(query)\n\tif err != nil {\n\t\tpanic(\"objx: MustFromURLQuery failed with error: \" + err.Error())\n\t}\n\treturn o\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/mutations.go",
    "content": "package objx\n\n// Exclude returns a new Map with the keys in the specified []string\n// excluded.\nfunc (m Map) Exclude(exclude []string) Map {\n\texcluded := make(Map)\n\tfor k, v := range m {\n\t\tif !contains(exclude, k) {\n\t\t\texcluded[k] = v\n\t\t}\n\t}\n\treturn excluded\n}\n\n// Copy creates a shallow copy of the Obj.\nfunc (m Map) Copy() Map {\n\tcopied := Map{}\n\tfor k, v := range m {\n\t\tcopied[k] = v\n\t}\n\treturn copied\n}\n\n// Merge blends the specified map with a copy of this map and returns the result.\n//\n// Keys that appear in both will be selected from the specified map.\n// This method requires that the wrapped object be a map[string]interface{}\nfunc (m Map) Merge(merge Map) Map {\n\treturn m.Copy().MergeHere(merge)\n}\n\n// MergeHere blends the specified map with this map and returns the current map.\n//\n// Keys that appear in both will be selected from the specified map. The original map\n// will be modified. This method requires that\n// the wrapped object be a map[string]interface{}\nfunc (m Map) MergeHere(merge Map) Map {\n\tfor k, v := range merge {\n\t\tm[k] = v\n\t}\n\treturn m\n}\n\n// Transform builds a new Obj giving the transformer a chance\n// to change the keys and values as it goes. 
This method requires that\n// the wrapped object be a map[string]interface{}\nfunc (m Map) Transform(transformer func(key string, value interface{}) (string, interface{})) Map {\n\tnewMap := Map{}\n\tfor k, v := range m {\n\t\tmodifiedKey, modifiedVal := transformer(k, v)\n\t\tnewMap[modifiedKey] = modifiedVal\n\t}\n\treturn newMap\n}\n\n// TransformKeys builds a new map using the specified key mapping.\n//\n// Unspecified keys will be unaltered.\n// This method requires that the wrapped object be a map[string]interface{}\nfunc (m Map) TransformKeys(mapping map[string]string) Map {\n\treturn m.Transform(func(key string, value interface{}) (string, interface{}) {\n\t\tif newKey, ok := mapping[key]; ok {\n\t\t\treturn newKey, value\n\t\t}\n\t\treturn key, value\n\t})\n}\n\n// Checks if a string slice contains a string\nfunc contains(s []string, e string) bool {\n\tfor _, a := range s {\n\t\tif a == e {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/security.go",
    "content": "package objx\n\nimport (\n\t\"crypto/sha1\"\n\t\"encoding/hex\"\n)\n\n// HashWithKey hashes the specified string using the security key\nfunc HashWithKey(data, key string) string {\n\td := sha1.Sum([]byte(data + \":\" + key))\n\treturn hex.EncodeToString(d[:])\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/tests.go",
    "content": "package objx\n\n// Has gets whether there is something at the specified selector\n// or not.\n//\n// If m is nil, Has will always return false.\nfunc (m Map) Has(selector string) bool {\n\tif m == nil {\n\t\treturn false\n\t}\n\treturn !m.Get(selector).IsNil()\n}\n\n// IsNil gets whether the data is nil or not.\nfunc (v *Value) IsNil() bool {\n\treturn v == nil || v.data == nil\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/type_specific.go",
    "content": "package objx\n\n/*\n   MSI (map[string]interface{} and []map[string]interface{})\n*/\n\n// MSI gets the value as a map[string]interface{}, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) MSI(optionalDefault ...map[string]interface{}) map[string]interface{} {\n\tif s, ok := v.data.(map[string]interface{}); ok {\n\t\treturn s\n\t}\n\tif s, ok := v.data.(Map); ok {\n\t\treturn map[string]interface{}(s)\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustMSI gets the value as a map[string]interface{}.\n//\n// Panics if the object is not a map[string]interface{}.\nfunc (v *Value) MustMSI() map[string]interface{} {\n\tif s, ok := v.data.(Map); ok {\n\t\treturn map[string]interface{}(s)\n\t}\n\treturn v.data.(map[string]interface{})\n}\n\n// MSISlice gets the value as a []map[string]interface{}, returns the optionalDefault\n// value or nil if the value is not a []map[string]interface{}.\nfunc (v *Value) MSISlice(optionalDefault ...[]map[string]interface{}) []map[string]interface{} {\n\tif s, ok := v.data.([]map[string]interface{}); ok {\n\t\treturn s\n\t}\n\n\ts := v.ObjxMapSlice()\n\tif s == nil {\n\t\tif len(optionalDefault) == 1 {\n\t\t\treturn optionalDefault[0]\n\t\t}\n\t\treturn nil\n\t}\n\n\tresult := make([]map[string]interface{}, len(s))\n\tfor i := range s {\n\t\tresult[i] = s[i].Value().MSI()\n\t}\n\treturn result\n}\n\n// MustMSISlice gets the value as a []map[string]interface{}.\n//\n// Panics if the object is not a []map[string]interface{}.\nfunc (v *Value) MustMSISlice() []map[string]interface{} {\n\tif s := v.MSISlice(); s != nil {\n\t\treturn s\n\t}\n\n\treturn v.data.([]map[string]interface{})\n}\n\n// IsMSI gets whether the object contained is a map[string]interface{} or not.\nfunc (v *Value) IsMSI() bool {\n\t_, ok := v.data.(map[string]interface{})\n\tif !ok {\n\t\t_, ok = v.data.(Map)\n\t}\n\treturn ok\n}\n\n// 
IsMSISlice gets whether the object contained is a []map[string]interface{} or not.\nfunc (v *Value) IsMSISlice() bool {\n\t_, ok := v.data.([]map[string]interface{})\n\tif !ok {\n\t\t_, ok = v.data.([]Map)\n\t\tif !ok {\n\t\t\ts, ok := v.data.([]interface{})\n\t\t\tif ok {\n\t\t\t\tfor i := range s {\n\t\t\t\t\tswitch s[i].(type) {\n\t\t\t\t\tcase Map:\n\t\t\t\t\tcase map[string]interface{}:\n\t\t\t\t\tdefault:\n\t\t\t\t\t\treturn false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\treturn ok\n}\n\n// EachMSI calls the specified callback for each object\n// in the []map[string]interface{}.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachMSI(callback func(int, map[string]interface{}) bool) *Value {\n\tfor index, val := range v.MustMSISlice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereMSI uses the specified decider function to select items\n// from the []map[string]interface{}.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereMSI(decider func(int, map[string]interface{}) bool) *Value {\n\tvar selected []map[string]interface{}\n\tv.EachMSI(func(index int, val map[string]interface{}) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupMSI uses the specified grouper function to group the items\n// keyed by the return of the grouper.  
The object contained in the\n// result will contain a map[string][]map[string]interface{}.\nfunc (v *Value) GroupMSI(grouper func(int, map[string]interface{}) string) *Value {\n\tgroups := make(map[string][]map[string]interface{})\n\tv.EachMSI(func(index int, val map[string]interface{}) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]map[string]interface{}, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceMSI uses the specified function to replace each map[string]interface{}s\n// by iterating each item.  The data in the returned result will be a\n// []map[string]interface{} containing the replaced items.\nfunc (v *Value) ReplaceMSI(replacer func(int, map[string]interface{}) map[string]interface{}) *Value {\n\tarr := v.MustMSISlice()\n\treplaced := make([]map[string]interface{}, len(arr))\n\tv.EachMSI(func(index int, val map[string]interface{}) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectMSI uses the specified collector function to collect a value\n// for each of the map[string]interface{}s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectMSI(collector func(int, map[string]interface{}) interface{}) *Value {\n\tarr := v.MustMSISlice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachMSI(func(index int, val map[string]interface{}) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   ObjxMap ((Map) and [](Map))\n*/\n\n// ObjxMap gets the value as a (Map), returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) ObjxMap(optionalDefault ...(Map)) Map {\n\tif s, ok := v.data.((Map)); ok {\n\t\treturn s\n\t}\n\tif s, ok := v.data.(map[string]interface{}); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn New(nil)\n}\n\n// MustObjxMap gets the value as a (Map).\n//\n// Panics if the object is not a (Map).\nfunc (v *Value) MustObjxMap() Map {\n\tif s, ok := v.data.(map[string]interface{}); ok {\n\t\treturn s\n\t}\n\treturn v.data.((Map))\n}\n\n// ObjxMapSlice gets the value as a [](Map), returns the optionalDefault\n// value or nil if the value is not a [](Map).\nfunc (v *Value) ObjxMapSlice(optionalDefault ...[](Map)) [](Map) {\n\tif s, ok := v.data.([]Map); ok {\n\t\treturn s\n\t}\n\n\tif s, ok := v.data.([]map[string]interface{}); ok {\n\t\tresult := make([]Map, len(s))\n\t\tfor i := range s {\n\t\t\tresult[i] = s[i]\n\t\t}\n\t\treturn result\n\t}\n\n\ts, ok := v.data.([]interface{})\n\tif !ok {\n\t\tif len(optionalDefault) == 1 {\n\t\t\treturn optionalDefault[0]\n\t\t}\n\t\treturn nil\n\t}\n\n\tresult := make([]Map, len(s))\n\tfor i := range s {\n\t\tswitch s[i].(type) {\n\t\tcase Map:\n\t\t\tresult[i] = s[i].(Map)\n\t\tcase map[string]interface{}:\n\t\t\tresult[i] = New(s[i])\n\t\tdefault:\n\t\t\treturn nil\n\t\t}\n\t}\n\treturn result\n}\n\n// MustObjxMapSlice gets the value as a [](Map).\n//\n// Panics if the object is not a [](Map).\nfunc (v 
*Value) MustObjxMapSlice() [](Map) {\n\tif s := v.ObjxMapSlice(); s != nil {\n\t\treturn s\n\t}\n\treturn v.data.([](Map))\n}\n\n// IsObjxMap gets whether the object contained is a (Map) or not.\nfunc (v *Value) IsObjxMap() bool {\n\t_, ok := v.data.((Map))\n\tif !ok {\n\t\t_, ok = v.data.(map[string]interface{})\n\t}\n\treturn ok\n}\n\n// IsObjxMapSlice gets whether the object contained is a [](Map) or not.\nfunc (v *Value) IsObjxMapSlice() bool {\n\t_, ok := v.data.([](Map))\n\tif !ok {\n\t\t_, ok = v.data.([]map[string]interface{})\n\t\tif !ok {\n\t\t\ts, ok := v.data.([]interface{})\n\t\t\tif ok {\n\t\t\t\tfor i := range s {\n\t\t\t\t\tswitch s[i].(type) {\n\t\t\t\t\tcase Map:\n\t\t\t\t\tcase map[string]interface{}:\n\t\t\t\t\tdefault:\n\t\t\t\t\t\treturn false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\n\treturn ok\n}\n\n// EachObjxMap calls the specified callback for each object\n// in the [](Map).\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachObjxMap(callback func(int, Map) bool) *Value {\n\tfor index, val := range v.MustObjxMapSlice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereObjxMap uses the specified decider function to select items\n// from the [](Map).  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereObjxMap(decider func(int, Map) bool) *Value {\n\tvar selected [](Map)\n\tv.EachObjxMap(func(index int, val Map) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupObjxMap uses the specified grouper function to group the items\n// keyed by the return of the grouper.  
The object contained in the\n// result will contain a map[string][](Map).\nfunc (v *Value) GroupObjxMap(grouper func(int, Map) string) *Value {\n\tgroups := make(map[string][](Map))\n\tv.EachObjxMap(func(index int, val Map) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([](Map), 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceObjxMap uses the specified function to replace each (Map)s\n// by iterating each item.  The data in the returned result will be a\n// [](Map) containing the replaced items.\nfunc (v *Value) ReplaceObjxMap(replacer func(int, Map) Map) *Value {\n\tarr := v.MustObjxMapSlice()\n\treplaced := make([](Map), len(arr))\n\tv.EachObjxMap(func(index int, val Map) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectObjxMap uses the specified collector function to collect a value\n// for each of the (Map)s in the slice.  The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectObjxMap(collector func(int, Map) interface{}) *Value {\n\tarr := v.MustObjxMapSlice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachObjxMap(func(index int, val Map) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/type_specific_codegen.go",
    "content": "package objx\n\n/*\n   Inter (interface{} and []interface{})\n*/\n\n// Inter gets the value as a interface{}, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Inter(optionalDefault ...interface{}) interface{} {\n\tif s, ok := v.data.(interface{}); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustInter gets the value as a interface{}.\n//\n// Panics if the object is not a interface{}.\nfunc (v *Value) MustInter() interface{} {\n\treturn v.data.(interface{})\n}\n\n// InterSlice gets the value as a []interface{}, returns the optionalDefault\n// value or nil if the value is not a []interface{}.\nfunc (v *Value) InterSlice(optionalDefault ...[]interface{}) []interface{} {\n\tif s, ok := v.data.([]interface{}); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustInterSlice gets the value as a []interface{}.\n//\n// Panics if the object is not a []interface{}.\nfunc (v *Value) MustInterSlice() []interface{} {\n\treturn v.data.([]interface{})\n}\n\n// IsInter gets whether the object contained is a interface{} or not.\nfunc (v *Value) IsInter() bool {\n\t_, ok := v.data.(interface{})\n\treturn ok\n}\n\n// IsInterSlice gets whether the object contained is a []interface{} or not.\nfunc (v *Value) IsInterSlice() bool {\n\t_, ok := v.data.([]interface{})\n\treturn ok\n}\n\n// EachInter calls the specified callback for each object\n// in the []interface{}.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachInter(callback func(int, interface{}) bool) *Value {\n\tfor index, val := range v.MustInterSlice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereInter uses the specified decider function to select items\n// from the []interface{}.  
The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereInter(decider func(int, interface{}) bool) *Value {\n\tvar selected []interface{}\n\tv.EachInter(func(index int, val interface{}) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupInter uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]interface{}.\nfunc (v *Value) GroupInter(grouper func(int, interface{}) string) *Value {\n\tgroups := make(map[string][]interface{})\n\tv.EachInter(func(index int, val interface{}) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]interface{}, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceInter uses the specified function to replace each interface{}s\n// by iterating each item.  The data in the returned result will be a\n// []interface{} containing the replaced items.\nfunc (v *Value) ReplaceInter(replacer func(int, interface{}) interface{}) *Value {\n\tarr := v.MustInterSlice()\n\treplaced := make([]interface{}, len(arr))\n\tv.EachInter(func(index int, val interface{}) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectInter uses the specified collector function to collect a value\n// for each of the interface{}s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectInter(collector func(int, interface{}) interface{}) *Value {\n\tarr := v.MustInterSlice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachInter(func(index int, val interface{}) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Bool (bool and []bool)\n*/\n\n// Bool gets the value as a bool, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Bool(optionalDefault ...bool) bool {\n\tif s, ok := v.data.(bool); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn false\n}\n\n// MustBool gets the value as a bool.\n//\n// Panics if the object is not a bool.\nfunc (v *Value) MustBool() bool {\n\treturn v.data.(bool)\n}\n\n// BoolSlice gets the value as a []bool, returns the optionalDefault\n// value or nil if the value is not a []bool.\nfunc (v *Value) BoolSlice(optionalDefault ...[]bool) []bool {\n\tif s, ok := v.data.([]bool); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustBoolSlice gets the value as a []bool.\n//\n// Panics if the object is not a []bool.\nfunc (v *Value) MustBoolSlice() []bool {\n\treturn v.data.([]bool)\n}\n\n// IsBool gets whether the object contained is a bool or not.\nfunc (v *Value) IsBool() bool {\n\t_, ok := v.data.(bool)\n\treturn ok\n}\n\n// IsBoolSlice gets whether the object contained is a []bool or not.\nfunc (v *Value) IsBoolSlice() bool {\n\t_, ok := v.data.([]bool)\n\treturn ok\n}\n\n// EachBool calls the specified callback for each object\n// in the []bool.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachBool(callback func(int, bool) bool) *Value {\n\tfor index, val := range v.MustBoolSlice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn 
v\n}\n\n// WhereBool uses the specified decider function to select items\n// from the []bool.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereBool(decider func(int, bool) bool) *Value {\n\tvar selected []bool\n\tv.EachBool(func(index int, val bool) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupBool uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]bool.\nfunc (v *Value) GroupBool(grouper func(int, bool) string) *Value {\n\tgroups := make(map[string][]bool)\n\tv.EachBool(func(index int, val bool) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]bool, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceBool uses the specified function to replace each bools\n// by iterating each item.  The data in the returned result will be a\n// []bool containing the replaced items.\nfunc (v *Value) ReplaceBool(replacer func(int, bool) bool) *Value {\n\tarr := v.MustBoolSlice()\n\treplaced := make([]bool, len(arr))\n\tv.EachBool(func(index int, val bool) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectBool uses the specified collector function to collect a value\n// for each of the bools in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectBool(collector func(int, bool) interface{}) *Value {\n\tarr := v.MustBoolSlice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachBool(func(index int, val bool) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Str (string and []string)\n*/\n\n// Str gets the value as a string, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Str(optionalDefault ...string) string {\n\tif s, ok := v.data.(string); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn \"\"\n}\n\n// MustStr gets the value as a string.\n//\n// Panics if the object is not a string.\nfunc (v *Value) MustStr() string {\n\treturn v.data.(string)\n}\n\n// StrSlice gets the value as a []string, returns the optionalDefault\n// value or nil if the value is not a []string.\nfunc (v *Value) StrSlice(optionalDefault ...[]string) []string {\n\tif s, ok := v.data.([]string); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustStrSlice gets the value as a []string.\n//\n// Panics if the object is not a []string.\nfunc (v *Value) MustStrSlice() []string {\n\treturn v.data.([]string)\n}\n\n// IsStr gets whether the object contained is a string or not.\nfunc (v *Value) IsStr() bool {\n\t_, ok := v.data.(string)\n\treturn ok\n}\n\n// IsStrSlice gets whether the object contained is a []string or not.\nfunc (v *Value) IsStrSlice() bool {\n\t_, ok := v.data.([]string)\n\treturn ok\n}\n\n// EachStr calls the specified callback for each object\n// in the []string.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachStr(callback func(int, string) bool) *Value {\n\tfor index, val := range v.MustStrSlice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon 
{\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereStr uses the specified decider function to select items\n// from the []string.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereStr(decider func(int, string) bool) *Value {\n\tvar selected []string\n\tv.EachStr(func(index int, val string) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupStr uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]string.\nfunc (v *Value) GroupStr(grouper func(int, string) string) *Value {\n\tgroups := make(map[string][]string)\n\tv.EachStr(func(index int, val string) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]string, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceStr uses the specified function to replace each string\n// by iterating over each item.  The data in the returned result will be a\n// []string containing the replaced items.\nfunc (v *Value) ReplaceStr(replacer func(int, string) string) *Value {\n\tarr := v.MustStrSlice()\n\treplaced := make([]string, len(arr))\n\tv.EachStr(func(index int, val string) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectStr uses the specified collector function to collect a value\n// for each of the strings in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectStr(collector func(int, string) interface{}) *Value {\n\tarr := v.MustStrSlice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachStr(func(index int, val string) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Int (int and []int)\n*/\n\n// Int gets the value as an int, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\n// A float64 holding a whole number is converted to an int.\nfunc (v *Value) Int(optionalDefault ...int) int {\n\tif s, ok := v.data.(int); ok {\n\t\treturn s\n\t}\n\tif s, ok := v.data.(float64); ok {\n\t\tif float64(int(s)) == s {\n\t\t\treturn int(s)\n\t\t}\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustInt gets the value as an int.\n//\n// Panics if the object is not an int; a float64 holding a whole\n// number is converted rather than panicking.\nfunc (v *Value) MustInt() int {\n\tif s, ok := v.data.(float64); ok {\n\t\tif float64(int(s)) == s {\n\t\t\treturn int(s)\n\t\t}\n\t}\n\treturn v.data.(int)\n}\n\n// IntSlice gets the value as a []int, returns the optionalDefault\n// value or nil if the value is not a []int.\nfunc (v *Value) IntSlice(optionalDefault ...[]int) []int {\n\tif s, ok := v.data.([]int); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustIntSlice gets the value as a []int.\n//\n// Panics if the object is not a []int.\nfunc (v *Value) MustIntSlice() []int {\n\treturn v.data.([]int)\n}\n\n// IsInt gets whether the object contained is an int or not.\nfunc (v *Value) IsInt() bool {\n\t_, ok := v.data.(int)\n\treturn ok\n}\n\n// IsIntSlice gets whether the object contained is a []int or not.\nfunc (v *Value) IsIntSlice() bool {\n\t_, ok := v.data.([]int)\n\treturn ok\n}\n\n// EachInt calls the specified callback for each object\n// in the []int.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachInt(callback func(int, int) bool) 
*Value {\n\tfor index, val := range v.MustIntSlice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereInt uses the specified decider function to select items\n// from the []int.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereInt(decider func(int, int) bool) *Value {\n\tvar selected []int\n\tv.EachInt(func(index int, val int) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupInt uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]int.\nfunc (v *Value) GroupInt(grouper func(int, int) string) *Value {\n\tgroups := make(map[string][]int)\n\tv.EachInt(func(index int, val int) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]int, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceInt uses the specified function to replace each int\n// by iterating over each item.  The data in the returned result will be a\n// []int containing the replaced items.\nfunc (v *Value) ReplaceInt(replacer func(int, int) int) *Value {\n\tarr := v.MustIntSlice()\n\treplaced := make([]int, len(arr))\n\tv.EachInt(func(index int, val int) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectInt uses the specified collector function to collect a value\n// for each of the ints in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectInt(collector func(int, int) interface{}) *Value {\n\tarr := v.MustIntSlice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachInt(func(index int, val int) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Int8 (int8 and []int8)\n*/\n\n// Int8 gets the value as an int8, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Int8(optionalDefault ...int8) int8 {\n\tif s, ok := v.data.(int8); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustInt8 gets the value as an int8.\n//\n// Panics if the object is not an int8.\nfunc (v *Value) MustInt8() int8 {\n\treturn v.data.(int8)\n}\n\n// Int8Slice gets the value as a []int8, returns the optionalDefault\n// value or nil if the value is not a []int8.\nfunc (v *Value) Int8Slice(optionalDefault ...[]int8) []int8 {\n\tif s, ok := v.data.([]int8); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustInt8Slice gets the value as a []int8.\n//\n// Panics if the object is not a []int8.\nfunc (v *Value) MustInt8Slice() []int8 {\n\treturn v.data.([]int8)\n}\n\n// IsInt8 gets whether the object contained is an int8 or not.\nfunc (v *Value) IsInt8() bool {\n\t_, ok := v.data.(int8)\n\treturn ok\n}\n\n// IsInt8Slice gets whether the object contained is a []int8 or not.\nfunc (v *Value) IsInt8Slice() bool {\n\t_, ok := v.data.([]int8)\n\treturn ok\n}\n\n// EachInt8 calls the specified callback for each object\n// in the []int8.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachInt8(callback func(int, int8) bool) *Value {\n\tfor index, val := range v.MustInt8Slice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereInt8 uses 
the specified decider function to select items\n// from the []int8.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereInt8(decider func(int, int8) bool) *Value {\n\tvar selected []int8\n\tv.EachInt8(func(index int, val int8) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupInt8 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]int8.\nfunc (v *Value) GroupInt8(grouper func(int, int8) string) *Value {\n\tgroups := make(map[string][]int8)\n\tv.EachInt8(func(index int, val int8) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]int8, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceInt8 uses the specified function to replace each int8\n// by iterating over each item.  The data in the returned result will be a\n// []int8 containing the replaced items.\nfunc (v *Value) ReplaceInt8(replacer func(int, int8) int8) *Value {\n\tarr := v.MustInt8Slice()\n\treplaced := make([]int8, len(arr))\n\tv.EachInt8(func(index int, val int8) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectInt8 uses the specified collector function to collect a value\n// for each of the int8s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectInt8(collector func(int, int8) interface{}) *Value {\n\tarr := v.MustInt8Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachInt8(func(index int, val int8) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Int16 (int16 and []int16)\n*/\n\n// Int16 gets the value as an int16, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Int16(optionalDefault ...int16) int16 {\n\tif s, ok := v.data.(int16); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustInt16 gets the value as an int16.\n//\n// Panics if the object is not an int16.\nfunc (v *Value) MustInt16() int16 {\n\treturn v.data.(int16)\n}\n\n// Int16Slice gets the value as a []int16, returns the optionalDefault\n// value or nil if the value is not a []int16.\nfunc (v *Value) Int16Slice(optionalDefault ...[]int16) []int16 {\n\tif s, ok := v.data.([]int16); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustInt16Slice gets the value as a []int16.\n//\n// Panics if the object is not a []int16.\nfunc (v *Value) MustInt16Slice() []int16 {\n\treturn v.data.([]int16)\n}\n\n// IsInt16 gets whether the object contained is an int16 or not.\nfunc (v *Value) IsInt16() bool {\n\t_, ok := v.data.(int16)\n\treturn ok\n}\n\n// IsInt16Slice gets whether the object contained is a []int16 or not.\nfunc (v *Value) IsInt16Slice() bool {\n\t_, ok := v.data.([]int16)\n\treturn ok\n}\n\n// EachInt16 calls the specified callback for each object\n// in the []int16.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachInt16(callback func(int, int16) bool) *Value {\n\tfor index, val := range v.MustInt16Slice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon 
{\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereInt16 uses the specified decider function to select items\n// from the []int16.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereInt16(decider func(int, int16) bool) *Value {\n\tvar selected []int16\n\tv.EachInt16(func(index int, val int16) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupInt16 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]int16.\nfunc (v *Value) GroupInt16(grouper func(int, int16) string) *Value {\n\tgroups := make(map[string][]int16)\n\tv.EachInt16(func(index int, val int16) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]int16, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceInt16 uses the specified function to replace each int16\n// by iterating over each item.  The data in the returned result will be a\n// []int16 containing the replaced items.\nfunc (v *Value) ReplaceInt16(replacer func(int, int16) int16) *Value {\n\tarr := v.MustInt16Slice()\n\treplaced := make([]int16, len(arr))\n\tv.EachInt16(func(index int, val int16) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectInt16 uses the specified collector function to collect a value\n// for each of the int16s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectInt16(collector func(int, int16) interface{}) *Value {\n\tarr := v.MustInt16Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachInt16(func(index int, val int16) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Int32 (int32 and []int32)\n*/\n\n// Int32 gets the value as an int32, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Int32(optionalDefault ...int32) int32 {\n\tif s, ok := v.data.(int32); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustInt32 gets the value as an int32.\n//\n// Panics if the object is not an int32.\nfunc (v *Value) MustInt32() int32 {\n\treturn v.data.(int32)\n}\n\n// Int32Slice gets the value as a []int32, returns the optionalDefault\n// value or nil if the value is not a []int32.\nfunc (v *Value) Int32Slice(optionalDefault ...[]int32) []int32 {\n\tif s, ok := v.data.([]int32); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustInt32Slice gets the value as a []int32.\n//\n// Panics if the object is not a []int32.\nfunc (v *Value) MustInt32Slice() []int32 {\n\treturn v.data.([]int32)\n}\n\n// IsInt32 gets whether the object contained is an int32 or not.\nfunc (v *Value) IsInt32() bool {\n\t_, ok := v.data.(int32)\n\treturn ok\n}\n\n// IsInt32Slice gets whether the object contained is a []int32 or not.\nfunc (v *Value) IsInt32Slice() bool {\n\t_, ok := v.data.([]int32)\n\treturn ok\n}\n\n// EachInt32 calls the specified callback for each object\n// in the []int32.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachInt32(callback func(int, int32) bool) *Value {\n\tfor index, val := range v.MustInt32Slice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon 
{\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereInt32 uses the specified decider function to select items\n// from the []int32.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereInt32(decider func(int, int32) bool) *Value {\n\tvar selected []int32\n\tv.EachInt32(func(index int, val int32) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupInt32 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]int32.\nfunc (v *Value) GroupInt32(grouper func(int, int32) string) *Value {\n\tgroups := make(map[string][]int32)\n\tv.EachInt32(func(index int, val int32) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]int32, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceInt32 uses the specified function to replace each int32\n// by iterating over each item.  The data in the returned result will be a\n// []int32 containing the replaced items.\nfunc (v *Value) ReplaceInt32(replacer func(int, int32) int32) *Value {\n\tarr := v.MustInt32Slice()\n\treplaced := make([]int32, len(arr))\n\tv.EachInt32(func(index int, val int32) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectInt32 uses the specified collector function to collect a value\n// for each of the int32s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectInt32(collector func(int, int32) interface{}) *Value {\n\tarr := v.MustInt32Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachInt32(func(index int, val int32) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Int64 (int64 and []int64)\n*/\n\n// Int64 gets the value as an int64, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Int64(optionalDefault ...int64) int64 {\n\tif s, ok := v.data.(int64); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustInt64 gets the value as an int64.\n//\n// Panics if the object is not an int64.\nfunc (v *Value) MustInt64() int64 {\n\treturn v.data.(int64)\n}\n\n// Int64Slice gets the value as a []int64, returns the optionalDefault\n// value or nil if the value is not a []int64.\nfunc (v *Value) Int64Slice(optionalDefault ...[]int64) []int64 {\n\tif s, ok := v.data.([]int64); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustInt64Slice gets the value as a []int64.\n//\n// Panics if the object is not a []int64.\nfunc (v *Value) MustInt64Slice() []int64 {\n\treturn v.data.([]int64)\n}\n\n// IsInt64 gets whether the object contained is an int64 or not.\nfunc (v *Value) IsInt64() bool {\n\t_, ok := v.data.(int64)\n\treturn ok\n}\n\n// IsInt64Slice gets whether the object contained is a []int64 or not.\nfunc (v *Value) IsInt64Slice() bool {\n\t_, ok := v.data.([]int64)\n\treturn ok\n}\n\n// EachInt64 calls the specified callback for each object\n// in the []int64.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachInt64(callback func(int, int64) bool) *Value {\n\tfor index, val := range v.MustInt64Slice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon 
{\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereInt64 uses the specified decider function to select items\n// from the []int64.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereInt64(decider func(int, int64) bool) *Value {\n\tvar selected []int64\n\tv.EachInt64(func(index int, val int64) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupInt64 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]int64.\nfunc (v *Value) GroupInt64(grouper func(int, int64) string) *Value {\n\tgroups := make(map[string][]int64)\n\tv.EachInt64(func(index int, val int64) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]int64, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceInt64 uses the specified function to replace each int64\n// by iterating over each item.  The data in the returned result will be a\n// []int64 containing the replaced items.\nfunc (v *Value) ReplaceInt64(replacer func(int, int64) int64) *Value {\n\tarr := v.MustInt64Slice()\n\treplaced := make([]int64, len(arr))\n\tv.EachInt64(func(index int, val int64) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectInt64 uses the specified collector function to collect a value\n// for each of the int64s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectInt64(collector func(int, int64) interface{}) *Value {\n\tarr := v.MustInt64Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachInt64(func(index int, val int64) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Uint (uint and []uint)\n*/\n\n// Uint gets the value as a uint, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Uint(optionalDefault ...uint) uint {\n\tif s, ok := v.data.(uint); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustUint gets the value as a uint.\n//\n// Panics if the object is not a uint.\nfunc (v *Value) MustUint() uint {\n\treturn v.data.(uint)\n}\n\n// UintSlice gets the value as a []uint, returns the optionalDefault\n// value or nil if the value is not a []uint.\nfunc (v *Value) UintSlice(optionalDefault ...[]uint) []uint {\n\tif s, ok := v.data.([]uint); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustUintSlice gets the value as a []uint.\n//\n// Panics if the object is not a []uint.\nfunc (v *Value) MustUintSlice() []uint {\n\treturn v.data.([]uint)\n}\n\n// IsUint gets whether the object contained is a uint or not.\nfunc (v *Value) IsUint() bool {\n\t_, ok := v.data.(uint)\n\treturn ok\n}\n\n// IsUintSlice gets whether the object contained is a []uint or not.\nfunc (v *Value) IsUintSlice() bool {\n\t_, ok := v.data.([]uint)\n\treturn ok\n}\n\n// EachUint calls the specified callback for each object\n// in the []uint.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachUint(callback func(int, uint) bool) *Value {\n\tfor index, val := range v.MustUintSlice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// 
WhereUint uses the specified decider function to select items\n// from the []uint.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereUint(decider func(int, uint) bool) *Value {\n\tvar selected []uint\n\tv.EachUint(func(index int, val uint) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupUint uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]uint.\nfunc (v *Value) GroupUint(grouper func(int, uint) string) *Value {\n\tgroups := make(map[string][]uint)\n\tv.EachUint(func(index int, val uint) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]uint, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceUint uses the specified function to replace each uint\n// by iterating over each item.  The data in the returned result will be a\n// []uint containing the replaced items.\nfunc (v *Value) ReplaceUint(replacer func(int, uint) uint) *Value {\n\tarr := v.MustUintSlice()\n\treplaced := make([]uint, len(arr))\n\tv.EachUint(func(index int, val uint) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectUint uses the specified collector function to collect a value\n// for each of the uints in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectUint(collector func(int, uint) interface{}) *Value {\n\tarr := v.MustUintSlice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachUint(func(index int, val uint) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Uint8 (uint8 and []uint8)\n*/\n\n// Uint8 gets the value as a uint8, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Uint8(optionalDefault ...uint8) uint8 {\n\tif s, ok := v.data.(uint8); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustUint8 gets the value as a uint8.\n//\n// Panics if the object is not a uint8.\nfunc (v *Value) MustUint8() uint8 {\n\treturn v.data.(uint8)\n}\n\n// Uint8Slice gets the value as a []uint8, returns the optionalDefault\n// value or nil if the value is not a []uint8.\nfunc (v *Value) Uint8Slice(optionalDefault ...[]uint8) []uint8 {\n\tif s, ok := v.data.([]uint8); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustUint8Slice gets the value as a []uint8.\n//\n// Panics if the object is not a []uint8.\nfunc (v *Value) MustUint8Slice() []uint8 {\n\treturn v.data.([]uint8)\n}\n\n// IsUint8 gets whether the object contained is a uint8 or not.\nfunc (v *Value) IsUint8() bool {\n\t_, ok := v.data.(uint8)\n\treturn ok\n}\n\n// IsUint8Slice gets whether the object contained is a []uint8 or not.\nfunc (v *Value) IsUint8Slice() bool {\n\t_, ok := v.data.([]uint8)\n\treturn ok\n}\n\n// EachUint8 calls the specified callback for each object\n// in the []uint8.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachUint8(callback func(int, uint8) bool) *Value {\n\tfor index, val := range v.MustUint8Slice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon 
{\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereUint8 uses the specified decider function to select items\n// from the []uint8.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereUint8(decider func(int, uint8) bool) *Value {\n\tvar selected []uint8\n\tv.EachUint8(func(index int, val uint8) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupUint8 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]uint8.\nfunc (v *Value) GroupUint8(grouper func(int, uint8) string) *Value {\n\tgroups := make(map[string][]uint8)\n\tv.EachUint8(func(index int, val uint8) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]uint8, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceUint8 uses the specified function to replace each uint8\n// by iterating over each item.  The data in the returned result will be a\n// []uint8 containing the replaced items.\nfunc (v *Value) ReplaceUint8(replacer func(int, uint8) uint8) *Value {\n\tarr := v.MustUint8Slice()\n\treplaced := make([]uint8, len(arr))\n\tv.EachUint8(func(index int, val uint8) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectUint8 uses the specified collector function to collect a value\n// for each of the uint8s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectUint8(collector func(int, uint8) interface{}) *Value {\n\tarr := v.MustUint8Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachUint8(func(index int, val uint8) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Uint16 (uint16 and []uint16)\n*/\n\n// Uint16 gets the value as a uint16, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Uint16(optionalDefault ...uint16) uint16 {\n\tif s, ok := v.data.(uint16); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustUint16 gets the value as a uint16.\n//\n// Panics if the object is not a uint16.\nfunc (v *Value) MustUint16() uint16 {\n\treturn v.data.(uint16)\n}\n\n// Uint16Slice gets the value as a []uint16, returns the optionalDefault\n// value or nil if the value is not a []uint16.\nfunc (v *Value) Uint16Slice(optionalDefault ...[]uint16) []uint16 {\n\tif s, ok := v.data.([]uint16); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustUint16Slice gets the value as a []uint16.\n//\n// Panics if the object is not a []uint16.\nfunc (v *Value) MustUint16Slice() []uint16 {\n\treturn v.data.([]uint16)\n}\n\n// IsUint16 gets whether the object contained is a uint16 or not.\nfunc (v *Value) IsUint16() bool {\n\t_, ok := v.data.(uint16)\n\treturn ok\n}\n\n// IsUint16Slice gets whether the object contained is a []uint16 or not.\nfunc (v *Value) IsUint16Slice() bool {\n\t_, ok := v.data.([]uint16)\n\treturn ok\n}\n\n// EachUint16 calls the specified callback for each object\n// in the []uint16.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachUint16(callback func(int, uint16) bool) *Value {\n\tfor index, val := range v.MustUint16Slice() {\n\t\tcarryon := 
callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereUint16 uses the specified decider function to select items\n// from the []uint16.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereUint16(decider func(int, uint16) bool) *Value {\n\tvar selected []uint16\n\tv.EachUint16(func(index int, val uint16) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupUint16 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]uint16.\nfunc (v *Value) GroupUint16(grouper func(int, uint16) string) *Value {\n\tgroups := make(map[string][]uint16)\n\tv.EachUint16(func(index int, val uint16) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]uint16, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceUint16 uses the specified function to replace each uint16\n// by iterating over each item.  The data in the returned result will be a\n// []uint16 containing the replaced items.\nfunc (v *Value) ReplaceUint16(replacer func(int, uint16) uint16) *Value {\n\tarr := v.MustUint16Slice()\n\treplaced := make([]uint16, len(arr))\n\tv.EachUint16(func(index int, val uint16) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectUint16 uses the specified collector function to collect a value\n// for each of the uint16s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectUint16(collector func(int, uint16) interface{}) *Value {\n\tarr := v.MustUint16Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachUint16(func(index int, val uint16) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Uint32 (uint32 and []uint32)\n*/\n\n// Uint32 gets the value as a uint32, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Uint32(optionalDefault ...uint32) uint32 {\n\tif s, ok := v.data.(uint32); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustUint32 gets the value as a uint32.\n//\n// Panics if the object is not a uint32.\nfunc (v *Value) MustUint32() uint32 {\n\treturn v.data.(uint32)\n}\n\n// Uint32Slice gets the value as a []uint32, returns the optionalDefault\n// value or nil if the value is not a []uint32.\nfunc (v *Value) Uint32Slice(optionalDefault ...[]uint32) []uint32 {\n\tif s, ok := v.data.([]uint32); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustUint32Slice gets the value as a []uint32.\n//\n// Panics if the object is not a []uint32.\nfunc (v *Value) MustUint32Slice() []uint32 {\n\treturn v.data.([]uint32)\n}\n\n// IsUint32 gets whether the object contained is a uint32 or not.\nfunc (v *Value) IsUint32() bool {\n\t_, ok := v.data.(uint32)\n\treturn ok\n}\n\n// IsUint32Slice gets whether the object contained is a []uint32 or not.\nfunc (v *Value) IsUint32Slice() bool {\n\t_, ok := v.data.([]uint32)\n\treturn ok\n}\n\n// EachUint32 calls the specified callback for each object\n// in the []uint32.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachUint32(callback func(int, uint32) bool) *Value {\n\tfor index, val := range v.MustUint32Slice() {\n\t\tcarryon := 
callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereUint32 uses the specified decider function to select items\n// from the []uint32.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereUint32(decider func(int, uint32) bool) *Value {\n\tvar selected []uint32\n\tv.EachUint32(func(index int, val uint32) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupUint32 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]uint32.\nfunc (v *Value) GroupUint32(grouper func(int, uint32) string) *Value {\n\tgroups := make(map[string][]uint32)\n\tv.EachUint32(func(index int, val uint32) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]uint32, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceUint32 uses the specified function to replace each uint32s\n// by iterating each item.  The data in the returned result will be a\n// []uint32 containing the replaced items.\nfunc (v *Value) ReplaceUint32(replacer func(int, uint32) uint32) *Value {\n\tarr := v.MustUint32Slice()\n\treplaced := make([]uint32, len(arr))\n\tv.EachUint32(func(index int, val uint32) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectUint32 uses the specified collector function to collect a value\n// for each of the uint32s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectUint32(collector func(int, uint32) interface{}) *Value {\n\tarr := v.MustUint32Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachUint32(func(index int, val uint32) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Uint64 (uint64 and []uint64)\n*/\n\n// Uint64 gets the value as a uint64, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Uint64(optionalDefault ...uint64) uint64 {\n\tif s, ok := v.data.(uint64); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustUint64 gets the value as a uint64.\n//\n// Panics if the object is not a uint64.\nfunc (v *Value) MustUint64() uint64 {\n\treturn v.data.(uint64)\n}\n\n// Uint64Slice gets the value as a []uint64, returns the optionalDefault\n// value or nil if the value is not a []uint64.\nfunc (v *Value) Uint64Slice(optionalDefault ...[]uint64) []uint64 {\n\tif s, ok := v.data.([]uint64); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustUint64Slice gets the value as a []uint64.\n//\n// Panics if the object is not a []uint64.\nfunc (v *Value) MustUint64Slice() []uint64 {\n\treturn v.data.([]uint64)\n}\n\n// IsUint64 gets whether the object contained is a uint64 or not.\nfunc (v *Value) IsUint64() bool {\n\t_, ok := v.data.(uint64)\n\treturn ok\n}\n\n// IsUint64Slice gets whether the object contained is a []uint64 or not.\nfunc (v *Value) IsUint64Slice() bool {\n\t_, ok := v.data.([]uint64)\n\treturn ok\n}\n\n// EachUint64 calls the specified callback for each object\n// in the []uint64.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachUint64(callback func(int, uint64) bool) *Value {\n\tfor index, val := range v.MustUint64Slice() {\n\t\tcarryon := 
callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereUint64 uses the specified decider function to select items\n// from the []uint64.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereUint64(decider func(int, uint64) bool) *Value {\n\tvar selected []uint64\n\tv.EachUint64(func(index int, val uint64) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupUint64 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]uint64.\nfunc (v *Value) GroupUint64(grouper func(int, uint64) string) *Value {\n\tgroups := make(map[string][]uint64)\n\tv.EachUint64(func(index int, val uint64) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]uint64, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceUint64 uses the specified function to replace each uint64s\n// by iterating each item.  The data in the returned result will be a\n// []uint64 containing the replaced items.\nfunc (v *Value) ReplaceUint64(replacer func(int, uint64) uint64) *Value {\n\tarr := v.MustUint64Slice()\n\treplaced := make([]uint64, len(arr))\n\tv.EachUint64(func(index int, val uint64) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectUint64 uses the specified collector function to collect a value\n// for each of the uint64s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectUint64(collector func(int, uint64) interface{}) *Value {\n\tarr := v.MustUint64Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachUint64(func(index int, val uint64) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Uintptr (uintptr and []uintptr)\n*/\n\n// Uintptr gets the value as a uintptr, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Uintptr(optionalDefault ...uintptr) uintptr {\n\tif s, ok := v.data.(uintptr); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustUintptr gets the value as a uintptr.\n//\n// Panics if the object is not a uintptr.\nfunc (v *Value) MustUintptr() uintptr {\n\treturn v.data.(uintptr)\n}\n\n// UintptrSlice gets the value as a []uintptr, returns the optionalDefault\n// value or nil if the value is not a []uintptr.\nfunc (v *Value) UintptrSlice(optionalDefault ...[]uintptr) []uintptr {\n\tif s, ok := v.data.([]uintptr); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustUintptrSlice gets the value as a []uintptr.\n//\n// Panics if the object is not a []uintptr.\nfunc (v *Value) MustUintptrSlice() []uintptr {\n\treturn v.data.([]uintptr)\n}\n\n// IsUintptr gets whether the object contained is a uintptr or not.\nfunc (v *Value) IsUintptr() bool {\n\t_, ok := v.data.(uintptr)\n\treturn ok\n}\n\n// IsUintptrSlice gets whether the object contained is a []uintptr or not.\nfunc (v *Value) IsUintptrSlice() bool {\n\t_, ok := v.data.([]uintptr)\n\treturn ok\n}\n\n// EachUintptr calls the specified callback for each object\n// in the []uintptr.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachUintptr(callback func(int, uintptr) bool) *Value {\n\tfor index, val := range 
v.MustUintptrSlice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereUintptr uses the specified decider function to select items\n// from the []uintptr.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereUintptr(decider func(int, uintptr) bool) *Value {\n\tvar selected []uintptr\n\tv.EachUintptr(func(index int, val uintptr) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupUintptr uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]uintptr.\nfunc (v *Value) GroupUintptr(grouper func(int, uintptr) string) *Value {\n\tgroups := make(map[string][]uintptr)\n\tv.EachUintptr(func(index int, val uintptr) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]uintptr, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceUintptr uses the specified function to replace each uintptrs\n// by iterating each item.  The data in the returned result will be a\n// []uintptr containing the replaced items.\nfunc (v *Value) ReplaceUintptr(replacer func(int, uintptr) uintptr) *Value {\n\tarr := v.MustUintptrSlice()\n\treplaced := make([]uintptr, len(arr))\n\tv.EachUintptr(func(index int, val uintptr) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectUintptr uses the specified collector function to collect a value\n// for each of the uintptrs in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectUintptr(collector func(int, uintptr) interface{}) *Value {\n\tarr := v.MustUintptrSlice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachUintptr(func(index int, val uintptr) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Float32 (float32 and []float32)\n*/\n\n// Float32 gets the value as a float32, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Float32(optionalDefault ...float32) float32 {\n\tif s, ok := v.data.(float32); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustFloat32 gets the value as a float32.\n//\n// Panics if the object is not a float32.\nfunc (v *Value) MustFloat32() float32 {\n\treturn v.data.(float32)\n}\n\n// Float32Slice gets the value as a []float32, returns the optionalDefault\n// value or nil if the value is not a []float32.\nfunc (v *Value) Float32Slice(optionalDefault ...[]float32) []float32 {\n\tif s, ok := v.data.([]float32); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustFloat32Slice gets the value as a []float32.\n//\n// Panics if the object is not a []float32.\nfunc (v *Value) MustFloat32Slice() []float32 {\n\treturn v.data.([]float32)\n}\n\n// IsFloat32 gets whether the object contained is a float32 or not.\nfunc (v *Value) IsFloat32() bool {\n\t_, ok := v.data.(float32)\n\treturn ok\n}\n\n// IsFloat32Slice gets whether the object contained is a []float32 or not.\nfunc (v *Value) IsFloat32Slice() bool {\n\t_, ok := v.data.([]float32)\n\treturn ok\n}\n\n// EachFloat32 calls the specified callback for each object\n// in the []float32.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachFloat32(callback func(int, float32) bool) *Value {\n\tfor index, val := 
range v.MustFloat32Slice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereFloat32 uses the specified decider function to select items\n// from the []float32.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereFloat32(decider func(int, float32) bool) *Value {\n\tvar selected []float32\n\tv.EachFloat32(func(index int, val float32) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupFloat32 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]float32.\nfunc (v *Value) GroupFloat32(grouper func(int, float32) string) *Value {\n\tgroups := make(map[string][]float32)\n\tv.EachFloat32(func(index int, val float32) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]float32, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceFloat32 uses the specified function to replace each float32s\n// by iterating each item.  The data in the returned result will be a\n// []float32 containing the replaced items.\nfunc (v *Value) ReplaceFloat32(replacer func(int, float32) float32) *Value {\n\tarr := v.MustFloat32Slice()\n\treplaced := make([]float32, len(arr))\n\tv.EachFloat32(func(index int, val float32) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectFloat32 uses the specified collector function to collect a value\n// for each of the float32s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectFloat32(collector func(int, float32) interface{}) *Value {\n\tarr := v.MustFloat32Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachFloat32(func(index int, val float32) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Float64 (float64 and []float64)\n*/\n\n// Float64 gets the value as a float64, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Float64(optionalDefault ...float64) float64 {\n\tif s, ok := v.data.(float64); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustFloat64 gets the value as a float64.\n//\n// Panics if the object is not a float64.\nfunc (v *Value) MustFloat64() float64 {\n\treturn v.data.(float64)\n}\n\n// Float64Slice gets the value as a []float64, returns the optionalDefault\n// value or nil if the value is not a []float64.\nfunc (v *Value) Float64Slice(optionalDefault ...[]float64) []float64 {\n\tif s, ok := v.data.([]float64); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustFloat64Slice gets the value as a []float64.\n//\n// Panics if the object is not a []float64.\nfunc (v *Value) MustFloat64Slice() []float64 {\n\treturn v.data.([]float64)\n}\n\n// IsFloat64 gets whether the object contained is a float64 or not.\nfunc (v *Value) IsFloat64() bool {\n\t_, ok := v.data.(float64)\n\treturn ok\n}\n\n// IsFloat64Slice gets whether the object contained is a []float64 or not.\nfunc (v *Value) IsFloat64Slice() bool {\n\t_, ok := v.data.([]float64)\n\treturn ok\n}\n\n// EachFloat64 calls the specified callback for each object\n// in the []float64.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) EachFloat64(callback func(int, float64) bool) *Value {\n\tfor index, val := 
range v.MustFloat64Slice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereFloat64 uses the specified decider function to select items\n// from the []float64.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereFloat64(decider func(int, float64) bool) *Value {\n\tvar selected []float64\n\tv.EachFloat64(func(index int, val float64) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupFloat64 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]float64.\nfunc (v *Value) GroupFloat64(grouper func(int, float64) string) *Value {\n\tgroups := make(map[string][]float64)\n\tv.EachFloat64(func(index int, val float64) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]float64, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceFloat64 uses the specified function to replace each float64s\n// by iterating each item.  The data in the returned result will be a\n// []float64 containing the replaced items.\nfunc (v *Value) ReplaceFloat64(replacer func(int, float64) float64) *Value {\n\tarr := v.MustFloat64Slice()\n\treplaced := make([]float64, len(arr))\n\tv.EachFloat64(func(index int, val float64) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectFloat64 uses the specified collector function to collect a value\n// for each of the float64s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectFloat64(collector func(int, float64) interface{}) *Value {\n\tarr := v.MustFloat64Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachFloat64(func(index int, val float64) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Complex64 (complex64 and []complex64)\n*/\n\n// Complex64 gets the value as a complex64, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Complex64(optionalDefault ...complex64) complex64 {\n\tif s, ok := v.data.(complex64); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustComplex64 gets the value as a complex64.\n//\n// Panics if the object is not a complex64.\nfunc (v *Value) MustComplex64() complex64 {\n\treturn v.data.(complex64)\n}\n\n// Complex64Slice gets the value as a []complex64, returns the optionalDefault\n// value or nil if the value is not a []complex64.\nfunc (v *Value) Complex64Slice(optionalDefault ...[]complex64) []complex64 {\n\tif s, ok := v.data.([]complex64); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustComplex64Slice gets the value as a []complex64.\n//\n// Panics if the object is not a []complex64.\nfunc (v *Value) MustComplex64Slice() []complex64 {\n\treturn v.data.([]complex64)\n}\n\n// IsComplex64 gets whether the object contained is a complex64 or not.\nfunc (v *Value) IsComplex64() bool {\n\t_, ok := v.data.(complex64)\n\treturn ok\n}\n\n// IsComplex64Slice gets whether the object contained is a []complex64 or not.\nfunc (v *Value) IsComplex64Slice() bool {\n\t_, ok := v.data.([]complex64)\n\treturn ok\n}\n\n// EachComplex64 calls the specified callback for each object\n// in the []complex64.\n//\n// Panics if the object is the wrong type.\nfunc (v *Value) 
EachComplex64(callback func(int, complex64) bool) *Value {\n\tfor index, val := range v.MustComplex64Slice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereComplex64 uses the specified decider function to select items\n// from the []complex64.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereComplex64(decider func(int, complex64) bool) *Value {\n\tvar selected []complex64\n\tv.EachComplex64(func(index int, val complex64) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupComplex64 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]complex64.\nfunc (v *Value) GroupComplex64(grouper func(int, complex64) string) *Value {\n\tgroups := make(map[string][]complex64)\n\tv.EachComplex64(func(index int, val complex64) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]complex64, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceComplex64 uses the specified function to replace each complex64s\n// by iterating each item.  The data in the returned result will be a\n// []complex64 containing the replaced items.\nfunc (v *Value) ReplaceComplex64(replacer func(int, complex64) complex64) *Value {\n\tarr := v.MustComplex64Slice()\n\treplaced := make([]complex64, len(arr))\n\tv.EachComplex64(func(index int, val complex64) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectComplex64 uses the specified collector function to collect a value\n// for each of the complex64s in the slice.  
The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectComplex64(collector func(int, complex64) interface{}) *Value {\n\tarr := v.MustComplex64Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachComplex64(func(index int, val complex64) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n\n/*\n   Complex128 (complex128 and []complex128)\n*/\n\n// Complex128 gets the value as a complex128, returns the optionalDefault\n// value or a system default object if the value is the wrong type.\nfunc (v *Value) Complex128(optionalDefault ...complex128) complex128 {\n\tif s, ok := v.data.(complex128); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn 0\n}\n\n// MustComplex128 gets the value as a complex128.\n//\n// Panics if the object is not a complex128.\nfunc (v *Value) MustComplex128() complex128 {\n\treturn v.data.(complex128)\n}\n\n// Complex128Slice gets the value as a []complex128, returns the optionalDefault\n// value or nil if the value is not a []complex128.\nfunc (v *Value) Complex128Slice(optionalDefault ...[]complex128) []complex128 {\n\tif s, ok := v.data.([]complex128); ok {\n\t\treturn s\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\treturn nil\n}\n\n// MustComplex128Slice gets the value as a []complex128.\n//\n// Panics if the object is not a []complex128.\nfunc (v *Value) MustComplex128Slice() []complex128 {\n\treturn v.data.([]complex128)\n}\n\n// IsComplex128 gets whether the object contained is a complex128 or not.\nfunc (v *Value) IsComplex128() bool {\n\t_, ok := v.data.(complex128)\n\treturn ok\n}\n\n// IsComplex128Slice gets whether the object contained is a []complex128 or not.\nfunc (v *Value) IsComplex128Slice() bool {\n\t_, ok := v.data.([]complex128)\n\treturn ok\n}\n\n// EachComplex128 calls the specified callback for each object\n// in the []complex128.\n//\n// Panics if the 
object is the wrong type.\nfunc (v *Value) EachComplex128(callback func(int, complex128) bool) *Value {\n\tfor index, val := range v.MustComplex128Slice() {\n\t\tcarryon := callback(index, val)\n\t\tif !carryon {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn v\n}\n\n// WhereComplex128 uses the specified decider function to select items\n// from the []complex128.  The object contained in the result will contain\n// only the selected items.\nfunc (v *Value) WhereComplex128(decider func(int, complex128) bool) *Value {\n\tvar selected []complex128\n\tv.EachComplex128(func(index int, val complex128) bool {\n\t\tshouldSelect := decider(index, val)\n\t\tif !shouldSelect {\n\t\t\tselected = append(selected, val)\n\t\t}\n\t\treturn true\n\t})\n\treturn &Value{data: selected}\n}\n\n// GroupComplex128 uses the specified grouper function to group the items\n// keyed by the return of the grouper.  The object contained in the\n// result will contain a map[string][]complex128.\nfunc (v *Value) GroupComplex128(grouper func(int, complex128) string) *Value {\n\tgroups := make(map[string][]complex128)\n\tv.EachComplex128(func(index int, val complex128) bool {\n\t\tgroup := grouper(index, val)\n\t\tif _, ok := groups[group]; !ok {\n\t\t\tgroups[group] = make([]complex128, 0)\n\t\t}\n\t\tgroups[group] = append(groups[group], val)\n\t\treturn true\n\t})\n\treturn &Value{data: groups}\n}\n\n// ReplaceComplex128 uses the specified function to replace each complex128s\n// by iterating each item.  
The data in the returned result will be a\n// []complex128 containing the replaced items.\nfunc (v *Value) ReplaceComplex128(replacer func(int, complex128) complex128) *Value {\n\tarr := v.MustComplex128Slice()\n\treplaced := make([]complex128, len(arr))\n\tv.EachComplex128(func(index int, val complex128) bool {\n\t\treplaced[index] = replacer(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: replaced}\n}\n\n// CollectComplex128 uses the specified collector function to collect a value\n// for each of the complex128s in the slice.  The data returned will be a\n// []interface{}.\nfunc (v *Value) CollectComplex128(collector func(int, complex128) interface{}) *Value {\n\tarr := v.MustComplex128Slice()\n\tcollected := make([]interface{}, len(arr))\n\tv.EachComplex128(func(index int, val complex128) bool {\n\t\tcollected[index] = collector(index, val)\n\t\treturn true\n\t})\n\treturn &Value{data: collected}\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/objx/value.go",
    "content": "package objx\n\nimport (\n\t\"fmt\"\n\t\"strconv\"\n)\n\n// Value provides methods for extracting interface{} data in various\n// types.\ntype Value struct {\n\t// data contains the raw data being managed by this Value\n\tdata interface{}\n}\n\n// Data returns the raw data contained by this Value\nfunc (v *Value) Data() interface{} {\n\treturn v.data\n}\n\n// String returns the value always as a string\nfunc (v *Value) String() string {\n\tswitch {\n\tcase v.IsNil():\n\t\treturn \"\"\n\tcase v.IsStr():\n\t\treturn v.Str()\n\tcase v.IsBool():\n\t\treturn strconv.FormatBool(v.Bool())\n\tcase v.IsFloat32():\n\t\treturn strconv.FormatFloat(float64(v.Float32()), 'f', -1, 32)\n\tcase v.IsFloat64():\n\t\treturn strconv.FormatFloat(v.Float64(), 'f', -1, 64)\n\tcase v.IsInt():\n\t\treturn strconv.FormatInt(int64(v.Int()), 10)\n\tcase v.IsInt8():\n\t\treturn strconv.FormatInt(int64(v.Int8()), 10)\n\tcase v.IsInt16():\n\t\treturn strconv.FormatInt(int64(v.Int16()), 10)\n\tcase v.IsInt32():\n\t\treturn strconv.FormatInt(int64(v.Int32()), 10)\n\tcase v.IsInt64():\n\t\treturn strconv.FormatInt(v.Int64(), 10)\n\tcase v.IsUint():\n\t\treturn strconv.FormatUint(uint64(v.Uint()), 10)\n\tcase v.IsUint8():\n\t\treturn strconv.FormatUint(uint64(v.Uint8()), 10)\n\tcase v.IsUint16():\n\t\treturn strconv.FormatUint(uint64(v.Uint16()), 10)\n\tcase v.IsUint32():\n\t\treturn strconv.FormatUint(uint64(v.Uint32()), 10)\n\tcase v.IsUint64():\n\t\treturn strconv.FormatUint(v.Uint64(), 10)\n\t}\n\treturn fmt.Sprintf(\"%#v\", v.Data())\n}\n\n// StringSlice returns the value always as a []string\nfunc (v *Value) StringSlice(optionalDefault ...[]string) []string {\n\tswitch {\n\tcase v.IsStrSlice():\n\t\treturn v.MustStrSlice()\n\tcase v.IsBoolSlice():\n\t\tslice := v.MustBoolSlice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatBool(iv)\n\t\t}\n\t\treturn vals\n\tcase v.IsFloat32Slice():\n\t\tslice := 
v.MustFloat32Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatFloat(float64(iv), 'f', -1, 32)\n\t\t}\n\t\treturn vals\n\tcase v.IsFloat64Slice():\n\t\tslice := v.MustFloat64Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatFloat(iv, 'f', -1, 64)\n\t\t}\n\t\treturn vals\n\tcase v.IsIntSlice():\n\t\tslice := v.MustIntSlice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatInt(int64(iv), 10)\n\t\t}\n\t\treturn vals\n\tcase v.IsInt8Slice():\n\t\tslice := v.MustInt8Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatInt(int64(iv), 10)\n\t\t}\n\t\treturn vals\n\tcase v.IsInt16Slice():\n\t\tslice := v.MustInt16Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatInt(int64(iv), 10)\n\t\t}\n\t\treturn vals\n\tcase v.IsInt32Slice():\n\t\tslice := v.MustInt32Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatInt(int64(iv), 10)\n\t\t}\n\t\treturn vals\n\tcase v.IsInt64Slice():\n\t\tslice := v.MustInt64Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatInt(iv, 10)\n\t\t}\n\t\treturn vals\n\tcase v.IsUintSlice():\n\t\tslice := v.MustUintSlice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatUint(uint64(iv), 10)\n\t\t}\n\t\treturn vals\n\tcase v.IsUint8Slice():\n\t\tslice := v.MustUint8Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatUint(uint64(iv), 10)\n\t\t}\n\t\treturn vals\n\tcase v.IsUint16Slice():\n\t\tslice := v.MustUint16Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatUint(uint64(iv), 
10)\n\t\t}\n\t\treturn vals\n\tcase v.IsUint32Slice():\n\t\tslice := v.MustUint32Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatUint(uint64(iv), 10)\n\t\t}\n\t\treturn vals\n\tcase v.IsUint64Slice():\n\t\tslice := v.MustUint64Slice()\n\t\tvals := make([]string, len(slice))\n\t\tfor i, iv := range slice {\n\t\t\tvals[i] = strconv.FormatUint(iv, 10)\n\t\t}\n\t\treturn vals\n\t}\n\tif len(optionalDefault) == 1 {\n\t\treturn optionalDefault[0]\n\t}\n\n\treturn []string{}\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2012-2020 Mat Ryer, Tyler Bunnell and contributors.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/assertion_compare.go",
    "content": "package assert\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"time\"\n)\n\ntype CompareType int\n\nconst (\n\tcompareLess CompareType = iota - 1\n\tcompareEqual\n\tcompareGreater\n)\n\nvar (\n\tintType   = reflect.TypeOf(int(1))\n\tint8Type  = reflect.TypeOf(int8(1))\n\tint16Type = reflect.TypeOf(int16(1))\n\tint32Type = reflect.TypeOf(int32(1))\n\tint64Type = reflect.TypeOf(int64(1))\n\n\tuintType   = reflect.TypeOf(uint(1))\n\tuint8Type  = reflect.TypeOf(uint8(1))\n\tuint16Type = reflect.TypeOf(uint16(1))\n\tuint32Type = reflect.TypeOf(uint32(1))\n\tuint64Type = reflect.TypeOf(uint64(1))\n\n\tfloat32Type = reflect.TypeOf(float32(1))\n\tfloat64Type = reflect.TypeOf(float64(1))\n\n\tstringType = reflect.TypeOf(\"\")\n\n\ttimeType  = reflect.TypeOf(time.Time{})\n\tbytesType = reflect.TypeOf([]byte{})\n)\n\nfunc compare(obj1, obj2 interface{}, kind reflect.Kind) (CompareType, bool) {\n\tobj1Value := reflect.ValueOf(obj1)\n\tobj2Value := reflect.ValueOf(obj2)\n\n\t// throughout this switch we try and avoid calling .Convert() if possible,\n\t// as this has a pretty big performance impact\n\tswitch kind {\n\tcase reflect.Int:\n\t\t{\n\t\t\tintobj1, ok := obj1.(int)\n\t\t\tif !ok {\n\t\t\t\tintobj1 = obj1Value.Convert(intType).Interface().(int)\n\t\t\t}\n\t\t\tintobj2, ok := obj2.(int)\n\t\t\tif !ok {\n\t\t\t\tintobj2 = obj2Value.Convert(intType).Interface().(int)\n\t\t\t}\n\t\t\tif intobj1 > intobj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif intobj1 == intobj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif intobj1 < intobj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Int8:\n\t\t{\n\t\t\tint8obj1, ok := obj1.(int8)\n\t\t\tif !ok {\n\t\t\t\tint8obj1 = obj1Value.Convert(int8Type).Interface().(int8)\n\t\t\t}\n\t\t\tint8obj2, ok := obj2.(int8)\n\t\t\tif !ok {\n\t\t\t\tint8obj2 = obj2Value.Convert(int8Type).Interface().(int8)\n\t\t\t}\n\t\t\tif int8obj1 > int8obj2 {\n\t\t\t\treturn compareGreater, 
true\n\t\t\t}\n\t\t\tif int8obj1 == int8obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif int8obj1 < int8obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Int16:\n\t\t{\n\t\t\tint16obj1, ok := obj1.(int16)\n\t\t\tif !ok {\n\t\t\t\tint16obj1 = obj1Value.Convert(int16Type).Interface().(int16)\n\t\t\t}\n\t\t\tint16obj2, ok := obj2.(int16)\n\t\t\tif !ok {\n\t\t\t\tint16obj2 = obj2Value.Convert(int16Type).Interface().(int16)\n\t\t\t}\n\t\t\tif int16obj1 > int16obj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif int16obj1 == int16obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif int16obj1 < int16obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Int32:\n\t\t{\n\t\t\tint32obj1, ok := obj1.(int32)\n\t\t\tif !ok {\n\t\t\t\tint32obj1 = obj1Value.Convert(int32Type).Interface().(int32)\n\t\t\t}\n\t\t\tint32obj2, ok := obj2.(int32)\n\t\t\tif !ok {\n\t\t\t\tint32obj2 = obj2Value.Convert(int32Type).Interface().(int32)\n\t\t\t}\n\t\t\tif int32obj1 > int32obj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif int32obj1 == int32obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif int32obj1 < int32obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Int64:\n\t\t{\n\t\t\tint64obj1, ok := obj1.(int64)\n\t\t\tif !ok {\n\t\t\t\tint64obj1 = obj1Value.Convert(int64Type).Interface().(int64)\n\t\t\t}\n\t\t\tint64obj2, ok := obj2.(int64)\n\t\t\tif !ok {\n\t\t\t\tint64obj2 = obj2Value.Convert(int64Type).Interface().(int64)\n\t\t\t}\n\t\t\tif int64obj1 > int64obj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif int64obj1 == int64obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif int64obj1 < int64obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Uint:\n\t\t{\n\t\t\tuintobj1, ok := obj1.(uint)\n\t\t\tif !ok {\n\t\t\t\tuintobj1 = obj1Value.Convert(uintType).Interface().(uint)\n\t\t\t}\n\t\t\tuintobj2, ok := obj2.(uint)\n\t\t\tif !ok 
{\n\t\t\t\tuintobj2 = obj2Value.Convert(uintType).Interface().(uint)\n\t\t\t}\n\t\t\tif uintobj1 > uintobj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif uintobj1 == uintobj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif uintobj1 < uintobj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Uint8:\n\t\t{\n\t\t\tuint8obj1, ok := obj1.(uint8)\n\t\t\tif !ok {\n\t\t\t\tuint8obj1 = obj1Value.Convert(uint8Type).Interface().(uint8)\n\t\t\t}\n\t\t\tuint8obj2, ok := obj2.(uint8)\n\t\t\tif !ok {\n\t\t\t\tuint8obj2 = obj2Value.Convert(uint8Type).Interface().(uint8)\n\t\t\t}\n\t\t\tif uint8obj1 > uint8obj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif uint8obj1 == uint8obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif uint8obj1 < uint8obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Uint16:\n\t\t{\n\t\t\tuint16obj1, ok := obj1.(uint16)\n\t\t\tif !ok {\n\t\t\t\tuint16obj1 = obj1Value.Convert(uint16Type).Interface().(uint16)\n\t\t\t}\n\t\t\tuint16obj2, ok := obj2.(uint16)\n\t\t\tif !ok {\n\t\t\t\tuint16obj2 = obj2Value.Convert(uint16Type).Interface().(uint16)\n\t\t\t}\n\t\t\tif uint16obj1 > uint16obj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif uint16obj1 == uint16obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif uint16obj1 < uint16obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Uint32:\n\t\t{\n\t\t\tuint32obj1, ok := obj1.(uint32)\n\t\t\tif !ok {\n\t\t\t\tuint32obj1 = obj1Value.Convert(uint32Type).Interface().(uint32)\n\t\t\t}\n\t\t\tuint32obj2, ok := obj2.(uint32)\n\t\t\tif !ok {\n\t\t\t\tuint32obj2 = obj2Value.Convert(uint32Type).Interface().(uint32)\n\t\t\t}\n\t\t\tif uint32obj1 > uint32obj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif uint32obj1 == uint32obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif uint32obj1 < uint32obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase 
reflect.Uint64:\n\t\t{\n\t\t\tuint64obj1, ok := obj1.(uint64)\n\t\t\tif !ok {\n\t\t\t\tuint64obj1 = obj1Value.Convert(uint64Type).Interface().(uint64)\n\t\t\t}\n\t\t\tuint64obj2, ok := obj2.(uint64)\n\t\t\tif !ok {\n\t\t\t\tuint64obj2 = obj2Value.Convert(uint64Type).Interface().(uint64)\n\t\t\t}\n\t\t\tif uint64obj1 > uint64obj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif uint64obj1 == uint64obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif uint64obj1 < uint64obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Float32:\n\t\t{\n\t\t\tfloat32obj1, ok := obj1.(float32)\n\t\t\tif !ok {\n\t\t\t\tfloat32obj1 = obj1Value.Convert(float32Type).Interface().(float32)\n\t\t\t}\n\t\t\tfloat32obj2, ok := obj2.(float32)\n\t\t\tif !ok {\n\t\t\t\tfloat32obj2 = obj2Value.Convert(float32Type).Interface().(float32)\n\t\t\t}\n\t\t\tif float32obj1 > float32obj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif float32obj1 == float32obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif float32obj1 < float32obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.Float64:\n\t\t{\n\t\t\tfloat64obj1, ok := obj1.(float64)\n\t\t\tif !ok {\n\t\t\t\tfloat64obj1 = obj1Value.Convert(float64Type).Interface().(float64)\n\t\t\t}\n\t\t\tfloat64obj2, ok := obj2.(float64)\n\t\t\tif !ok {\n\t\t\t\tfloat64obj2 = obj2Value.Convert(float64Type).Interface().(float64)\n\t\t\t}\n\t\t\tif float64obj1 > float64obj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif float64obj1 == float64obj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif float64obj1 < float64obj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\tcase reflect.String:\n\t\t{\n\t\t\tstringobj1, ok := obj1.(string)\n\t\t\tif !ok {\n\t\t\t\tstringobj1 = obj1Value.Convert(stringType).Interface().(string)\n\t\t\t}\n\t\t\tstringobj2, ok := obj2.(string)\n\t\t\tif !ok {\n\t\t\t\tstringobj2 = 
obj2Value.Convert(stringType).Interface().(string)\n\t\t\t}\n\t\t\tif stringobj1 > stringobj2 {\n\t\t\t\treturn compareGreater, true\n\t\t\t}\n\t\t\tif stringobj1 == stringobj2 {\n\t\t\t\treturn compareEqual, true\n\t\t\t}\n\t\t\tif stringobj1 < stringobj2 {\n\t\t\t\treturn compareLess, true\n\t\t\t}\n\t\t}\n\t// Check for known struct types we can check for compare results.\n\tcase reflect.Struct:\n\t\t{\n\t\t\t// All structs enter here. We're not interested in most types.\n\t\t\tif !canConvert(obj1Value, timeType) {\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\t// time.Time can compared!\n\t\t\ttimeObj1, ok := obj1.(time.Time)\n\t\t\tif !ok {\n\t\t\t\ttimeObj1 = obj1Value.Convert(timeType).Interface().(time.Time)\n\t\t\t}\n\n\t\t\ttimeObj2, ok := obj2.(time.Time)\n\t\t\tif !ok {\n\t\t\t\ttimeObj2 = obj2Value.Convert(timeType).Interface().(time.Time)\n\t\t\t}\n\n\t\t\treturn compare(timeObj1.UnixNano(), timeObj2.UnixNano(), reflect.Int64)\n\t\t}\n\tcase reflect.Slice:\n\t\t{\n\t\t\t// We only care about the []byte type.\n\t\t\tif !canConvert(obj1Value, bytesType) {\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\t// []byte can be compared!\n\t\t\tbytesObj1, ok := obj1.([]byte)\n\t\t\tif !ok {\n\t\t\t\tbytesObj1 = obj1Value.Convert(bytesType).Interface().([]byte)\n\n\t\t\t}\n\t\t\tbytesObj2, ok := obj2.([]byte)\n\t\t\tif !ok {\n\t\t\t\tbytesObj2 = obj2Value.Convert(bytesType).Interface().([]byte)\n\t\t\t}\n\n\t\t\treturn CompareType(bytes.Compare(bytesObj1, bytesObj2)), true\n\t\t}\n\t}\n\n\treturn compareEqual, false\n}\n\n// Greater asserts that the first element is greater than the second\n//\n//\tassert.Greater(t, 2, 1)\n//\tassert.Greater(t, float64(2), float64(1))\n//\tassert.Greater(t, \"b\", \"a\")\nfunc Greater(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn compareTwoValues(t, e1, e2, []CompareType{compareGreater}, \"\\\"%v\\\" is not greater than \\\"%v\\\"\", msgAndArgs...)\n}\n\n// 
GreaterOrEqual asserts that the first element is greater than or equal to the second\n//\n//\tassert.GreaterOrEqual(t, 2, 1)\n//\tassert.GreaterOrEqual(t, 2, 2)\n//\tassert.GreaterOrEqual(t, \"b\", \"a\")\n//\tassert.GreaterOrEqual(t, \"b\", \"b\")\nfunc GreaterOrEqual(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn compareTwoValues(t, e1, e2, []CompareType{compareGreater, compareEqual}, \"\\\"%v\\\" is not greater than or equal to \\\"%v\\\"\", msgAndArgs...)\n}\n\n// Less asserts that the first element is less than the second\n//\n//\tassert.Less(t, 1, 2)\n//\tassert.Less(t, float64(1), float64(2))\n//\tassert.Less(t, \"a\", \"b\")\nfunc Less(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn compareTwoValues(t, e1, e2, []CompareType{compareLess}, \"\\\"%v\\\" is not less than \\\"%v\\\"\", msgAndArgs...)\n}\n\n// LessOrEqual asserts that the first element is less than or equal to the second\n//\n//\tassert.LessOrEqual(t, 1, 2)\n//\tassert.LessOrEqual(t, 2, 2)\n//\tassert.LessOrEqual(t, \"a\", \"b\")\n//\tassert.LessOrEqual(t, \"b\", \"b\")\nfunc LessOrEqual(t TestingT, e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn compareTwoValues(t, e1, e2, []CompareType{compareLess, compareEqual}, \"\\\"%v\\\" is not less than or equal to \\\"%v\\\"\", msgAndArgs...)\n}\n\n// Positive asserts that the specified element is positive\n//\n//\tassert.Positive(t, 1)\n//\tassert.Positive(t, 1.23)\nfunc Positive(t TestingT, e interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tzero := reflect.Zero(reflect.TypeOf(e))\n\treturn compareTwoValues(t, e, zero.Interface(), []CompareType{compareGreater}, \"\\\"%v\\\" is not positive\", msgAndArgs...)\n}\n\n// Negative asserts 
that the specified element is negative\n//\n//\tassert.Negative(t, -1)\n//\tassert.Negative(t, -1.23)\nfunc Negative(t TestingT, e interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tzero := reflect.Zero(reflect.TypeOf(e))\n\treturn compareTwoValues(t, e, zero.Interface(), []CompareType{compareLess}, \"\\\"%v\\\" is not negative\", msgAndArgs...)\n}\n\nfunc compareTwoValues(t TestingT, e1 interface{}, e2 interface{}, allowedComparesResults []CompareType, failMessage string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\te1Kind := reflect.ValueOf(e1).Kind()\n\te2Kind := reflect.ValueOf(e2).Kind()\n\tif e1Kind != e2Kind {\n\t\treturn Fail(t, \"Elements should be the same type\", msgAndArgs...)\n\t}\n\n\tcompareResult, isComparable := compare(e1, e2, e1Kind)\n\tif !isComparable {\n\t\treturn Fail(t, fmt.Sprintf(\"Can not compare type \\\"%s\\\"\", reflect.TypeOf(e1)), msgAndArgs...)\n\t}\n\n\tif !containsValue(allowedComparesResults, compareResult) {\n\t\treturn Fail(t, fmt.Sprintf(failMessage, e1, e2), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\nfunc containsValue(values []CompareType, value CompareType) bool {\n\tfor _, v := range values {\n\t\tif v == value {\n\t\t\treturn true\n\t\t}\n\t}\n\n\treturn false\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/assertion_compare_can_convert.go",
    "content": "//go:build go1.17\n// +build go1.17\n\n// TODO: once support for Go 1.16 is dropped, this file can be\n//       merged/removed with assertion_compare_go1.17_test.go and\n//       assertion_compare_legacy.go\n\npackage assert\n\nimport \"reflect\"\n\n// Wrapper around reflect.Value.CanConvert, for compatibility\n// reasons.\nfunc canConvert(value reflect.Value, to reflect.Type) bool {\n\treturn value.CanConvert(to)\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/assertion_compare_legacy.go",
    "content": "//go:build !go1.17\n// +build !go1.17\n\n// TODO: once support for Go 1.16 is dropped, this file can be\n//       merged/removed with assertion_compare_go1.17_test.go and\n//       assertion_compare_can_convert.go\n\npackage assert\n\nimport \"reflect\"\n\n// Older versions of Go does not have the reflect.Value.CanConvert\n// method.\nfunc canConvert(value reflect.Value, to reflect.Type) bool {\n\treturn false\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/assertion_format.go",
    "content": "/*\n* CODE GENERATED AUTOMATICALLY WITH github.com/stretchr/testify/_codegen\n* THIS FILE MUST NOT BE EDITED BY HAND\n */\n\npackage assert\n\nimport (\n\thttp \"net/http\"\n\turl \"net/url\"\n\ttime \"time\"\n)\n\n// Conditionf uses a Comparison to assert a complex condition.\nfunc Conditionf(t TestingT, comp Comparison, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Condition(t, comp, append([]interface{}{msg}, args...)...)\n}\n\n// Containsf asserts that the specified string, list(array, slice...) or map contains the\n// specified substring or element.\n//\n//\tassert.Containsf(t, \"Hello World\", \"World\", \"error message %s\", \"formatted\")\n//\tassert.Containsf(t, [\"Hello\", \"World\"], \"World\", \"error message %s\", \"formatted\")\n//\tassert.Containsf(t, {\"Hello\": \"World\"}, \"Hello\", \"error message %s\", \"formatted\")\nfunc Containsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Contains(t, s, contains, append([]interface{}{msg}, args...)...)\n}\n\n// DirExistsf checks whether a directory exists in the given path. It also fails\n// if the path is a file rather a directory or there is an error checking whether it exists.\nfunc DirExistsf(t TestingT, path string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn DirExists(t, path, append([]interface{}{msg}, args...)...)\n}\n\n// ElementsMatchf asserts that the specified listA(array, slice...) is equal to specified\n// listB(array, slice...) ignoring the order of the elements. 
If there are duplicate elements,\n// the number of appearances of each of them in both lists should match.\n//\n// assert.ElementsMatchf(t, [1, 3, 2, 3], [1, 3, 3, 2], \"error message %s\", \"formatted\")\nfunc ElementsMatchf(t TestingT, listA interface{}, listB interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ElementsMatch(t, listA, listB, append([]interface{}{msg}, args...)...)\n}\n\n// Emptyf asserts that the specified object is empty.  I.e. nil, \"\", false, 0 or either\n// a slice or a channel with len == 0.\n//\n//\tassert.Emptyf(t, obj, \"error message %s\", \"formatted\")\nfunc Emptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Empty(t, object, append([]interface{}{msg}, args...)...)\n}\n\n// Equalf asserts that two objects are equal.\n//\n//\tassert.Equalf(t, 123, 123, \"error message %s\", \"formatted\")\n//\n// Pointer variable equality is determined based on the equality of the\n// referenced values (as opposed to the memory addresses). Function equality\n// cannot be determined and will always fail.\nfunc Equalf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Equal(t, expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// EqualErrorf asserts that a function returned an error (i.e. 
not `nil`)\n// and that it is equal to the provided error.\n//\n//\tactualObj, err := SomeFunction()\n//\tassert.EqualErrorf(t, err,  expectedErrorString, \"error message %s\", \"formatted\")\nfunc EqualErrorf(t TestingT, theError error, errString string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EqualError(t, theError, errString, append([]interface{}{msg}, args...)...)\n}\n\n// EqualExportedValuesf asserts that the types of two objects are equal and their public\n// fields are also equal. This is useful for comparing structs that have private fields\n// that could potentially differ.\n//\n//\t type S struct {\n//\t\tExported     \tint\n//\t\tnotExported   \tint\n//\t }\n//\t assert.EqualExportedValuesf(t, S{1, 2}, S{1, 3}, \"error message %s\", \"formatted\") => true\n//\t assert.EqualExportedValuesf(t, S{1, 2}, S{2, 3}, \"error message %s\", \"formatted\") => false\nfunc EqualExportedValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EqualExportedValues(t, expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// EqualValuesf asserts that two objects are equal or convertable to the same types\n// and equal.\n//\n//\tassert.EqualValuesf(t, uint32(123), int32(123), \"error message %s\", \"formatted\")\nfunc EqualValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EqualValues(t, expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// Errorf asserts that a function returned an error (i.e. 
not `nil`).\n//\n//\t  actualObj, err := SomeFunction()\n//\t  if assert.Errorf(t, err, \"error message %s\", \"formatted\") {\n//\t\t   assert.Equal(t, expectedErrorf, err)\n//\t  }\nfunc Errorf(t TestingT, err error, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Error(t, err, append([]interface{}{msg}, args...)...)\n}\n\n// ErrorAsf asserts that at least one of the errors in err's chain matches target, and if so, sets target to that error value.\n// This is a wrapper for errors.As.\nfunc ErrorAsf(t TestingT, err error, target interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ErrorAs(t, err, target, append([]interface{}{msg}, args...)...)\n}\n\n// ErrorContainsf asserts that a function returned an error (i.e. not `nil`)\n// and that the error contains the specified substring.\n//\n//\tactualObj, err := SomeFunction()\n//\tassert.ErrorContainsf(t, err,  expectedErrorSubString, \"error message %s\", \"formatted\")\nfunc ErrorContainsf(t TestingT, theError error, contains string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ErrorContains(t, theError, contains, append([]interface{}{msg}, args...)...)\n}\n\n// ErrorIsf asserts that at least one of the errors in err's chain matches target.\n// This is a wrapper for errors.Is.\nfunc ErrorIsf(t TestingT, err error, target error, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ErrorIs(t, err, target, append([]interface{}{msg}, args...)...)\n}\n\n// Eventuallyf asserts that given condition will be met in waitFor time,\n// periodically checking target function each tick.\n//\n//\tassert.Eventuallyf(t, func() bool { return true; }, time.Second, 10*time.Millisecond, \"error message %s\", \"formatted\")\nfunc Eventuallyf(t TestingT, condition func() bool, waitFor time.Duration, tick 
time.Duration, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Eventually(t, condition, waitFor, tick, append([]interface{}{msg}, args...)...)\n}\n\n// EventuallyWithTf asserts that given condition will be met in waitFor time,\n// periodically checking target function each tick. In contrast to Eventually,\n// it supplies a CollectT to the condition function, so that the condition\n// function can use the CollectT to call other assertions.\n// The condition is considered \"met\" if no errors are raised in a tick.\n// The supplied CollectT collects all errors from one tick (if there are any).\n// If the condition is not met before waitFor, the collected errors of\n// the last tick are copied to t.\n//\n//\texternalValue := false\n//\tgo func() {\n//\t\ttime.Sleep(8*time.Second)\n//\t\texternalValue = true\n//\t}()\n//\tassert.EventuallyWithTf(t, func(c *assert.CollectT, \"error message %s\", \"formatted\") {\n//\t\t// add assertions as needed; any assertion failure will fail the current tick\n//\t\tassert.True(c, externalValue, \"expected 'externalValue' to be true\")\n//\t}, 1*time.Second, 10*time.Second, \"external state has not changed to 'true'; still false\")\nfunc EventuallyWithTf(t TestingT, condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EventuallyWithT(t, condition, waitFor, tick, append([]interface{}{msg}, args...)...)\n}\n\n// Exactlyf asserts that two objects are equal in value and type.\n//\n//\tassert.Exactlyf(t, int32(123), int64(123), \"error message %s\", \"formatted\")\nfunc Exactlyf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Exactly(t, expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// Failf reports a failure through\nfunc Failf(t 
TestingT, failureMessage string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Fail(t, failureMessage, append([]interface{}{msg}, args...)...)\n}\n\n// FailNowf fails test\nfunc FailNowf(t TestingT, failureMessage string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn FailNow(t, failureMessage, append([]interface{}{msg}, args...)...)\n}\n\n// Falsef asserts that the specified value is false.\n//\n//\tassert.Falsef(t, myBool, \"error message %s\", \"formatted\")\nfunc Falsef(t TestingT, value bool, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn False(t, value, append([]interface{}{msg}, args...)...)\n}\n\n// FileExistsf checks whether a file exists in the given path. It also fails if\n// the path points to a directory or there is an error when trying to check the file.\nfunc FileExistsf(t TestingT, path string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn FileExists(t, path, append([]interface{}{msg}, args...)...)\n}\n\n// Greaterf asserts that the first element is greater than the second\n//\n//\tassert.Greaterf(t, 2, 1, \"error message %s\", \"formatted\")\n//\tassert.Greaterf(t, float64(2), float64(1), \"error message %s\", \"formatted\")\n//\tassert.Greaterf(t, \"b\", \"a\", \"error message %s\", \"formatted\")\nfunc Greaterf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Greater(t, e1, e2, append([]interface{}{msg}, args...)...)\n}\n\n// GreaterOrEqualf asserts that the first element is greater than or equal to the second\n//\n//\tassert.GreaterOrEqualf(t, 2, 1, \"error message %s\", \"formatted\")\n//\tassert.GreaterOrEqualf(t, 2, 2, \"error message %s\", \"formatted\")\n//\tassert.GreaterOrEqualf(t, \"b\", \"a\", \"error message 
%s\", \"formatted\")\n//\tassert.GreaterOrEqualf(t, \"b\", \"b\", \"error message %s\", \"formatted\")\nfunc GreaterOrEqualf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn GreaterOrEqual(t, e1, e2, append([]interface{}{msg}, args...)...)\n}\n\n// HTTPBodyContainsf asserts that a specified handler returns a\n// body that contains a string.\n//\n//\tassert.HTTPBodyContainsf(t, myHandler, \"GET\", \"www.google.com\", nil, \"I'm Feeling Lucky\", \"error message %s\", \"formatted\")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPBodyContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPBodyContains(t, handler, method, url, values, str, append([]interface{}{msg}, args...)...)\n}\n\n// HTTPBodyNotContainsf asserts that a specified handler returns a\n// body that does not contain a string.\n//\n//\tassert.HTTPBodyNotContainsf(t, myHandler, \"GET\", \"www.google.com\", nil, \"I'm Feeling Lucky\", \"error message %s\", \"formatted\")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPBodyNotContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPBodyNotContains(t, handler, method, url, values, str, append([]interface{}{msg}, args...)...)\n}\n\n// HTTPErrorf asserts that a specified handler returns an error status code.\n//\n//\tassert.HTTPErrorf(t, myHandler, \"POST\", \"/a/b/c\", url.Values{\"a\": []string{\"b\", \"c\"}}\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPErrorf(t TestingT, handler http.HandlerFunc, method string, url string, 
values url.Values, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPError(t, handler, method, url, values, append([]interface{}{msg}, args...)...)\n}\n\n// HTTPRedirectf asserts that a specified handler returns a redirect status code.\n//\n//\tassert.HTTPRedirectf(t, myHandler, \"GET\", \"/a/b/c\", url.Values{\"a\": []string{\"b\", \"c\"}}\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPRedirectf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPRedirect(t, handler, method, url, values, append([]interface{}{msg}, args...)...)\n}\n\n// HTTPStatusCodef asserts that a specified handler returns a specified status code.\n//\n//\tassert.HTTPStatusCodef(t, myHandler, \"GET\", \"/notImplemented\", nil, 501, \"error message %s\", \"formatted\")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPStatusCodef(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, statuscode int, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPStatusCode(t, handler, method, url, values, statuscode, append([]interface{}{msg}, args...)...)\n}\n\n// HTTPSuccessf asserts that a specified handler returns a success status code.\n//\n//\tassert.HTTPSuccessf(t, myHandler, \"POST\", \"http://www.google.com\", nil, \"error message %s\", \"formatted\")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPSuccessf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPSuccess(t, handler, method, url, values, append([]interface{}{msg}, args...)...)\n}\n\n// Implementsf asserts that an 
object is implemented by the specified interface.\n//\n//\tassert.Implementsf(t, (*MyInterface)(nil), new(MyObject), \"error message %s\", \"formatted\")\nfunc Implementsf(t TestingT, interfaceObject interface{}, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Implements(t, interfaceObject, object, append([]interface{}{msg}, args...)...)\n}\n\n// InDeltaf asserts that the two numerals are within delta of each other.\n//\n//\tassert.InDeltaf(t, math.Pi, 22/7.0, 0.01, \"error message %s\", \"formatted\")\nfunc InDeltaf(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InDelta(t, expected, actual, delta, append([]interface{}{msg}, args...)...)\n}\n\n// InDeltaMapValuesf is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys.\nfunc InDeltaMapValuesf(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InDeltaMapValues(t, expected, actual, delta, append([]interface{}{msg}, args...)...)\n}\n\n// InDeltaSlicef is the same as InDelta, except it compares two slices.\nfunc InDeltaSlicef(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InDeltaSlice(t, expected, actual, delta, append([]interface{}{msg}, args...)...)\n}\n\n// InEpsilonf asserts that expected and actual have a relative error less than epsilon\nfunc InEpsilonf(t TestingT, expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InEpsilon(t, expected, actual, epsilon, append([]interface{}{msg}, args...)...)\n}\n\n// 
InEpsilonSlicef is the same as InEpsilon, except it compares each value from two slices.\nfunc InEpsilonSlicef(t TestingT, expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InEpsilonSlice(t, expected, actual, epsilon, append([]interface{}{msg}, args...)...)\n}\n\n// IsDecreasingf asserts that the collection is decreasing\n//\n//\tassert.IsDecreasingf(t, []int{2, 1, 0}, \"error message %s\", \"formatted\")\n//\tassert.IsDecreasingf(t, []float{2, 1}, \"error message %s\", \"formatted\")\n//\tassert.IsDecreasingf(t, []string{\"b\", \"a\"}, \"error message %s\", \"formatted\")\nfunc IsDecreasingf(t TestingT, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsDecreasing(t, object, append([]interface{}{msg}, args...)...)\n}\n\n// IsIncreasingf asserts that the collection is increasing\n//\n//\tassert.IsIncreasingf(t, []int{1, 2, 3}, \"error message %s\", \"formatted\")\n//\tassert.IsIncreasingf(t, []float{1, 2}, \"error message %s\", \"formatted\")\n//\tassert.IsIncreasingf(t, []string{\"a\", \"b\"}, \"error message %s\", \"formatted\")\nfunc IsIncreasingf(t TestingT, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsIncreasing(t, object, append([]interface{}{msg}, args...)...)\n}\n\n// IsNonDecreasingf asserts that the collection is not decreasing\n//\n//\tassert.IsNonDecreasingf(t, []int{1, 1, 2}, \"error message %s\", \"formatted\")\n//\tassert.IsNonDecreasingf(t, []float{1, 2}, \"error message %s\", \"formatted\")\n//\tassert.IsNonDecreasingf(t, []string{\"a\", \"b\"}, \"error message %s\", \"formatted\")\nfunc IsNonDecreasingf(t TestingT, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsNonDecreasing(t, object, 
append([]interface{}{msg}, args...)...)\n}\n\n// IsNonIncreasingf asserts that the collection is not increasing\n//\n//\tassert.IsNonIncreasingf(t, []int{2, 1, 1}, \"error message %s\", \"formatted\")\n//\tassert.IsNonIncreasingf(t, []float{2, 1}, \"error message %s\", \"formatted\")\n//\tassert.IsNonIncreasingf(t, []string{\"b\", \"a\"}, \"error message %s\", \"formatted\")\nfunc IsNonIncreasingf(t TestingT, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsNonIncreasing(t, object, append([]interface{}{msg}, args...)...)\n}\n\n// IsTypef asserts that the specified objects are of the same type.\nfunc IsTypef(t TestingT, expectedType interface{}, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsType(t, expectedType, object, append([]interface{}{msg}, args...)...)\n}\n\n// JSONEqf asserts that two JSON strings are equivalent.\n//\n//\tassert.JSONEqf(t, `{\"hello\": \"world\", \"foo\": \"bar\"}`, `{\"foo\": \"bar\", \"hello\": \"world\"}`, \"error message %s\", \"formatted\")\nfunc JSONEqf(t TestingT, expected string, actual string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn JSONEq(t, expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// Lenf asserts that the specified object has specific length.\n// Lenf also fails if the object has a type that len() not accept.\n//\n//\tassert.Lenf(t, mySlice, 3, \"error message %s\", \"formatted\")\nfunc Lenf(t TestingT, object interface{}, length int, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Len(t, object, length, append([]interface{}{msg}, args...)...)\n}\n\n// Lessf asserts that the first element is less than the second\n//\n//\tassert.Lessf(t, 1, 2, \"error message %s\", \"formatted\")\n//\tassert.Lessf(t, float64(1), float64(2), \"error 
message %s\", \"formatted\")\n//\tassert.Lessf(t, \"a\", \"b\", \"error message %s\", \"formatted\")\nfunc Lessf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Less(t, e1, e2, append([]interface{}{msg}, args...)...)\n}\n\n// LessOrEqualf asserts that the first element is less than or equal to the second\n//\n//\tassert.LessOrEqualf(t, 1, 2, \"error message %s\", \"formatted\")\n//\tassert.LessOrEqualf(t, 2, 2, \"error message %s\", \"formatted\")\n//\tassert.LessOrEqualf(t, \"a\", \"b\", \"error message %s\", \"formatted\")\n//\tassert.LessOrEqualf(t, \"b\", \"b\", \"error message %s\", \"formatted\")\nfunc LessOrEqualf(t TestingT, e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn LessOrEqual(t, e1, e2, append([]interface{}{msg}, args...)...)\n}\n\n// Negativef asserts that the specified element is negative\n//\n//\tassert.Negativef(t, -1, \"error message %s\", \"formatted\")\n//\tassert.Negativef(t, -1.23, \"error message %s\", \"formatted\")\nfunc Negativef(t TestingT, e interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Negative(t, e, append([]interface{}{msg}, args...)...)\n}\n\n// Neverf asserts that the given condition doesn't satisfy in waitFor time,\n// periodically checking the target function each tick.\n//\n//\tassert.Neverf(t, func() bool { return false; }, time.Second, 10*time.Millisecond, \"error message %s\", \"formatted\")\nfunc Neverf(t TestingT, condition func() bool, waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Never(t, condition, waitFor, tick, append([]interface{}{msg}, args...)...)\n}\n\n// Nilf asserts that the specified object is nil.\n//\n//\tassert.Nilf(t, err, \"error message %s\", 
\"formatted\")\nfunc Nilf(t TestingT, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Nil(t, object, append([]interface{}{msg}, args...)...)\n}\n\n// NoDirExistsf checks whether a directory does not exist in the given path.\n// It fails if the path points to an existing _directory_ only.\nfunc NoDirExistsf(t TestingT, path string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NoDirExists(t, path, append([]interface{}{msg}, args...)...)\n}\n\n// NoErrorf asserts that a function returned no error (i.e. `nil`).\n//\n//\t  actualObj, err := SomeFunction()\n//\t  if assert.NoErrorf(t, err, \"error message %s\", \"formatted\") {\n//\t\t   assert.Equal(t, expectedObj, actualObj)\n//\t  }\nfunc NoErrorf(t TestingT, err error, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NoError(t, err, append([]interface{}{msg}, args...)...)\n}\n\n// NoFileExistsf checks whether a file does not exist in a given path. It fails\n// if the path points to an existing _file_ only.\nfunc NoFileExistsf(t TestingT, path string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NoFileExists(t, path, append([]interface{}{msg}, args...)...)\n}\n\n// NotContainsf asserts that the specified string, list(array, slice...) 
or map does NOT contain the\n// specified substring or element.\n//\n//\tassert.NotContainsf(t, \"Hello World\", \"Earth\", \"error message %s\", \"formatted\")\n//\tassert.NotContainsf(t, [\"Hello\", \"World\"], \"Earth\", \"error message %s\", \"formatted\")\n//\tassert.NotContainsf(t, {\"Hello\": \"World\"}, \"Earth\", \"error message %s\", \"formatted\")\nfunc NotContainsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotContains(t, s, contains, append([]interface{}{msg}, args...)...)\n}\n\n// NotEmptyf asserts that the specified object is NOT empty.  I.e. not nil, \"\", false, 0 or either\n// a slice or a channel with len == 0.\n//\n//\tif assert.NotEmptyf(t, obj, \"error message %s\", \"formatted\") {\n//\t  assert.Equal(t, \"two\", obj[1])\n//\t}\nfunc NotEmptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotEmpty(t, object, append([]interface{}{msg}, args...)...)\n}\n\n// NotEqualf asserts that the specified values are NOT equal.\n//\n//\tassert.NotEqualf(t, obj1, obj2, \"error message %s\", \"formatted\")\n//\n// Pointer variable equality is determined based on the equality of the\n// referenced values (as opposed to the memory addresses).\nfunc NotEqualf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotEqual(t, expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// NotEqualValuesf asserts that two objects are not equal even when converted to the same type\n//\n//\tassert.NotEqualValuesf(t, obj1, obj2, \"error message %s\", \"formatted\")\nfunc NotEqualValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotEqualValues(t, 
expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// NotErrorIsf asserts that at none of the errors in err's chain matches target.\n// This is a wrapper for errors.Is.\nfunc NotErrorIsf(t TestingT, err error, target error, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotErrorIs(t, err, target, append([]interface{}{msg}, args...)...)\n}\n\n// NotNilf asserts that the specified object is not nil.\n//\n//\tassert.NotNilf(t, err, \"error message %s\", \"formatted\")\nfunc NotNilf(t TestingT, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotNil(t, object, append([]interface{}{msg}, args...)...)\n}\n\n// NotPanicsf asserts that the code inside the specified PanicTestFunc does NOT panic.\n//\n//\tassert.NotPanicsf(t, func(){ RemainCalm() }, \"error message %s\", \"formatted\")\nfunc NotPanicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotPanics(t, f, append([]interface{}{msg}, args...)...)\n}\n\n// NotRegexpf asserts that a specified regexp does not match a string.\n//\n//\tassert.NotRegexpf(t, regexp.MustCompile(\"starts\"), \"it's starting\", \"error message %s\", \"formatted\")\n//\tassert.NotRegexpf(t, \"^start\", \"it's not starting\", \"error message %s\", \"formatted\")\nfunc NotRegexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotRegexp(t, rx, str, append([]interface{}{msg}, args...)...)\n}\n\n// NotSamef asserts that two pointers do not reference the same object.\n//\n//\tassert.NotSamef(t, ptr1, ptr2, \"error message %s\", \"formatted\")\n//\n// Both arguments must be pointer variables. 
Pointer variable sameness is\n// determined based on the equality of both type and value.\nfunc NotSamef(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotSame(t, expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// NotSubsetf asserts that the specified list(array, slice...) contains not all\n// elements given in the specified subset(array, slice...).\n//\n//\tassert.NotSubsetf(t, [1, 3, 4], [1, 2], \"But [1, 3, 4] does not contain [1, 2]\", \"error message %s\", \"formatted\")\nfunc NotSubsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotSubset(t, list, subset, append([]interface{}{msg}, args...)...)\n}\n\n// NotZerof asserts that i is not the zero value for its type.\nfunc NotZerof(t TestingT, i interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotZero(t, i, append([]interface{}{msg}, args...)...)\n}\n\n// Panicsf asserts that the code inside the specified PanicTestFunc panics.\n//\n//\tassert.Panicsf(t, func(){ GoCrazy() }, \"error message %s\", \"formatted\")\nfunc Panicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Panics(t, f, append([]interface{}{msg}, args...)...)\n}\n\n// PanicsWithErrorf asserts that the code inside the specified PanicTestFunc\n// panics, and that the recovered panic value is an error that satisfies the\n// EqualError comparison.\n//\n//\tassert.PanicsWithErrorf(t, \"crazy error\", func(){ GoCrazy() }, \"error message %s\", \"formatted\")\nfunc PanicsWithErrorf(t TestingT, errString string, f PanicTestFunc, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn PanicsWithError(t, errString, f, 
append([]interface{}{msg}, args...)...)\n}\n\n// PanicsWithValuef asserts that the code inside the specified PanicTestFunc panics, and that\n// the recovered panic value equals the expected panic value.\n//\n//\tassert.PanicsWithValuef(t, \"crazy error\", func(){ GoCrazy() }, \"error message %s\", \"formatted\")\nfunc PanicsWithValuef(t TestingT, expected interface{}, f PanicTestFunc, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn PanicsWithValue(t, expected, f, append([]interface{}{msg}, args...)...)\n}\n\n// Positivef asserts that the specified element is positive\n//\n//\tassert.Positivef(t, 1, \"error message %s\", \"formatted\")\n//\tassert.Positivef(t, 1.23, \"error message %s\", \"formatted\")\nfunc Positivef(t TestingT, e interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Positive(t, e, append([]interface{}{msg}, args...)...)\n}\n\n// Regexpf asserts that a specified regexp matches a string.\n//\n//\tassert.Regexpf(t, regexp.MustCompile(\"start\"), \"it's starting\", \"error message %s\", \"formatted\")\n//\tassert.Regexpf(t, \"start...$\", \"it's not starting\", \"error message %s\", \"formatted\")\nfunc Regexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Regexp(t, rx, str, append([]interface{}{msg}, args...)...)\n}\n\n// Samef asserts that two pointers reference the same object.\n//\n//\tassert.Samef(t, ptr1, ptr2, \"error message %s\", \"formatted\")\n//\n// Both arguments must be pointer variables. 
Pointer variable sameness is\n// determined based on the equality of both type and value.\nfunc Samef(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Same(t, expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// Subsetf asserts that the specified list(array, slice...) contains all\n// elements given in the specified subset(array, slice...).\n//\n//\tassert.Subsetf(t, [1, 2, 3], [1, 2], \"But [1, 2, 3] does contain [1, 2]\", \"error message %s\", \"formatted\")\nfunc Subsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Subset(t, list, subset, append([]interface{}{msg}, args...)...)\n}\n\n// Truef asserts that the specified value is true.\n//\n//\tassert.Truef(t, myBool, \"error message %s\", \"formatted\")\nfunc Truef(t TestingT, value bool, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn True(t, value, append([]interface{}{msg}, args...)...)\n}\n\n// WithinDurationf asserts that the two times are within duration delta of each other.\n//\n//\tassert.WithinDurationf(t, time.Now(), time.Now(), 10*time.Second, \"error message %s\", \"formatted\")\nfunc WithinDurationf(t TestingT, expected time.Time, actual time.Time, delta time.Duration, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn WithinDuration(t, expected, actual, delta, append([]interface{}{msg}, args...)...)\n}\n\n// WithinRangef asserts that a time is within a time range (inclusive).\n//\n//\tassert.WithinRangef(t, time.Now(), time.Now().Add(-time.Second), time.Now().Add(time.Second), \"error message %s\", \"formatted\")\nfunc WithinRangef(t TestingT, actual time.Time, start time.Time, end time.Time, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok 
{\n\t\th.Helper()\n\t}\n\treturn WithinRange(t, actual, start, end, append([]interface{}{msg}, args...)...)\n}\n\n// YAMLEqf asserts that two YAML strings are equivalent.\nfunc YAMLEqf(t TestingT, expected string, actual string, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn YAMLEq(t, expected, actual, append([]interface{}{msg}, args...)...)\n}\n\n// Zerof asserts that i is the zero value for its type.\nfunc Zerof(t TestingT, i interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Zero(t, i, append([]interface{}{msg}, args...)...)\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/assertion_format.go.tmpl",
    "content": "{{.CommentFormat}}\nfunc {{.DocInfo.Name}}f(t TestingT, {{.ParamsFormat}}) bool {\n\tif h, ok := t.(tHelper); ok { h.Helper() }\n\treturn {{.DocInfo.Name}}(t, {{.ForwardedParamsFormat}})\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/assertion_forward.go",
    "content": "/*\n* CODE GENERATED AUTOMATICALLY WITH github.com/stretchr/testify/_codegen\n* THIS FILE MUST NOT BE EDITED BY HAND\n */\n\npackage assert\n\nimport (\n\thttp \"net/http\"\n\turl \"net/url\"\n\ttime \"time\"\n)\n\n// Condition uses a Comparison to assert a complex condition.\nfunc (a *Assertions) Condition(comp Comparison, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Condition(a.t, comp, msgAndArgs...)\n}\n\n// Conditionf uses a Comparison to assert a complex condition.\nfunc (a *Assertions) Conditionf(comp Comparison, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Conditionf(a.t, comp, msg, args...)\n}\n\n// Contains asserts that the specified string, list(array, slice...) or map contains the\n// specified substring or element.\n//\n//\ta.Contains(\"Hello World\", \"World\")\n//\ta.Contains([\"Hello\", \"World\"], \"World\")\n//\ta.Contains({\"Hello\": \"World\"}, \"Hello\")\nfunc (a *Assertions) Contains(s interface{}, contains interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Contains(a.t, s, contains, msgAndArgs...)\n}\n\n// Containsf asserts that the specified string, list(array, slice...) or map contains the\n// specified substring or element.\n//\n//\ta.Containsf(\"Hello World\", \"World\", \"error message %s\", \"formatted\")\n//\ta.Containsf([\"Hello\", \"World\"], \"World\", \"error message %s\", \"formatted\")\n//\ta.Containsf({\"Hello\": \"World\"}, \"Hello\", \"error message %s\", \"formatted\")\nfunc (a *Assertions) Containsf(s interface{}, contains interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Containsf(a.t, s, contains, msg, args...)\n}\n\n// DirExists checks whether a directory exists in the given path. 
It also fails\n// if the path is a file rather a directory or there is an error checking whether it exists.\nfunc (a *Assertions) DirExists(path string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn DirExists(a.t, path, msgAndArgs...)\n}\n\n// DirExistsf checks whether a directory exists in the given path. It also fails\n// if the path is a file rather a directory or there is an error checking whether it exists.\nfunc (a *Assertions) DirExistsf(path string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn DirExistsf(a.t, path, msg, args...)\n}\n\n// ElementsMatch asserts that the specified listA(array, slice...) is equal to specified\n// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements,\n// the number of appearances of each of them in both lists should match.\n//\n// a.ElementsMatch([1, 3, 2, 3], [1, 3, 3, 2])\nfunc (a *Assertions) ElementsMatch(listA interface{}, listB interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ElementsMatch(a.t, listA, listB, msgAndArgs...)\n}\n\n// ElementsMatchf asserts that the specified listA(array, slice...) is equal to specified\n// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements,\n// the number of appearances of each of them in both lists should match.\n//\n// a.ElementsMatchf([1, 3, 2, 3], [1, 3, 3, 2], \"error message %s\", \"formatted\")\nfunc (a *Assertions) ElementsMatchf(listA interface{}, listB interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ElementsMatchf(a.t, listA, listB, msg, args...)\n}\n\n// Empty asserts that the specified object is empty.  I.e. 
nil, \"\", false, 0 or either\n// a slice or a channel with len == 0.\n//\n//\ta.Empty(obj)\nfunc (a *Assertions) Empty(object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Empty(a.t, object, msgAndArgs...)\n}\n\n// Emptyf asserts that the specified object is empty.  I.e. nil, \"\", false, 0 or either\n// a slice or a channel with len == 0.\n//\n//\ta.Emptyf(obj, \"error message %s\", \"formatted\")\nfunc (a *Assertions) Emptyf(object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Emptyf(a.t, object, msg, args...)\n}\n\n// Equal asserts that two objects are equal.\n//\n//\ta.Equal(123, 123)\n//\n// Pointer variable equality is determined based on the equality of the\n// referenced values (as opposed to the memory addresses). Function equality\n// cannot be determined and will always fail.\nfunc (a *Assertions) Equal(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Equal(a.t, expected, actual, msgAndArgs...)\n}\n\n// EqualError asserts that a function returned an error (i.e. not `nil`)\n// and that it is equal to the provided error.\n//\n//\tactualObj, err := SomeFunction()\n//\ta.EqualError(err,  expectedErrorString)\nfunc (a *Assertions) EqualError(theError error, errString string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EqualError(a.t, theError, errString, msgAndArgs...)\n}\n\n// EqualErrorf asserts that a function returned an error (i.e. 
not `nil`)\n// and that it is equal to the provided error.\n//\n//\tactualObj, err := SomeFunction()\n//\ta.EqualErrorf(err,  expectedErrorString, \"error message %s\", \"formatted\")\nfunc (a *Assertions) EqualErrorf(theError error, errString string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EqualErrorf(a.t, theError, errString, msg, args...)\n}\n\n// EqualExportedValues asserts that the types of two objects are equal and their public\n// fields are also equal. This is useful for comparing structs that have private fields\n// that could potentially differ.\n//\n//\t type S struct {\n//\t\tExported     \tint\n//\t\tnotExported   \tint\n//\t }\n//\t a.EqualExportedValues(S{1, 2}, S{1, 3}) => true\n//\t a.EqualExportedValues(S{1, 2}, S{2, 3}) => false\nfunc (a *Assertions) EqualExportedValues(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EqualExportedValues(a.t, expected, actual, msgAndArgs...)\n}\n\n// EqualExportedValuesf asserts that the types of two objects are equal and their public\n// fields are also equal. 
This is useful for comparing structs that have private fields\n// that could potentially differ.\n//\n//\t type S struct {\n//\t\tExported     \tint\n//\t\tnotExported   \tint\n//\t }\n//\t a.EqualExportedValuesf(S{1, 2}, S{1, 3}, \"error message %s\", \"formatted\") => true\n//\t a.EqualExportedValuesf(S{1, 2}, S{2, 3}, \"error message %s\", \"formatted\") => false\nfunc (a *Assertions) EqualExportedValuesf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EqualExportedValuesf(a.t, expected, actual, msg, args...)\n}\n\n// EqualValues asserts that two objects are equal or convertable to the same types\n// and equal.\n//\n//\ta.EqualValues(uint32(123), int32(123))\nfunc (a *Assertions) EqualValues(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EqualValues(a.t, expected, actual, msgAndArgs...)\n}\n\n// EqualValuesf asserts that two objects are equal or convertable to the same types\n// and equal.\n//\n//\ta.EqualValuesf(uint32(123), int32(123), \"error message %s\", \"formatted\")\nfunc (a *Assertions) EqualValuesf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EqualValuesf(a.t, expected, actual, msg, args...)\n}\n\n// Equalf asserts that two objects are equal.\n//\n//\ta.Equalf(123, 123, \"error message %s\", \"formatted\")\n//\n// Pointer variable equality is determined based on the equality of the\n// referenced values (as opposed to the memory addresses). 
Function equality\n// cannot be determined and will always fail.\nfunc (a *Assertions) Equalf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Equalf(a.t, expected, actual, msg, args...)\n}\n\n// Error asserts that a function returned an error (i.e. not `nil`).\n//\n//\t  actualObj, err := SomeFunction()\n//\t  if a.Error(err) {\n//\t\t   assert.Equal(t, expectedError, err)\n//\t  }\nfunc (a *Assertions) Error(err error, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Error(a.t, err, msgAndArgs...)\n}\n\n// ErrorAs asserts that at least one of the errors in err's chain matches target, and if so, sets target to that error value.\n// This is a wrapper for errors.As.\nfunc (a *Assertions) ErrorAs(err error, target interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ErrorAs(a.t, err, target, msgAndArgs...)\n}\n\n// ErrorAsf asserts that at least one of the errors in err's chain matches target, and if so, sets target to that error value.\n// This is a wrapper for errors.As.\nfunc (a *Assertions) ErrorAsf(err error, target interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ErrorAsf(a.t, err, target, msg, args...)\n}\n\n// ErrorContains asserts that a function returned an error (i.e. not `nil`)\n// and that the error contains the specified substring.\n//\n//\tactualObj, err := SomeFunction()\n//\ta.ErrorContains(err,  expectedErrorSubString)\nfunc (a *Assertions) ErrorContains(theError error, contains string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ErrorContains(a.t, theError, contains, msgAndArgs...)\n}\n\n// ErrorContainsf asserts that a function returned an error (i.e. 
not `nil`)\n// and that the error contains the specified substring.\n//\n//\tactualObj, err := SomeFunction()\n//\ta.ErrorContainsf(err,  expectedErrorSubString, \"error message %s\", \"formatted\")\nfunc (a *Assertions) ErrorContainsf(theError error, contains string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ErrorContainsf(a.t, theError, contains, msg, args...)\n}\n\n// ErrorIs asserts that at least one of the errors in err's chain matches target.\n// This is a wrapper for errors.Is.\nfunc (a *Assertions) ErrorIs(err error, target error, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ErrorIs(a.t, err, target, msgAndArgs...)\n}\n\n// ErrorIsf asserts that at least one of the errors in err's chain matches target.\n// This is a wrapper for errors.Is.\nfunc (a *Assertions) ErrorIsf(err error, target error, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn ErrorIsf(a.t, err, target, msg, args...)\n}\n\n// Errorf asserts that a function returned an error (i.e. 
not `nil`).\n//\n//\t  actualObj, err := SomeFunction()\n//\t  if a.Errorf(err, \"error message %s\", \"formatted\") {\n//\t\t   assert.Equal(t, expectedErrorf, err)\n//\t  }\nfunc (a *Assertions) Errorf(err error, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Errorf(a.t, err, msg, args...)\n}\n\n// Eventually asserts that given condition will be met in waitFor time,\n// periodically checking target function each tick.\n//\n//\ta.Eventually(func() bool { return true; }, time.Second, 10*time.Millisecond)\nfunc (a *Assertions) Eventually(condition func() bool, waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Eventually(a.t, condition, waitFor, tick, msgAndArgs...)\n}\n\n// EventuallyWithT asserts that given condition will be met in waitFor time,\n// periodically checking target function each tick. In contrast to Eventually,\n// it supplies a CollectT to the condition function, so that the condition\n// function can use the CollectT to call other assertions.\n// The condition is considered \"met\" if no errors are raised in a tick.\n// The supplied CollectT collects all errors from one tick (if there are any).\n// If the condition is not met before waitFor, the collected errors of\n// the last tick are copied to t.\n//\n//\texternalValue := false\n//\tgo func() {\n//\t\ttime.Sleep(8*time.Second)\n//\t\texternalValue = true\n//\t}()\n//\ta.EventuallyWithT(func(c *assert.CollectT) {\n//\t\t// add assertions as needed; any assertion failure will fail the current tick\n//\t\tassert.True(c, externalValue, \"expected 'externalValue' to be true\")\n//\t}, 1*time.Second, 10*time.Second, \"external state has not changed to 'true'; still false\")\nfunc (a *Assertions) EventuallyWithT(condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok 
{\n\t\th.Helper()\n\t}\n\treturn EventuallyWithT(a.t, condition, waitFor, tick, msgAndArgs...)\n}\n\n// EventuallyWithTf asserts that given condition will be met in waitFor time,\n// periodically checking target function each tick. In contrast to Eventually,\n// it supplies a CollectT to the condition function, so that the condition\n// function can use the CollectT to call other assertions.\n// The condition is considered \"met\" if no errors are raised in a tick.\n// The supplied CollectT collects all errors from one tick (if there are any).\n// If the condition is not met before waitFor, the collected errors of\n// the last tick are copied to t.\n//\n//\texternalValue := false\n//\tgo func() {\n//\t\ttime.Sleep(8*time.Second)\n//\t\texternalValue = true\n//\t}()\n//\ta.EventuallyWithTf(func(c *assert.CollectT, \"error message %s\", \"formatted\") {\n//\t\t// add assertions as needed; any assertion failure will fail the current tick\n//\t\tassert.True(c, externalValue, \"expected 'externalValue' to be true\")\n//\t}, 1*time.Second, 10*time.Second, \"external state has not changed to 'true'; still false\")\nfunc (a *Assertions) EventuallyWithTf(condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn EventuallyWithTf(a.t, condition, waitFor, tick, msg, args...)\n}\n\n// Eventuallyf asserts that given condition will be met in waitFor time,\n// periodically checking target function each tick.\n//\n//\ta.Eventuallyf(func() bool { return true; }, time.Second, 10*time.Millisecond, \"error message %s\", \"formatted\")\nfunc (a *Assertions) Eventuallyf(condition func() bool, waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Eventuallyf(a.t, condition, waitFor, tick, msg, args...)\n}\n\n// Exactly asserts that two objects are equal in value and 
type.\n//\n//\ta.Exactly(int32(123), int64(123))\nfunc (a *Assertions) Exactly(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Exactly(a.t, expected, actual, msgAndArgs...)\n}\n\n// Exactlyf asserts that two objects are equal in value and type.\n//\n//\ta.Exactlyf(int32(123), int64(123), \"error message %s\", \"formatted\")\nfunc (a *Assertions) Exactlyf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Exactlyf(a.t, expected, actual, msg, args...)\n}\n\n// Fail reports a failure through\nfunc (a *Assertions) Fail(failureMessage string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Fail(a.t, failureMessage, msgAndArgs...)\n}\n\n// FailNow fails test\nfunc (a *Assertions) FailNow(failureMessage string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn FailNow(a.t, failureMessage, msgAndArgs...)\n}\n\n// FailNowf fails test\nfunc (a *Assertions) FailNowf(failureMessage string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn FailNowf(a.t, failureMessage, msg, args...)\n}\n\n// Failf reports a failure through\nfunc (a *Assertions) Failf(failureMessage string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Failf(a.t, failureMessage, msg, args...)\n}\n\n// False asserts that the specified value is false.\n//\n//\ta.False(myBool)\nfunc (a *Assertions) False(value bool, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn False(a.t, value, msgAndArgs...)\n}\n\n// Falsef asserts that the specified value is false.\n//\n//\ta.Falsef(myBool, \"error message %s\", \"formatted\")\nfunc (a *Assertions) Falsef(value bool, msg 
string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Falsef(a.t, value, msg, args...)\n}\n\n// FileExists checks whether a file exists in the given path. It also fails if\n// the path points to a directory or there is an error when trying to check the file.\nfunc (a *Assertions) FileExists(path string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn FileExists(a.t, path, msgAndArgs...)\n}\n\n// FileExistsf checks whether a file exists in the given path. It also fails if\n// the path points to a directory or there is an error when trying to check the file.\nfunc (a *Assertions) FileExistsf(path string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn FileExistsf(a.t, path, msg, args...)\n}\n\n// Greater asserts that the first element is greater than the second\n//\n//\ta.Greater(2, 1)\n//\ta.Greater(float64(2), float64(1))\n//\ta.Greater(\"b\", \"a\")\nfunc (a *Assertions) Greater(e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Greater(a.t, e1, e2, msgAndArgs...)\n}\n\n// GreaterOrEqual asserts that the first element is greater than or equal to the second\n//\n//\ta.GreaterOrEqual(2, 1)\n//\ta.GreaterOrEqual(2, 2)\n//\ta.GreaterOrEqual(\"b\", \"a\")\n//\ta.GreaterOrEqual(\"b\", \"b\")\nfunc (a *Assertions) GreaterOrEqual(e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn GreaterOrEqual(a.t, e1, e2, msgAndArgs...)\n}\n\n// GreaterOrEqualf asserts that the first element is greater than or equal to the second\n//\n//\ta.GreaterOrEqualf(2, 1, \"error message %s\", \"formatted\")\n//\ta.GreaterOrEqualf(2, 2, \"error message %s\", \"formatted\")\n//\ta.GreaterOrEqualf(\"b\", \"a\", \"error message %s\", \"formatted\")\n//\ta.GreaterOrEqualf(\"b\", \"b\", 
\"error message %s\", \"formatted\")\nfunc (a *Assertions) GreaterOrEqualf(e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn GreaterOrEqualf(a.t, e1, e2, msg, args...)\n}\n\n// Greaterf asserts that the first element is greater than the second\n//\n//\ta.Greaterf(2, 1, \"error message %s\", \"formatted\")\n//\ta.Greaterf(float64(2), float64(1), \"error message %s\", \"formatted\")\n//\ta.Greaterf(\"b\", \"a\", \"error message %s\", \"formatted\")\nfunc (a *Assertions) Greaterf(e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Greaterf(a.t, e1, e2, msg, args...)\n}\n\n// HTTPBodyContains asserts that a specified handler returns a\n// body that contains a string.\n//\n//\ta.HTTPBodyContains(myHandler, \"GET\", \"www.google.com\", nil, \"I'm Feeling Lucky\")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPBodyContains(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPBodyContains(a.t, handler, method, url, values, str, msgAndArgs...)\n}\n\n// HTTPBodyContainsf asserts that a specified handler returns a\n// body that contains a string.\n//\n//\ta.HTTPBodyContainsf(myHandler, \"GET\", \"www.google.com\", nil, \"I'm Feeling Lucky\", \"error message %s\", \"formatted\")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPBodyContainsf(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPBodyContainsf(a.t, handler, method, url, values, str, msg, args...)\n}\n\n// HTTPBodyNotContains asserts that a specified handler 
returns a\n// body that does not contain a string.\n//\n//\ta.HTTPBodyNotContains(myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPBodyNotContains(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPBodyNotContains(a.t, handler, method, url, values, str, msgAndArgs...)\n}\n\n// HTTPBodyNotContainsf asserts that a specified handler returns a\n// body that does not contain a string.\n//\n//\ta.HTTPBodyNotContainsf(myHandler, "GET", "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPBodyNotContainsf(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPBodyNotContainsf(a.t, handler, method, url, values, str, msg, args...)\n}\n\n// HTTPError asserts that a specified handler returns an error status code.\n//\n//\ta.HTTPError(myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}})\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPError(handler http.HandlerFunc, method string, url string, values url.Values, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPError(a.t, handler, method, url, values, msgAndArgs...)\n}\n\n// HTTPErrorf asserts that a specified handler returns an error status code.\n//\n//\ta.HTTPErrorf(myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}})\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPErrorf(handler 
http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPErrorf(a.t, handler, method, url, values, msg, args...)\n}\n\n// HTTPRedirect asserts that a specified handler returns a redirect status code.\n//\n//\ta.HTTPRedirect(myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}})\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPRedirect(handler http.HandlerFunc, method string, url string, values url.Values, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPRedirect(a.t, handler, method, url, values, msgAndArgs...)\n}\n\n// HTTPRedirectf asserts that a specified handler returns a redirect status code.\n//\n//\ta.HTTPRedirectf(myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}})\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPRedirectf(handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPRedirectf(a.t, handler, method, url, values, msg, args...)\n}\n\n// HTTPStatusCode asserts that a specified handler returns a specified status code.\n//\n//\ta.HTTPStatusCode(myHandler, "GET", "/notImplemented", nil, 501)\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPStatusCode(handler http.HandlerFunc, method string, url string, values url.Values, statuscode int, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPStatusCode(a.t, handler, method, url, values, statuscode, msgAndArgs...)\n}\n\n// HTTPStatusCodef asserts that a specified handler returns a specified status code.\n//\n//\ta.HTTPStatusCodef(myHandler, "GET", 
"/notImplemented", nil, 501, "error message %s", "formatted")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPStatusCodef(handler http.HandlerFunc, method string, url string, values url.Values, statuscode int, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPStatusCodef(a.t, handler, method, url, values, statuscode, msg, args...)\n}\n\n// HTTPSuccess asserts that a specified handler returns a success status code.\n//\n//\ta.HTTPSuccess(myHandler, "POST", "http://www.google.com", nil)\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPSuccess(handler http.HandlerFunc, method string, url string, values url.Values, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPSuccess(a.t, handler, method, url, values, msgAndArgs...)\n}\n\n// HTTPSuccessf asserts that a specified handler returns a success status code.\n//\n//\ta.HTTPSuccessf(myHandler, "POST", "http://www.google.com", nil, "error message %s", "formatted")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc (a *Assertions) HTTPSuccessf(handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn HTTPSuccessf(a.t, handler, method, url, values, msg, args...)\n}\n\n// Implements asserts that an object implements the specified interface.\n//\n//\ta.Implements((*MyInterface)(nil), new(MyObject))\nfunc (a *Assertions) Implements(interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Implements(a.t, interfaceObject, object, msgAndArgs...)\n}\n\n// Implementsf asserts that an object implements the specified 
interface.\n//\n//\ta.Implementsf((*MyInterface)(nil), new(MyObject), \"error message %s\", \"formatted\")\nfunc (a *Assertions) Implementsf(interfaceObject interface{}, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Implementsf(a.t, interfaceObject, object, msg, args...)\n}\n\n// InDelta asserts that the two numerals are within delta of each other.\n//\n//\ta.InDelta(math.Pi, 22/7.0, 0.01)\nfunc (a *Assertions) InDelta(expected interface{}, actual interface{}, delta float64, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InDelta(a.t, expected, actual, delta, msgAndArgs...)\n}\n\n// InDeltaMapValues is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys.\nfunc (a *Assertions) InDeltaMapValues(expected interface{}, actual interface{}, delta float64, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InDeltaMapValues(a.t, expected, actual, delta, msgAndArgs...)\n}\n\n// InDeltaMapValuesf is the same as InDelta, but it compares all values between two maps. 
Both maps must have exactly the same keys.\nfunc (a *Assertions) InDeltaMapValuesf(expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InDeltaMapValuesf(a.t, expected, actual, delta, msg, args...)\n}\n\n// InDeltaSlice is the same as InDelta, except it compares two slices.\nfunc (a *Assertions) InDeltaSlice(expected interface{}, actual interface{}, delta float64, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InDeltaSlice(a.t, expected, actual, delta, msgAndArgs...)\n}\n\n// InDeltaSlicef is the same as InDelta, except it compares two slices.\nfunc (a *Assertions) InDeltaSlicef(expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InDeltaSlicef(a.t, expected, actual, delta, msg, args...)\n}\n\n// InDeltaf asserts that the two numerals are within delta of each other.\n//\n//\ta.InDeltaf(math.Pi, 22/7.0, 0.01, \"error message %s\", \"formatted\")\nfunc (a *Assertions) InDeltaf(expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InDeltaf(a.t, expected, actual, delta, msg, args...)\n}\n\n// InEpsilon asserts that expected and actual have a relative error less than epsilon\nfunc (a *Assertions) InEpsilon(expected interface{}, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InEpsilon(a.t, expected, actual, epsilon, msgAndArgs...)\n}\n\n// InEpsilonSlice is the same as InEpsilon, except it compares each value from two slices.\nfunc (a *Assertions) InEpsilonSlice(expected interface{}, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok 
{\n\t\th.Helper()\n\t}\n\treturn InEpsilonSlice(a.t, expected, actual, epsilon, msgAndArgs...)\n}\n\n// InEpsilonSlicef is the same as InEpsilon, except it compares each value from two slices.\nfunc (a *Assertions) InEpsilonSlicef(expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InEpsilonSlicef(a.t, expected, actual, epsilon, msg, args...)\n}\n\n// InEpsilonf asserts that expected and actual have a relative error less than epsilon\nfunc (a *Assertions) InEpsilonf(expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn InEpsilonf(a.t, expected, actual, epsilon, msg, args...)\n}\n\n// IsDecreasing asserts that the collection is decreasing\n//\n//\ta.IsDecreasing([]int{2, 1, 0})\n//\ta.IsDecreasing([]float64{2, 1})\n//\ta.IsDecreasing([]string{"b", "a"})\nfunc (a *Assertions) IsDecreasing(object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsDecreasing(a.t, object, msgAndArgs...)\n}\n\n// IsDecreasingf asserts that the collection is decreasing\n//\n//\ta.IsDecreasingf([]int{2, 1, 0}, "error message %s", "formatted")\n//\ta.IsDecreasingf([]float64{2, 1}, "error message %s", "formatted")\n//\ta.IsDecreasingf([]string{"b", "a"}, "error message %s", "formatted")\nfunc (a *Assertions) IsDecreasingf(object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsDecreasingf(a.t, object, msg, args...)\n}\n\n// IsIncreasing asserts that the collection is increasing\n//\n//\ta.IsIncreasing([]int{1, 2, 3})\n//\ta.IsIncreasing([]float64{1, 2})\n//\ta.IsIncreasing([]string{"a", "b"})\nfunc (a *Assertions) IsIncreasing(object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok 
{\n\t\th.Helper()\n\t}\n\treturn IsIncreasing(a.t, object, msgAndArgs...)\n}\n\n// IsIncreasingf asserts that the collection is increasing\n//\n//\ta.IsIncreasingf([]int{1, 2, 3}, "error message %s", "formatted")\n//\ta.IsIncreasingf([]float64{1, 2}, "error message %s", "formatted")\n//\ta.IsIncreasingf([]string{"a", "b"}, "error message %s", "formatted")\nfunc (a *Assertions) IsIncreasingf(object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsIncreasingf(a.t, object, msg, args...)\n}\n\n// IsNonDecreasing asserts that the collection is not decreasing\n//\n//\ta.IsNonDecreasing([]int{1, 1, 2})\n//\ta.IsNonDecreasing([]float64{1, 2})\n//\ta.IsNonDecreasing([]string{"a", "b"})\nfunc (a *Assertions) IsNonDecreasing(object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsNonDecreasing(a.t, object, msgAndArgs...)\n}\n\n// IsNonDecreasingf asserts that the collection is not decreasing\n//\n//\ta.IsNonDecreasingf([]int{1, 1, 2}, "error message %s", "formatted")\n//\ta.IsNonDecreasingf([]float64{1, 2}, "error message %s", "formatted")\n//\ta.IsNonDecreasingf([]string{"a", "b"}, "error message %s", "formatted")\nfunc (a *Assertions) IsNonDecreasingf(object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsNonDecreasingf(a.t, object, msg, args...)\n}\n\n// IsNonIncreasing asserts that the collection is not increasing\n//\n//\ta.IsNonIncreasing([]int{2, 1, 1})\n//\ta.IsNonIncreasing([]float64{2, 1})\n//\ta.IsNonIncreasing([]string{"b", "a"})\nfunc (a *Assertions) IsNonIncreasing(object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsNonIncreasing(a.t, object, msgAndArgs...)\n}\n\n// IsNonIncreasingf asserts that the collection is not 
increasing\n//\n//\ta.IsNonIncreasingf([]int{2, 1, 1}, "error message %s", "formatted")\n//\ta.IsNonIncreasingf([]float64{2, 1}, "error message %s", "formatted")\n//\ta.IsNonIncreasingf([]string{"b", "a"}, "error message %s", "formatted")\nfunc (a *Assertions) IsNonIncreasingf(object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsNonIncreasingf(a.t, object, msg, args...)\n}\n\n// IsType asserts that the specified objects are of the same type.\nfunc (a *Assertions) IsType(expectedType interface{}, object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsType(a.t, expectedType, object, msgAndArgs...)\n}\n\n// IsTypef asserts that the specified objects are of the same type.\nfunc (a *Assertions) IsTypef(expectedType interface{}, object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn IsTypef(a.t, expectedType, object, msg, args...)\n}\n\n// JSONEq asserts that two JSON strings are equivalent.\n//\n//\ta.JSONEq(`{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`)\nfunc (a *Assertions) JSONEq(expected string, actual string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn JSONEq(a.t, expected, actual, msgAndArgs...)\n}\n\n// JSONEqf asserts that two JSON strings are equivalent.\n//\n//\ta.JSONEqf(`{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`, "error message %s", "formatted")\nfunc (a *Assertions) JSONEqf(expected string, actual string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn JSONEqf(a.t, expected, actual, msg, args...)\n}\n\n// Len asserts that the specified object has the specified length.\n// Len also fails if the object has a type that len() does not 
accept.\n//\n//\ta.Len(mySlice, 3)\nfunc (a *Assertions) Len(object interface{}, length int, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Len(a.t, object, length, msgAndArgs...)\n}\n\n// Lenf asserts that the specified object has the specified length.\n// Lenf also fails if the object has a type that len() does not accept.\n//\n//\ta.Lenf(mySlice, 3, "error message %s", "formatted")\nfunc (a *Assertions) Lenf(object interface{}, length int, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Lenf(a.t, object, length, msg, args...)\n}\n\n// Less asserts that the first element is less than the second\n//\n//\ta.Less(1, 2)\n//\ta.Less(float64(1), float64(2))\n//\ta.Less("a", "b")\nfunc (a *Assertions) Less(e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Less(a.t, e1, e2, msgAndArgs...)\n}\n\n// LessOrEqual asserts that the first element is less than or equal to the second\n//\n//\ta.LessOrEqual(1, 2)\n//\ta.LessOrEqual(2, 2)\n//\ta.LessOrEqual("a", "b")\n//\ta.LessOrEqual("b", "b")\nfunc (a *Assertions) LessOrEqual(e1 interface{}, e2 interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn LessOrEqual(a.t, e1, e2, msgAndArgs...)\n}\n\n// LessOrEqualf asserts that the first element is less than or equal to the second\n//\n//\ta.LessOrEqualf(1, 2, "error message %s", "formatted")\n//\ta.LessOrEqualf(2, 2, "error message %s", "formatted")\n//\ta.LessOrEqualf("a", "b", "error message %s", "formatted")\n//\ta.LessOrEqualf("b", "b", "error message %s", "formatted")\nfunc (a *Assertions) LessOrEqualf(e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn LessOrEqualf(a.t, e1, e2, msg, args...)\n}\n\n// Lessf asserts 
that the first element is less than the second\n//\n//\ta.Lessf(1, 2, "error message %s", "formatted")\n//\ta.Lessf(float64(1), float64(2), "error message %s", "formatted")\n//\ta.Lessf("a", "b", "error message %s", "formatted")\nfunc (a *Assertions) Lessf(e1 interface{}, e2 interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Lessf(a.t, e1, e2, msg, args...)\n}\n\n// Negative asserts that the specified element is negative\n//\n//\ta.Negative(-1)\n//\ta.Negative(-1.23)\nfunc (a *Assertions) Negative(e interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Negative(a.t, e, msgAndArgs...)\n}\n\n// Negativef asserts that the specified element is negative\n//\n//\ta.Negativef(-1, "error message %s", "formatted")\n//\ta.Negativef(-1.23, "error message %s", "formatted")\nfunc (a *Assertions) Negativef(e interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Negativef(a.t, e, msg, args...)\n}\n\n// Never asserts that the given condition is not satisfied within waitFor time,\n// periodically checking the target function each tick.\n//\n//\ta.Never(func() bool { return false; }, time.Second, 10*time.Millisecond)\nfunc (a *Assertions) Never(condition func() bool, waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Never(a.t, condition, waitFor, tick, msgAndArgs...)\n}\n\n// Neverf asserts that the given condition is not satisfied within waitFor time,\n// periodically checking the target function each tick.\n//\n//\ta.Neverf(func() bool { return false; }, time.Second, 10*time.Millisecond, "error message %s", "formatted")\nfunc (a *Assertions) Neverf(condition func() bool, waitFor time.Duration, tick time.Duration, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); 
ok {\n\t\th.Helper()\n\t}\n\treturn Neverf(a.t, condition, waitFor, tick, msg, args...)\n}\n\n// Nil asserts that the specified object is nil.\n//\n//\ta.Nil(err)\nfunc (a *Assertions) Nil(object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Nil(a.t, object, msgAndArgs...)\n}\n\n// Nilf asserts that the specified object is nil.\n//\n//\ta.Nilf(err, \"error message %s\", \"formatted\")\nfunc (a *Assertions) Nilf(object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Nilf(a.t, object, msg, args...)\n}\n\n// NoDirExists checks whether a directory does not exist in the given path.\n// It fails if the path points to an existing _directory_ only.\nfunc (a *Assertions) NoDirExists(path string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NoDirExists(a.t, path, msgAndArgs...)\n}\n\n// NoDirExistsf checks whether a directory does not exist in the given path.\n// It fails if the path points to an existing _directory_ only.\nfunc (a *Assertions) NoDirExistsf(path string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NoDirExistsf(a.t, path, msg, args...)\n}\n\n// NoError asserts that a function returned no error (i.e. `nil`).\n//\n//\t  actualObj, err := SomeFunction()\n//\t  if a.NoError(err) {\n//\t\t   assert.Equal(t, expectedObj, actualObj)\n//\t  }\nfunc (a *Assertions) NoError(err error, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NoError(a.t, err, msgAndArgs...)\n}\n\n// NoErrorf asserts that a function returned no error (i.e. 
`nil`).\n//\n//\t  actualObj, err := SomeFunction()\n//\t  if a.NoErrorf(err, \"error message %s\", \"formatted\") {\n//\t\t   assert.Equal(t, expectedObj, actualObj)\n//\t  }\nfunc (a *Assertions) NoErrorf(err error, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NoErrorf(a.t, err, msg, args...)\n}\n\n// NoFileExists checks whether a file does not exist in a given path. It fails\n// if the path points to an existing _file_ only.\nfunc (a *Assertions) NoFileExists(path string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NoFileExists(a.t, path, msgAndArgs...)\n}\n\n// NoFileExistsf checks whether a file does not exist in a given path. It fails\n// if the path points to an existing _file_ only.\nfunc (a *Assertions) NoFileExistsf(path string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NoFileExistsf(a.t, path, msg, args...)\n}\n\n// NotContains asserts that the specified string, list(array, slice...) or map does NOT contain the\n// specified substring or element.\n//\n//\ta.NotContains(\"Hello World\", \"Earth\")\n//\ta.NotContains([\"Hello\", \"World\"], \"Earth\")\n//\ta.NotContains({\"Hello\": \"World\"}, \"Earth\")\nfunc (a *Assertions) NotContains(s interface{}, contains interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotContains(a.t, s, contains, msgAndArgs...)\n}\n\n// NotContainsf asserts that the specified string, list(array, slice...) 
or map does NOT contain the\n// specified substring or element.\n//\n//\ta.NotContainsf(\"Hello World\", \"Earth\", \"error message %s\", \"formatted\")\n//\ta.NotContainsf([\"Hello\", \"World\"], \"Earth\", \"error message %s\", \"formatted\")\n//\ta.NotContainsf({\"Hello\": \"World\"}, \"Earth\", \"error message %s\", \"formatted\")\nfunc (a *Assertions) NotContainsf(s interface{}, contains interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotContainsf(a.t, s, contains, msg, args...)\n}\n\n// NotEmpty asserts that the specified object is NOT empty.  I.e. not nil, \"\", false, 0 or either\n// a slice or a channel with len == 0.\n//\n//\tif a.NotEmpty(obj) {\n//\t  assert.Equal(t, \"two\", obj[1])\n//\t}\nfunc (a *Assertions) NotEmpty(object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotEmpty(a.t, object, msgAndArgs...)\n}\n\n// NotEmptyf asserts that the specified object is NOT empty.  I.e. 
not nil, \"\", false, 0 or either\n// a slice or a channel with len == 0.\n//\n//\tif a.NotEmptyf(obj, \"error message %s\", \"formatted\") {\n//\t  assert.Equal(t, \"two\", obj[1])\n//\t}\nfunc (a *Assertions) NotEmptyf(object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotEmptyf(a.t, object, msg, args...)\n}\n\n// NotEqual asserts that the specified values are NOT equal.\n//\n//\ta.NotEqual(obj1, obj2)\n//\n// Pointer variable equality is determined based on the equality of the\n// referenced values (as opposed to the memory addresses).\nfunc (a *Assertions) NotEqual(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotEqual(a.t, expected, actual, msgAndArgs...)\n}\n\n// NotEqualValues asserts that two objects are not equal even when converted to the same type\n//\n//\ta.NotEqualValues(obj1, obj2)\nfunc (a *Assertions) NotEqualValues(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotEqualValues(a.t, expected, actual, msgAndArgs...)\n}\n\n// NotEqualValuesf asserts that two objects are not equal even when converted to the same type\n//\n//\ta.NotEqualValuesf(obj1, obj2, \"error message %s\", \"formatted\")\nfunc (a *Assertions) NotEqualValuesf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotEqualValuesf(a.t, expected, actual, msg, args...)\n}\n\n// NotEqualf asserts that the specified values are NOT equal.\n//\n//\ta.NotEqualf(obj1, obj2, \"error message %s\", \"formatted\")\n//\n// Pointer variable equality is determined based on the equality of the\n// referenced values (as opposed to the memory addresses).\nfunc (a *Assertions) NotEqualf(expected interface{}, actual interface{}, msg string, args 
...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotEqualf(a.t, expected, actual, msg, args...)\n}\n\n// NotErrorIs asserts that none of the errors in err's chain matches target.\n// This is a wrapper for errors.Is.\nfunc (a *Assertions) NotErrorIs(err error, target error, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotErrorIs(a.t, err, target, msgAndArgs...)\n}\n\n// NotErrorIsf asserts that none of the errors in err's chain matches target.\n// This is a wrapper for errors.Is.\nfunc (a *Assertions) NotErrorIsf(err error, target error, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotErrorIsf(a.t, err, target, msg, args...)\n}\n\n// NotNil asserts that the specified object is not nil.\n//\n//\ta.NotNil(err)\nfunc (a *Assertions) NotNil(object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotNil(a.t, object, msgAndArgs...)\n}\n\n// NotNilf asserts that the specified object is not nil.\n//\n//\ta.NotNilf(err, "error message %s", "formatted")\nfunc (a *Assertions) NotNilf(object interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotNilf(a.t, object, msg, args...)\n}\n\n// NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic.\n//\n//\ta.NotPanics(func(){ RemainCalm() })\nfunc (a *Assertions) NotPanics(f PanicTestFunc, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotPanics(a.t, f, msgAndArgs...)\n}\n\n// NotPanicsf asserts that the code inside the specified PanicTestFunc does NOT panic.\n//\n//\ta.NotPanicsf(func(){ RemainCalm() }, "error message %s", "formatted")\nfunc (a *Assertions) NotPanicsf(f PanicTestFunc, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok 
{\n\t\th.Helper()\n\t}\n\treturn NotPanicsf(a.t, f, msg, args...)\n}\n\n// NotRegexp asserts that a specified regexp does not match a string.\n//\n//\ta.NotRegexp(regexp.MustCompile(\"starts\"), \"it's starting\")\n//\ta.NotRegexp(\"^start\", \"it's not starting\")\nfunc (a *Assertions) NotRegexp(rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotRegexp(a.t, rx, str, msgAndArgs...)\n}\n\n// NotRegexpf asserts that a specified regexp does not match a string.\n//\n//\ta.NotRegexpf(regexp.MustCompile(\"starts\"), \"it's starting\", \"error message %s\", \"formatted\")\n//\ta.NotRegexpf(\"^start\", \"it's not starting\", \"error message %s\", \"formatted\")\nfunc (a *Assertions) NotRegexpf(rx interface{}, str interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotRegexpf(a.t, rx, str, msg, args...)\n}\n\n// NotSame asserts that two pointers do not reference the same object.\n//\n//\ta.NotSame(ptr1, ptr2)\n//\n// Both arguments must be pointer variables. Pointer variable sameness is\n// determined based on the equality of both type and value.\nfunc (a *Assertions) NotSame(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotSame(a.t, expected, actual, msgAndArgs...)\n}\n\n// NotSamef asserts that two pointers do not reference the same object.\n//\n//\ta.NotSamef(ptr1, ptr2, \"error message %s\", \"formatted\")\n//\n// Both arguments must be pointer variables. 
Pointer variable sameness is\n// determined based on the equality of both type and value.\nfunc (a *Assertions) NotSamef(expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotSamef(a.t, expected, actual, msg, args...)\n}\n\n// NotSubset asserts that the specified list(array, slice...) contains not all\n// elements given in the specified subset(array, slice...).\n//\n//\ta.NotSubset([1, 3, 4], [1, 2], \"But [1, 3, 4] does not contain [1, 2]\")\nfunc (a *Assertions) NotSubset(list interface{}, subset interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotSubset(a.t, list, subset, msgAndArgs...)\n}\n\n// NotSubsetf asserts that the specified list(array, slice...) contains not all\n// elements given in the specified subset(array, slice...).\n//\n//\ta.NotSubsetf([1, 3, 4], [1, 2], \"But [1, 3, 4] does not contain [1, 2]\", \"error message %s\", \"formatted\")\nfunc (a *Assertions) NotSubsetf(list interface{}, subset interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotSubsetf(a.t, list, subset, msg, args...)\n}\n\n// NotZero asserts that i is not the zero value for its type.\nfunc (a *Assertions) NotZero(i interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotZero(a.t, i, msgAndArgs...)\n}\n\n// NotZerof asserts that i is not the zero value for its type.\nfunc (a *Assertions) NotZerof(i interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn NotZerof(a.t, i, msg, args...)\n}\n\n// Panics asserts that the code inside the specified PanicTestFunc panics.\n//\n//\ta.Panics(func(){ GoCrazy() })\nfunc (a *Assertions) Panics(f PanicTestFunc, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok 
{\n\t\th.Helper()\n\t}\n\treturn Panics(a.t, f, msgAndArgs...)\n}\n\n// PanicsWithError asserts that the code inside the specified PanicTestFunc\n// panics, and that the recovered panic value is an error that satisfies the\n// EqualError comparison.\n//\n//\ta.PanicsWithError(\"crazy error\", func(){ GoCrazy() })\nfunc (a *Assertions) PanicsWithError(errString string, f PanicTestFunc, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn PanicsWithError(a.t, errString, f, msgAndArgs...)\n}\n\n// PanicsWithErrorf asserts that the code inside the specified PanicTestFunc\n// panics, and that the recovered panic value is an error that satisfies the\n// EqualError comparison.\n//\n//\ta.PanicsWithErrorf(\"crazy error\", func(){ GoCrazy() }, \"error message %s\", \"formatted\")\nfunc (a *Assertions) PanicsWithErrorf(errString string, f PanicTestFunc, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn PanicsWithErrorf(a.t, errString, f, msg, args...)\n}\n\n// PanicsWithValue asserts that the code inside the specified PanicTestFunc panics, and that\n// the recovered panic value equals the expected panic value.\n//\n//\ta.PanicsWithValue(\"crazy error\", func(){ GoCrazy() })\nfunc (a *Assertions) PanicsWithValue(expected interface{}, f PanicTestFunc, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn PanicsWithValue(a.t, expected, f, msgAndArgs...)\n}\n\n// PanicsWithValuef asserts that the code inside the specified PanicTestFunc panics, and that\n// the recovered panic value equals the expected panic value.\n//\n//\ta.PanicsWithValuef(\"crazy error\", func(){ GoCrazy() }, \"error message %s\", \"formatted\")\nfunc (a *Assertions) PanicsWithValuef(expected interface{}, f PanicTestFunc, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn PanicsWithValuef(a.t, expected, f, 
msg, args...)\n}\n\n// Panicsf asserts that the code inside the specified PanicTestFunc panics.\n//\n//\ta.Panicsf(func(){ GoCrazy() }, \"error message %s\", \"formatted\")\nfunc (a *Assertions) Panicsf(f PanicTestFunc, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Panicsf(a.t, f, msg, args...)\n}\n\n// Positive asserts that the specified element is positive\n//\n//\ta.Positive(1)\n//\ta.Positive(1.23)\nfunc (a *Assertions) Positive(e interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Positive(a.t, e, msgAndArgs...)\n}\n\n// Positivef asserts that the specified element is positive\n//\n//\ta.Positivef(1, \"error message %s\", \"formatted\")\n//\ta.Positivef(1.23, \"error message %s\", \"formatted\")\nfunc (a *Assertions) Positivef(e interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Positivef(a.t, e, msg, args...)\n}\n\n// Regexp asserts that a specified regexp matches a string.\n//\n//\ta.Regexp(regexp.MustCompile(\"start\"), \"it's starting\")\n//\ta.Regexp(\"start...$\", \"it's not starting\")\nfunc (a *Assertions) Regexp(rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Regexp(a.t, rx, str, msgAndArgs...)\n}\n\n// Regexpf asserts that a specified regexp matches a string.\n//\n//\ta.Regexpf(regexp.MustCompile(\"start\"), \"it's starting\", \"error message %s\", \"formatted\")\n//\ta.Regexpf(\"start...$\", \"it's not starting\", \"error message %s\", \"formatted\")\nfunc (a *Assertions) Regexpf(rx interface{}, str interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Regexpf(a.t, rx, str, msg, args...)\n}\n\n// Same asserts that two pointers reference the same object.\n//\n//\ta.Same(ptr1, ptr2)\n//\n// Both arguments must be 
pointer variables. Pointer variable sameness is\n// determined based on the equality of both type and value.\nfunc (a *Assertions) Same(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Same(a.t, expected, actual, msgAndArgs...)\n}\n\n// Samef asserts that two pointers reference the same object.\n//\n//\ta.Samef(ptr1, ptr2, \"error message %s\", \"formatted\")\n//\n// Both arguments must be pointer variables. Pointer variable sameness is\n// determined based on the equality of both type and value.\nfunc (a *Assertions) Samef(expected interface{}, actual interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Samef(a.t, expected, actual, msg, args...)\n}\n\n// Subset asserts that the specified list(array, slice...) contains all\n// elements given in the specified subset(array, slice...).\n//\n//\ta.Subset([1, 2, 3], [1, 2], \"But [1, 2, 3] does contain [1, 2]\")\nfunc (a *Assertions) Subset(list interface{}, subset interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Subset(a.t, list, subset, msgAndArgs...)\n}\n\n// Subsetf asserts that the specified list(array, slice...) 
contains all\n// elements given in the specified subset(array, slice...).\n//\n//\ta.Subsetf([1, 2, 3], [1, 2], \"But [1, 2, 3] does contain [1, 2]\", \"error message %s\", \"formatted\")\nfunc (a *Assertions) Subsetf(list interface{}, subset interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Subsetf(a.t, list, subset, msg, args...)\n}\n\n// True asserts that the specified value is true.\n//\n//\ta.True(myBool)\nfunc (a *Assertions) True(value bool, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn True(a.t, value, msgAndArgs...)\n}\n\n// Truef asserts that the specified value is true.\n//\n//\ta.Truef(myBool, \"error message %s\", \"formatted\")\nfunc (a *Assertions) Truef(value bool, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Truef(a.t, value, msg, args...)\n}\n\n// WithinDuration asserts that the two times are within duration delta of each other.\n//\n//\ta.WithinDuration(time.Now(), time.Now(), 10*time.Second)\nfunc (a *Assertions) WithinDuration(expected time.Time, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn WithinDuration(a.t, expected, actual, delta, msgAndArgs...)\n}\n\n// WithinDurationf asserts that the two times are within duration delta of each other.\n//\n//\ta.WithinDurationf(time.Now(), time.Now(), 10*time.Second, \"error message %s\", \"formatted\")\nfunc (a *Assertions) WithinDurationf(expected time.Time, actual time.Time, delta time.Duration, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn WithinDurationf(a.t, expected, actual, delta, msg, args...)\n}\n\n// WithinRange asserts that a time is within a time range (inclusive).\n//\n//\ta.WithinRange(time.Now(), time.Now().Add(-time.Second), 
time.Now().Add(time.Second))\nfunc (a *Assertions) WithinRange(actual time.Time, start time.Time, end time.Time, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn WithinRange(a.t, actual, start, end, msgAndArgs...)\n}\n\n// WithinRangef asserts that a time is within a time range (inclusive).\n//\n//\ta.WithinRangef(time.Now(), time.Now().Add(-time.Second), time.Now().Add(time.Second), \"error message %s\", \"formatted\")\nfunc (a *Assertions) WithinRangef(actual time.Time, start time.Time, end time.Time, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn WithinRangef(a.t, actual, start, end, msg, args...)\n}\n\n// YAMLEq asserts that two YAML strings are equivalent.\nfunc (a *Assertions) YAMLEq(expected string, actual string, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn YAMLEq(a.t, expected, actual, msgAndArgs...)\n}\n\n// YAMLEqf asserts that two YAML strings are equivalent.\nfunc (a *Assertions) YAMLEqf(expected string, actual string, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn YAMLEqf(a.t, expected, actual, msg, args...)\n}\n\n// Zero asserts that i is the zero value for its type.\nfunc (a *Assertions) Zero(i interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Zero(a.t, i, msgAndArgs...)\n}\n\n// Zerof asserts that i is the zero value for its type.\nfunc (a *Assertions) Zerof(i interface{}, msg string, args ...interface{}) bool {\n\tif h, ok := a.t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Zerof(a.t, i, msg, args...)\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/assertion_forward.go.tmpl",
    "content": "{{.CommentWithoutT \"a\"}}\nfunc (a *Assertions) {{.DocInfo.Name}}({{.Params}}) bool {\n\tif h, ok := a.t.(tHelper); ok { h.Helper() }\n\treturn {{.DocInfo.Name}}(a.t, {{.ForwardedParams}})\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/assertion_order.go",
    "content": "package assert\n\nimport (\n\t\"fmt\"\n\t\"reflect\"\n)\n\n// isOrdered checks that collection contains orderable elements.\nfunc isOrdered(t TestingT, object interface{}, allowedComparesResults []CompareType, failMessage string, msgAndArgs ...interface{}) bool {\n\tobjKind := reflect.TypeOf(object).Kind()\n\tif objKind != reflect.Slice && objKind != reflect.Array {\n\t\treturn false\n\t}\n\n\tobjValue := reflect.ValueOf(object)\n\tobjLen := objValue.Len()\n\n\tif objLen <= 1 {\n\t\treturn true\n\t}\n\n\tvalue := objValue.Index(0)\n\tvalueInterface := value.Interface()\n\tfirstValueKind := value.Kind()\n\n\tfor i := 1; i < objLen; i++ {\n\t\tprevValue := value\n\t\tprevValueInterface := valueInterface\n\n\t\tvalue = objValue.Index(i)\n\t\tvalueInterface = value.Interface()\n\n\t\tcompareResult, isComparable := compare(prevValueInterface, valueInterface, firstValueKind)\n\n\t\tif !isComparable {\n\t\t\treturn Fail(t, fmt.Sprintf(\"Can not compare type \\\"%s\\\" and \\\"%s\\\"\", reflect.TypeOf(value), reflect.TypeOf(prevValue)), msgAndArgs...)\n\t\t}\n\n\t\tif !containsValue(allowedComparesResults, compareResult) {\n\t\t\treturn Fail(t, fmt.Sprintf(failMessage, prevValue, value), msgAndArgs...)\n\t\t}\n\t}\n\n\treturn true\n}\n\n// IsIncreasing asserts that the collection is increasing\n//\n//\tassert.IsIncreasing(t, []int{1, 2, 3})\n//\tassert.IsIncreasing(t, []float{1, 2})\n//\tassert.IsIncreasing(t, []string{\"a\", \"b\"})\nfunc IsIncreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {\n\treturn isOrdered(t, object, []CompareType{compareLess}, \"\\\"%v\\\" is not less than \\\"%v\\\"\", msgAndArgs...)\n}\n\n// IsNonIncreasing asserts that the collection is not increasing\n//\n//\tassert.IsNonIncreasing(t, []int{2, 1, 1})\n//\tassert.IsNonIncreasing(t, []float{2, 1})\n//\tassert.IsNonIncreasing(t, []string{\"b\", \"a\"})\nfunc IsNonIncreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {\n\treturn 
isOrdered(t, object, []CompareType{compareEqual, compareGreater}, \"\\\"%v\\\" is not greater than or equal to \\\"%v\\\"\", msgAndArgs...)\n}\n\n// IsDecreasing asserts that the collection is decreasing\n//\n//\tassert.IsDecreasing(t, []int{2, 1, 0})\n//\tassert.IsDecreasing(t, []float64{2, 1})\n//\tassert.IsDecreasing(t, []string{\"b\", \"a\"})\nfunc IsDecreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {\n\treturn isOrdered(t, object, []CompareType{compareGreater}, \"\\\"%v\\\" is not greater than \\\"%v\\\"\", msgAndArgs...)\n}\n\n// IsNonDecreasing asserts that the collection is not decreasing\n//\n//\tassert.IsNonDecreasing(t, []int{1, 1, 2})\n//\tassert.IsNonDecreasing(t, []float64{1, 2})\n//\tassert.IsNonDecreasing(t, []string{\"a\", \"b\"})\nfunc IsNonDecreasing(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {\n\treturn isOrdered(t, object, []CompareType{compareLess, compareEqual}, \"\\\"%v\\\" is not less than or equal to \\\"%v\\\"\", msgAndArgs...)\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/assertions.go",
    "content": "package assert\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math\"\n\t\"os\"\n\t\"reflect\"\n\t\"regexp\"\n\t\"runtime\"\n\t\"runtime/debug\"\n\t\"strings\"\n\t\"time\"\n\t\"unicode\"\n\t\"unicode/utf8\"\n\n\t\"github.com/davecgh/go-spew/spew\"\n\t\"github.com/pmezard/go-difflib/difflib\"\n\tyaml \"gopkg.in/yaml.v3\"\n)\n\n//go:generate sh -c \"cd ../_codegen && go build && cd - && ../_codegen/_codegen -output-package=assert -template=assertion_format.go.tmpl\"\n\n// TestingT is an interface wrapper around *testing.T\ntype TestingT interface {\n\tErrorf(format string, args ...interface{})\n}\n\n// ComparisonAssertionFunc is a common function prototype when comparing two values.  Can be useful\n// for table driven tests.\ntype ComparisonAssertionFunc func(TestingT, interface{}, interface{}, ...interface{}) bool\n\n// ValueAssertionFunc is a common function prototype when validating a single value.  Can be useful\n// for table driven tests.\ntype ValueAssertionFunc func(TestingT, interface{}, ...interface{}) bool\n\n// BoolAssertionFunc is a common function prototype when validating a bool value.  Can be useful\n// for table driven tests.\ntype BoolAssertionFunc func(TestingT, bool, ...interface{}) bool\n\n// ErrorAssertionFunc is a common function prototype when validating an error value.  
Can be useful\n// for table driven tests.\ntype ErrorAssertionFunc func(TestingT, error, ...interface{}) bool\n\n// Comparison is a custom function that returns true on success and false on failure\ntype Comparison func() (success bool)\n\n/*\n\tHelper functions\n*/\n\n// ObjectsAreEqual determines if two objects are considered equal.\n//\n// This function does no assertion of any kind.\nfunc ObjectsAreEqual(expected, actual interface{}) bool {\n\tif expected == nil || actual == nil {\n\t\treturn expected == actual\n\t}\n\n\texp, ok := expected.([]byte)\n\tif !ok {\n\t\treturn reflect.DeepEqual(expected, actual)\n\t}\n\n\tact, ok := actual.([]byte)\n\tif !ok {\n\t\treturn false\n\t}\n\tif exp == nil || act == nil {\n\t\treturn exp == nil && act == nil\n\t}\n\treturn bytes.Equal(exp, act)\n}\n\n// copyExportedFields iterates downward through nested data structures and creates a copy\n// that only contains the exported struct fields.\nfunc copyExportedFields(expected interface{}) interface{} {\n\tif isNil(expected) {\n\t\treturn expected\n\t}\n\n\texpectedType := reflect.TypeOf(expected)\n\texpectedKind := expectedType.Kind()\n\texpectedValue := reflect.ValueOf(expected)\n\n\tswitch expectedKind {\n\tcase reflect.Struct:\n\t\tresult := reflect.New(expectedType).Elem()\n\t\tfor i := 0; i < expectedType.NumField(); i++ {\n\t\t\tfield := expectedType.Field(i)\n\t\t\tisExported := field.IsExported()\n\t\t\tif isExported {\n\t\t\t\tfieldValue := expectedValue.Field(i)\n\t\t\t\tif isNil(fieldValue) || isNil(fieldValue.Interface()) {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tnewValue := copyExportedFields(fieldValue.Interface())\n\t\t\t\tresult.Field(i).Set(reflect.ValueOf(newValue))\n\t\t\t}\n\t\t}\n\t\treturn result.Interface()\n\n\tcase reflect.Ptr:\n\t\tresult := reflect.New(expectedType.Elem())\n\t\tunexportedRemoved := copyExportedFields(expectedValue.Elem().Interface())\n\t\tresult.Elem().Set(reflect.ValueOf(unexportedRemoved))\n\t\treturn result.Interface()\n\n\tcase 
reflect.Array, reflect.Slice:\n\t\tresult := reflect.MakeSlice(expectedType, expectedValue.Len(), expectedValue.Len())\n\t\tfor i := 0; i < expectedValue.Len(); i++ {\n\t\t\tindex := expectedValue.Index(i)\n\t\t\tif isNil(index) {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tunexportedRemoved := copyExportedFields(index.Interface())\n\t\t\tresult.Index(i).Set(reflect.ValueOf(unexportedRemoved))\n\t\t}\n\t\treturn result.Interface()\n\n\tcase reflect.Map:\n\t\tresult := reflect.MakeMap(expectedType)\n\t\tfor _, k := range expectedValue.MapKeys() {\n\t\t\tindex := expectedValue.MapIndex(k)\n\t\t\tunexportedRemoved := copyExportedFields(index.Interface())\n\t\t\tresult.SetMapIndex(k, reflect.ValueOf(unexportedRemoved))\n\t\t}\n\t\treturn result.Interface()\n\n\tdefault:\n\t\treturn expected\n\t}\n}\n\n// ObjectsExportedFieldsAreEqual determines if the exported (public) fields of two objects are\n// considered equal. This comparison of only exported fields is applied recursively to nested data\n// structures.\n//\n// This function does no assertion of any kind.\nfunc ObjectsExportedFieldsAreEqual(expected, actual interface{}) bool {\n\texpectedCleaned := copyExportedFields(expected)\n\tactualCleaned := copyExportedFields(actual)\n\treturn ObjectsAreEqualValues(expectedCleaned, actualCleaned)\n}\n\n// ObjectsAreEqualValues gets whether two objects are equal, or if their\n// values are equal.\nfunc ObjectsAreEqualValues(expected, actual interface{}) bool {\n\tif ObjectsAreEqual(expected, actual) {\n\t\treturn true\n\t}\n\n\tactualType := reflect.TypeOf(actual)\n\tif actualType == nil {\n\t\treturn false\n\t}\n\texpectedValue := reflect.ValueOf(expected)\n\tif expectedValue.IsValid() && expectedValue.Type().ConvertibleTo(actualType) {\n\t\t// Attempt comparison after type conversion\n\t\treturn reflect.DeepEqual(expectedValue.Convert(actualType).Interface(), actual)\n\t}\n\n\treturn false\n}\n\n/* CallerInfo is necessary because the assert functions use the testing 
object\ninternally, causing it to print the file:line of the assert method, rather than where\nthe problem actually occurred in calling code.*/\n\n// CallerInfo returns an array of strings containing the file and line number\n// of each stack frame leading from the current test to the assert call that\n// failed.\nfunc CallerInfo() []string {\n\n\tvar pc uintptr\n\tvar ok bool\n\tvar file string\n\tvar line int\n\tvar name string\n\n\tcallers := []string{}\n\tfor i := 0; ; i++ {\n\t\tpc, file, line, ok = runtime.Caller(i)\n\t\tif !ok {\n\t\t\t// The breaks below failed to terminate the loop, and we ran off the\n\t\t\t// end of the call stack.\n\t\t\tbreak\n\t\t}\n\n\t\t// This is a huge edge case, but it will panic if this is the case, see #180\n\t\tif file == \"<autogenerated>\" {\n\t\t\tbreak\n\t\t}\n\n\t\tf := runtime.FuncForPC(pc)\n\t\tif f == nil {\n\t\t\tbreak\n\t\t}\n\t\tname = f.Name()\n\n\t\t// testing.tRunner is the standard library function that calls\n\t\t// tests. Subtests are called directly by tRunner, without going through\n\t\t// the Test/Benchmark/Example function that contains the t.Run calls, so\n\t\t// with subtests we should break when we hit tRunner, without adding it\n\t\t// to the list of callers.\n\t\tif name == \"testing.tRunner\" {\n\t\t\tbreak\n\t\t}\n\n\t\tparts := strings.Split(file, \"/\")\n\t\tif len(parts) > 1 {\n\t\t\tfilename := parts[len(parts)-1]\n\t\t\tdir := parts[len(parts)-2]\n\t\t\tif (dir != \"assert\" && dir != \"mock\" && dir != \"require\") || filename == \"mock_test.go\" {\n\t\t\t\tcallers = append(callers, fmt.Sprintf(\"%s:%d\", file, line))\n\t\t\t}\n\t\t}\n\n\t\t// Drop the package\n\t\tsegments := strings.Split(name, \".\")\n\t\tname = segments[len(segments)-1]\n\t\tif isTest(name, \"Test\") ||\n\t\t\tisTest(name, \"Benchmark\") ||\n\t\t\tisTest(name, \"Example\") {\n\t\t\tbreak\n\t\t}\n\t}\n\n\treturn callers\n}\n\n// Stolen from the `go test` tool.\n// isTest tells whether name looks like a test (or benchmark, 
according to prefix).\n// It is a Test (say) if there is a character after Test that is not a lower-case letter.\n// We don't want TesticularCancer.\nfunc isTest(name, prefix string) bool {\n\tif !strings.HasPrefix(name, prefix) {\n\t\treturn false\n\t}\n\tif len(name) == len(prefix) { // \"Test\" is ok\n\t\treturn true\n\t}\n\tr, _ := utf8.DecodeRuneInString(name[len(prefix):])\n\treturn !unicode.IsLower(r)\n}\n\nfunc messageFromMsgAndArgs(msgAndArgs ...interface{}) string {\n\tif len(msgAndArgs) == 0 || msgAndArgs == nil {\n\t\treturn \"\"\n\t}\n\tif len(msgAndArgs) == 1 {\n\t\tmsg := msgAndArgs[0]\n\t\tif msgAsStr, ok := msg.(string); ok {\n\t\t\treturn msgAsStr\n\t\t}\n\t\treturn fmt.Sprintf(\"%+v\", msg)\n\t}\n\tif len(msgAndArgs) > 1 {\n\t\treturn fmt.Sprintf(msgAndArgs[0].(string), msgAndArgs[1:]...)\n\t}\n\treturn \"\"\n}\n\n// Aligns the provided message so that all lines after the first line start at the same location as the first line.\n// Assumes that the first line starts at the correct location (after carriage return, tab, label, spacer and tab).\n// The longestLabelLen parameter specifies the length of the longest label in the output (required because this is the\n// basis on which the alignment occurs).\nfunc indentMessageLines(message string, longestLabelLen int) string {\n\toutBuf := new(bytes.Buffer)\n\n\tfor i, scanner := 0, bufio.NewScanner(strings.NewReader(message)); scanner.Scan(); i++ {\n\t\t// no need to align first line because it starts at the correct location (after the label)\n\t\tif i != 0 {\n\t\t\t// append alignLen+1 spaces to align with \"{{longestLabel}}:\" before adding tab\n\t\t\toutBuf.WriteString(\"\\n\\t\" + strings.Repeat(\" \", longestLabelLen+1) + \"\\t\")\n\t\t}\n\t\toutBuf.WriteString(scanner.Text())\n\t}\n\n\treturn outBuf.String()\n}\n\ntype failNower interface {\n\tFailNow()\n}\n\n// FailNow fails test\nfunc FailNow(t TestingT, failureMessage string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok 
{\n\t\th.Helper()\n\t}\n\tFail(t, failureMessage, msgAndArgs...)\n\n\t// We cannot extend TestingT with FailNow() and\n\t// maintain backwards compatibility, so we fallback\n\t// to panicking when FailNow is not available in\n\t// TestingT.\n\t// See issue #263\n\n\tif t, ok := t.(failNower); ok {\n\t\tt.FailNow()\n\t} else {\n\t\tpanic(\"test failed and t is missing `FailNow()`\")\n\t}\n\treturn false\n}\n\n// Fail reports a failure through\nfunc Fail(t TestingT, failureMessage string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tcontent := []labeledContent{\n\t\t{\"Error Trace\", strings.Join(CallerInfo(), \"\\n\\t\\t\\t\")},\n\t\t{\"Error\", failureMessage},\n\t}\n\n\t// Add test name if the Go version supports it\n\tif n, ok := t.(interface {\n\t\tName() string\n\t}); ok {\n\t\tcontent = append(content, labeledContent{\"Test\", n.Name()})\n\t}\n\n\tmessage := messageFromMsgAndArgs(msgAndArgs...)\n\tif len(message) > 0 {\n\t\tcontent = append(content, labeledContent{\"Messages\", message})\n\t}\n\n\tt.Errorf(\"\\n%s\", \"\"+labeledOutput(content...))\n\n\treturn false\n}\n\ntype labeledContent struct {\n\tlabel   string\n\tcontent string\n}\n\n// labeledOutput returns a string consisting of the provided labeledContent. Each labeled output is appended in the following manner:\n//\n//\t\\t{{label}}:{{align_spaces}}\\t{{content}}\\n\n//\n// The initial carriage return is required to undo/erase any padding added by testing.T.Errorf. The \"\\t{{label}}:\" is for the label.\n// If a label is shorter than the longest label provided, padding spaces are added to make all the labels match in length. 
Once this\n// alignment is achieved, \"\\t{{content}}\\n\" is added for the output.\n//\n// If the content of the labeledOutput contains line breaks, the subsequent lines are aligned so that they start at the same location as the first line.\nfunc labeledOutput(content ...labeledContent) string {\n\tlongestLabel := 0\n\tfor _, v := range content {\n\t\tif len(v.label) > longestLabel {\n\t\t\tlongestLabel = len(v.label)\n\t\t}\n\t}\n\tvar output string\n\tfor _, v := range content {\n\t\toutput += \"\\t\" + v.label + \":\" + strings.Repeat(\" \", longestLabel-len(v.label)) + \"\\t\" + indentMessageLines(v.content, longestLabel) + \"\\n\"\n\t}\n\treturn output\n}\n\n// Implements asserts that an object implements the specified interface.\n//\n//\tassert.Implements(t, (*MyInterface)(nil), new(MyObject))\nfunc Implements(t TestingT, interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tinterfaceType := reflect.TypeOf(interfaceObject).Elem()\n\n\tif object == nil {\n\t\treturn Fail(t, fmt.Sprintf(\"Cannot check if nil implements %v\", interfaceType), msgAndArgs...)\n\t}\n\tif !reflect.TypeOf(object).Implements(interfaceType) {\n\t\treturn Fail(t, fmt.Sprintf(\"%T must implement %v\", object, interfaceType), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// IsType asserts that the specified objects are of the same type.\nfunc IsType(t TestingT, expectedType interface{}, object interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tif !ObjectsAreEqual(reflect.TypeOf(object), reflect.TypeOf(expectedType)) {\n\t\treturn Fail(t, fmt.Sprintf(\"Object expected to be of type %v, but was %v\", reflect.TypeOf(expectedType), reflect.TypeOf(object)), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// Equal asserts that two objects are equal.\n//\n//\tassert.Equal(t, 123, 123)\n//\n// Pointer variable equality is determined based on the equality of 
the\n// referenced values (as opposed to the memory addresses). Function equality\n// cannot be determined and will always fail.\nfunc Equal(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif err := validateEqualArgs(expected, actual); err != nil {\n\t\treturn Fail(t, fmt.Sprintf(\"Invalid operation: %#v == %#v (%s)\",\n\t\t\texpected, actual, err), msgAndArgs...)\n\t}\n\n\tif !ObjectsAreEqual(expected, actual) {\n\t\tdiff := diff(expected, actual)\n\t\texpected, actual = formatUnequalValues(expected, actual)\n\t\treturn Fail(t, fmt.Sprintf(\"Not equal: \\n\"+\n\t\t\t\"expected: %s\\n\"+\n\t\t\t\"actual  : %s%s\", expected, actual, diff), msgAndArgs...)\n\t}\n\n\treturn true\n\n}\n\n// validateEqualArgs checks whether provided arguments can be safely used in the\n// Equal/NotEqual functions.\nfunc validateEqualArgs(expected, actual interface{}) error {\n\tif expected == nil && actual == nil {\n\t\treturn nil\n\t}\n\n\tif isFunction(expected) || isFunction(actual) {\n\t\treturn errors.New(\"cannot take func type as argument\")\n\t}\n\treturn nil\n}\n\n// Same asserts that two pointers reference the same object.\n//\n//\tassert.Same(t, ptr1, ptr2)\n//\n// Both arguments must be pointer variables. Pointer variable sameness is\n// determined based on the equality of both type and value.\nfunc Same(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tif !samePointers(expected, actual) {\n\t\treturn Fail(t, fmt.Sprintf(\"Not same: \\n\"+\n\t\t\t\"expected: %p %#v\\n\"+\n\t\t\t\"actual  : %p %#v\", expected, expected, actual, actual), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// NotSame asserts that two pointers do not reference the same object.\n//\n//\tassert.NotSame(t, ptr1, ptr2)\n//\n// Both arguments must be pointer variables. 
Pointer variable sameness is\n// determined based on the equality of both type and value.\nfunc NotSame(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tif samePointers(expected, actual) {\n\t\treturn Fail(t, fmt.Sprintf(\n\t\t\t\"Expected and actual point to the same object: %p %#v\",\n\t\t\texpected, expected), msgAndArgs...)\n\t}\n\treturn true\n}\n\n// samePointers compares two generic interface objects and returns whether\n// they point to the same object\nfunc samePointers(first, second interface{}) bool {\n\tfirstPtr, secondPtr := reflect.ValueOf(first), reflect.ValueOf(second)\n\tif firstPtr.Kind() != reflect.Ptr || secondPtr.Kind() != reflect.Ptr {\n\t\treturn false\n\t}\n\n\tfirstType, secondType := reflect.TypeOf(first), reflect.TypeOf(second)\n\tif firstType != secondType {\n\t\treturn false\n\t}\n\n\t// compare pointer addresses\n\treturn first == second\n}\n\n// formatUnequalValues takes two values of arbitrary types and returns string\n// representations appropriate to be presented to the user.\n//\n// If the values are not of like type, the returned strings will be prefixed\n// with the type name, and the value will be enclosed in parenthesis similar\n// to a type conversion in the Go grammar.\nfunc formatUnequalValues(expected, actual interface{}) (e string, a string) {\n\tif reflect.TypeOf(expected) != reflect.TypeOf(actual) {\n\t\treturn fmt.Sprintf(\"%T(%s)\", expected, truncatingFormat(expected)),\n\t\t\tfmt.Sprintf(\"%T(%s)\", actual, truncatingFormat(actual))\n\t}\n\tswitch expected.(type) {\n\tcase time.Duration:\n\t\treturn fmt.Sprintf(\"%v\", expected), fmt.Sprintf(\"%v\", actual)\n\t}\n\treturn truncatingFormat(expected), truncatingFormat(actual)\n}\n\n// truncatingFormat formats the data and truncates it if it's too long.\n//\n// This helps keep formatted error messages lines from exceeding the\n// bufio.MaxScanTokenSize max line length that the go 
testing framework imposes.\nfunc truncatingFormat(data interface{}) string {\n\tvalue := fmt.Sprintf(\"%#v\", data)\n\tmax := bufio.MaxScanTokenSize - 100 // Give us some space for the type info too if needed.\n\tif len(value) > max {\n\t\tvalue = value[0:max] + \"<... truncated>\"\n\t}\n\treturn value\n}\n\n// EqualValues asserts that two objects are equal or convertible to the same types\n// and equal.\n//\n//\tassert.EqualValues(t, uint32(123), int32(123))\nfunc EqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tif !ObjectsAreEqualValues(expected, actual) {\n\t\tdiff := diff(expected, actual)\n\t\texpected, actual = formatUnequalValues(expected, actual)\n\t\treturn Fail(t, fmt.Sprintf(\"Not equal: \\n\"+\n\t\t\t\"expected: %s\\n\"+\n\t\t\t\"actual  : %s%s\", expected, actual, diff), msgAndArgs...)\n\t}\n\n\treturn true\n\n}\n\n// EqualExportedValues asserts that the types of two objects are equal and their public\n// fields are also equal. 
This is useful for comparing structs that have private fields\n// that could potentially differ.\n//\n//\t type S struct {\n//\t\tExported     \tint\n//\t\tnotExported   \tint\n//\t }\n//\t assert.EqualExportedValues(t, S{1, 2}, S{1, 3}) => true\n//\t assert.EqualExportedValues(t, S{1, 2}, S{2, 3}) => false\nfunc EqualExportedValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\taType := reflect.TypeOf(expected)\n\tbType := reflect.TypeOf(actual)\n\n\tif aType != bType {\n\t\treturn Fail(t, fmt.Sprintf(\"Types expected to match exactly\\n\\t%v != %v\", aType, bType), msgAndArgs...)\n\t}\n\n\tif aType.Kind() != reflect.Struct {\n\t\treturn Fail(t, fmt.Sprintf(\"Types expected to both be struct \\n\\t%v != %v\", aType.Kind(), reflect.Struct), msgAndArgs...)\n\t}\n\n\tif bType.Kind() != reflect.Struct {\n\t\treturn Fail(t, fmt.Sprintf(\"Types expected to both be struct \\n\\t%v != %v\", bType.Kind(), reflect.Struct), msgAndArgs...)\n\t}\n\n\texpected = copyExportedFields(expected)\n\tactual = copyExportedFields(actual)\n\n\tif !ObjectsAreEqualValues(expected, actual) {\n\t\tdiff := diff(expected, actual)\n\t\texpected, actual = formatUnequalValues(expected, actual)\n\t\treturn Fail(t, fmt.Sprintf(\"Not equal (comparing only exported fields): \\n\"+\n\t\t\t\"expected: %s\\n\"+\n\t\t\t\"actual  : %s%s\", expected, actual, diff), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// Exactly asserts that two objects are equal in value and type.\n//\n//\tassert.Exactly(t, int32(123), int64(123))\nfunc Exactly(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\taType := reflect.TypeOf(expected)\n\tbType := reflect.TypeOf(actual)\n\n\tif aType != bType {\n\t\treturn Fail(t, fmt.Sprintf(\"Types expected to match exactly\\n\\t%v != %v\", aType, bType), msgAndArgs...)\n\t}\n\n\treturn Equal(t, expected, actual, 
msgAndArgs...)\n\n}\n\n// NotNil asserts that the specified object is not nil.\n//\n//\tassert.NotNil(t, err)\nfunc NotNil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {\n\tif !isNil(object) {\n\t\treturn true\n\t}\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Fail(t, \"Expected value not to be nil.\", msgAndArgs...)\n}\n\n// containsKind checks if a specified kind in the slice of kinds.\nfunc containsKind(kinds []reflect.Kind, kind reflect.Kind) bool {\n\tfor i := 0; i < len(kinds); i++ {\n\t\tif kind == kinds[i] {\n\t\t\treturn true\n\t\t}\n\t}\n\n\treturn false\n}\n\n// isNil checks if a specified object is nil or not, without Failing.\nfunc isNil(object interface{}) bool {\n\tif object == nil {\n\t\treturn true\n\t}\n\n\tvalue := reflect.ValueOf(object)\n\tkind := value.Kind()\n\tisNilableKind := containsKind(\n\t\t[]reflect.Kind{\n\t\t\treflect.Chan, reflect.Func,\n\t\t\treflect.Interface, reflect.Map,\n\t\t\treflect.Ptr, reflect.Slice, reflect.UnsafePointer},\n\t\tkind)\n\n\tif isNilableKind && value.IsNil() {\n\t\treturn true\n\t}\n\n\treturn false\n}\n\n// Nil asserts that the specified object is nil.\n//\n//\tassert.Nil(t, err)\nfunc Nil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {\n\tif isNil(object) {\n\t\treturn true\n\t}\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\treturn Fail(t, fmt.Sprintf(\"Expected nil, but got: %#v\", object), msgAndArgs...)\n}\n\n// isEmpty gets whether the specified object is considered empty or not.\nfunc isEmpty(object interface{}) bool {\n\n\t// get nil case out of the way\n\tif object == nil {\n\t\treturn true\n\t}\n\n\tobjValue := reflect.ValueOf(object)\n\n\tswitch objValue.Kind() {\n\t// collection types are empty when they have no element\n\tcase reflect.Chan, reflect.Map, reflect.Slice:\n\t\treturn objValue.Len() == 0\n\t// pointers are empty if nil or if the value they point to is empty\n\tcase reflect.Ptr:\n\t\tif objValue.IsNil() {\n\t\t\treturn 
true\n\t\t}\n\t\tderef := objValue.Elem().Interface()\n\t\treturn isEmpty(deref)\n\t// for all other types, compare against the zero value\n\t// array types are empty when they match their zero-initialized state\n\tdefault:\n\t\tzero := reflect.Zero(objValue.Type())\n\t\treturn reflect.DeepEqual(object, zero.Interface())\n\t}\n}\n\n// Empty asserts that the specified object is empty, i.e. nil, \"\", false, 0, or a\n// slice, map, or channel with len == 0.\n//\n//\tassert.Empty(t, obj)\nfunc Empty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {\n\tpass := isEmpty(object)\n\tif !pass {\n\t\tif h, ok := t.(tHelper); ok {\n\t\t\th.Helper()\n\t\t}\n\t\tFail(t, fmt.Sprintf(\"Should be empty, but was %v\", object), msgAndArgs...)\n\t}\n\n\treturn pass\n\n}\n\n// NotEmpty asserts that the specified object is NOT empty, i.e. not nil, \"\", false, 0, or a\n// slice, map, or channel with len == 0.\n//\n//\tif assert.NotEmpty(t, obj) {\n//\t  assert.Equal(t, \"two\", obj[1])\n//\t}\nfunc NotEmpty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {\n\tpass := !isEmpty(object)\n\tif !pass {\n\t\tif h, ok := t.(tHelper); ok {\n\t\t\th.Helper()\n\t\t}\n\t\tFail(t, fmt.Sprintf(\"Should NOT be empty, but was %v\", object), msgAndArgs...)\n\t}\n\n\treturn pass\n\n}\n\n// getLen tries to get the length of an object.\n// It returns (false, 0) if that is impossible.\nfunc getLen(x interface{}) (ok bool, length int) {\n\tv := reflect.ValueOf(x)\n\tdefer func() {\n\t\tif e := recover(); e != nil {\n\t\t\tok = false\n\t\t}\n\t}()\n\treturn true, v.Len()\n}\n\n// Len asserts that the specified object has the specified length.\n// Len also fails if the object has a type that the builtin len() does not accept.\n//\n//\tassert.Len(t, mySlice, 3)\nfunc Len(t TestingT, object interface{}, length int, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tok, l := getLen(object)\n\tif !ok {\n\t\treturn Fail(t, fmt.Sprintf(\"\\\"%s\\\" could not be applied builtin 
len()\", object), msgAndArgs...)\n\t}\n\n\tif l != length {\n\t\treturn Fail(t, fmt.Sprintf(\"\\\"%s\\\" should have %d item(s), but has %d\", object, length, l), msgAndArgs...)\n\t}\n\treturn true\n}\n\n// True asserts that the specified value is true.\n//\n//\tassert.True(t, myBool)\nfunc True(t TestingT, value bool, msgAndArgs ...interface{}) bool {\n\tif !value {\n\t\tif h, ok := t.(tHelper); ok {\n\t\t\th.Helper()\n\t\t}\n\t\treturn Fail(t, \"Should be true\", msgAndArgs...)\n\t}\n\n\treturn true\n\n}\n\n// False asserts that the specified value is false.\n//\n//\tassert.False(t, myBool)\nfunc False(t TestingT, value bool, msgAndArgs ...interface{}) bool {\n\tif value {\n\t\tif h, ok := t.(tHelper); ok {\n\t\t\th.Helper()\n\t\t}\n\t\treturn Fail(t, \"Should be false\", msgAndArgs...)\n\t}\n\n\treturn true\n\n}\n\n// NotEqual asserts that the specified values are NOT equal.\n//\n//\tassert.NotEqual(t, obj1, obj2)\n//\n// Pointer variable equality is determined based on the equality of the\n// referenced values (as opposed to the memory addresses).\nfunc NotEqual(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif err := validateEqualArgs(expected, actual); err != nil {\n\t\treturn Fail(t, fmt.Sprintf(\"Invalid operation: %#v != %#v (%s)\",\n\t\t\texpected, actual, err), msgAndArgs...)\n\t}\n\n\tif ObjectsAreEqual(expected, actual) {\n\t\treturn Fail(t, fmt.Sprintf(\"Should not be: %#v\\n\", actual), msgAndArgs...)\n\t}\n\n\treturn true\n\n}\n\n// NotEqualValues asserts that two objects are not equal even when converted to the same type\n//\n//\tassert.NotEqualValues(t, obj1, obj2)\nfunc NotEqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tif ObjectsAreEqualValues(expected, actual) {\n\t\treturn Fail(t, fmt.Sprintf(\"Should not be: %#v\\n\", actual), msgAndArgs...)\n\t}\n\n\treturn 
true\n}\n\n// containsElement tries to loop over the list to check if it includes the element.\n// It returns (false, false) if looping over the list is impossible,\n// (true, false) if the element was not found, and\n// (true, true) if the element was found.\nfunc containsElement(list interface{}, element interface{}) (ok, found bool) {\n\n\tlistValue := reflect.ValueOf(list)\n\tlistType := reflect.TypeOf(list)\n\tif listType == nil {\n\t\treturn false, false\n\t}\n\tlistKind := listType.Kind()\n\tdefer func() {\n\t\tif e := recover(); e != nil {\n\t\t\tok = false\n\t\t\tfound = false\n\t\t}\n\t}()\n\n\tif listKind == reflect.String {\n\t\telementValue := reflect.ValueOf(element)\n\t\treturn true, strings.Contains(listValue.String(), elementValue.String())\n\t}\n\n\tif listKind == reflect.Map {\n\t\tmapKeys := listValue.MapKeys()\n\t\tfor i := 0; i < len(mapKeys); i++ {\n\t\t\tif ObjectsAreEqual(mapKeys[i].Interface(), element) {\n\t\t\t\treturn true, true\n\t\t\t}\n\t\t}\n\t\treturn true, false\n\t}\n\n\tfor i := 0; i < listValue.Len(); i++ {\n\t\tif ObjectsAreEqual(listValue.Index(i).Interface(), element) {\n\t\t\treturn true, true\n\t\t}\n\t}\n\treturn true, false\n\n}\n\n// Contains asserts that the specified string, list(array, slice...) or map contains the\n// specified substring or element.\n//\n//\tassert.Contains(t, \"Hello World\", \"World\")\n//\tassert.Contains(t, [\"Hello\", \"World\"], \"World\")\n//\tassert.Contains(t, {\"Hello\": \"World\"}, \"Hello\")\nfunc Contains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tok, found := containsElement(s, contains)\n\tif !ok {\n\t\treturn Fail(t, fmt.Sprintf(\"%#v could not be applied builtin len()\", s), msgAndArgs...)\n\t}\n\tif !found {\n\t\treturn Fail(t, fmt.Sprintf(\"%#v does not contain %#v\", s, contains), msgAndArgs...)\n\t}\n\n\treturn true\n\n}\n\n// NotContains asserts that the specified string, list(array, slice...) 
or map does NOT contain the\n// specified substring or element.\n//\n//\tassert.NotContains(t, \"Hello World\", \"Earth\")\n//\tassert.NotContains(t, [\"Hello\", \"World\"], \"Earth\")\n//\tassert.NotContains(t, {\"Hello\": \"World\"}, \"Earth\")\nfunc NotContains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tok, found := containsElement(s, contains)\n\tif !ok {\n\t\treturn Fail(t, fmt.Sprintf(\"%#v could not be applied builtin len()\", s), msgAndArgs...)\n\t}\n\tif found {\n\t\treturn Fail(t, fmt.Sprintf(\"%#v should not contain %#v\", s, contains), msgAndArgs...)\n\t}\n\n\treturn true\n\n}\n\n// Subset asserts that the specified list(array, slice...) contains all\n// elements given in the specified subset(array, slice...).\n//\n//\tassert.Subset(t, [1, 2, 3], [1, 2], \"But [1, 2, 3] does contain [1, 2]\")\nfunc Subset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok bool) {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif subset == nil {\n\t\treturn true // we consider nil to be equal to the nil set\n\t}\n\n\tlistKind := reflect.TypeOf(list).Kind()\n\tif listKind != reflect.Array && listKind != reflect.Slice && listKind != reflect.Map {\n\t\treturn Fail(t, fmt.Sprintf(\"%q has an unsupported type %s\", list, listKind), msgAndArgs...)\n\t}\n\n\t// the subset must be an array or a slice, or a map when the list is also a map\n\tsubsetKind := reflect.TypeOf(subset).Kind()\n\tif subsetKind != reflect.Array && subsetKind != reflect.Slice && (subsetKind != reflect.Map || listKind != reflect.Map) {\n\t\treturn Fail(t, fmt.Sprintf(\"%q has an unsupported type %s\", subset, subsetKind), msgAndArgs...)\n\t}\n\n\tif subsetKind == reflect.Map && listKind == reflect.Map {\n\t\tsubsetMap := reflect.ValueOf(subset)\n\t\tactualMap := reflect.ValueOf(list)\n\n\t\tfor _, k := range subsetMap.MapKeys() {\n\t\t\tev := subsetMap.MapIndex(k)\n\t\t\tav := actualMap.MapIndex(k)\n\n\t\t\tif !av.IsValid() {\n\t\t\t\treturn Fail(t, fmt.Sprintf(\"%#v does not contain %#v\", list, subset), 
msgAndArgs...)\n\t\t\t}\n\t\t\tif !ObjectsAreEqual(ev.Interface(), av.Interface()) {\n\t\t\t\treturn Fail(t, fmt.Sprintf(\"%#v does not contain %#v\", list, subset), msgAndArgs...)\n\t\t\t}\n\t\t}\n\n\t\treturn true\n\t}\n\n\tsubsetList := reflect.ValueOf(subset)\n\tfor i := 0; i < subsetList.Len(); i++ {\n\t\telement := subsetList.Index(i).Interface()\n\t\tok, found := containsElement(list, element)\n\t\tif !ok {\n\t\t\treturn Fail(t, fmt.Sprintf(\"%#v could not be applied builtin len()\", list), msgAndArgs...)\n\t\t}\n\t\tif !found {\n\t\t\treturn Fail(t, fmt.Sprintf(\"%#v does not contain %#v\", list, element), msgAndArgs...)\n\t\t}\n\t}\n\n\treturn true\n}\n\n// NotSubset asserts that the specified list(array, slice...) does not contain all\n// elements given in the specified subset(array, slice...).\n//\n//\tassert.NotSubset(t, [1, 3, 4], [1, 2], \"But [1, 3, 4] does not contain [1, 2]\")\nfunc NotSubset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok bool) {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif subset == nil {\n\t\treturn Fail(t, \"nil is the empty set which is a subset of every set\", msgAndArgs...)\n\t}\n\n\tlistKind := reflect.TypeOf(list).Kind()\n\tif listKind != reflect.Array && listKind != reflect.Slice && listKind != reflect.Map {\n\t\treturn Fail(t, fmt.Sprintf(\"%q has an unsupported type %s\", list, listKind), msgAndArgs...)\n\t}\n\n\t// the subset must be an array or a slice, or a map when the list is also a map\n\tsubsetKind := reflect.TypeOf(subset).Kind()\n\tif subsetKind != reflect.Array && subsetKind != reflect.Slice && (subsetKind != reflect.Map || listKind != reflect.Map) {\n\t\treturn Fail(t, fmt.Sprintf(\"%q has an unsupported type %s\", subset, subsetKind), msgAndArgs...)\n\t}\n\n\tif subsetKind == reflect.Map && listKind == reflect.Map {\n\t\tsubsetMap := reflect.ValueOf(subset)\n\t\tactualMap := reflect.ValueOf(list)\n\n\t\tfor _, k := range subsetMap.MapKeys() {\n\t\t\tev := subsetMap.MapIndex(k)\n\t\t\tav := actualMap.MapIndex(k)\n\n\t\t\tif !av.IsValid() {\n\t\t\t\treturn true\n\t\t\t}\n\t\t\tif 
!ObjectsAreEqual(ev.Interface(), av.Interface()) {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\n\t\treturn Fail(t, fmt.Sprintf(\"%q is a subset of %q\", subset, list), msgAndArgs...)\n\t}\n\n\tsubsetList := reflect.ValueOf(subset)\n\tfor i := 0; i < subsetList.Len(); i++ {\n\t\telement := subsetList.Index(i).Interface()\n\t\tok, found := containsElement(list, element)\n\t\tif !ok {\n\t\t\treturn Fail(t, fmt.Sprintf(\"\\\"%s\\\" could not be applied builtin len()\", list), msgAndArgs...)\n\t\t}\n\t\tif !found {\n\t\t\treturn true\n\t\t}\n\t}\n\n\treturn Fail(t, fmt.Sprintf(\"%q is a subset of %q\", subset, list), msgAndArgs...)\n}\n\n// ElementsMatch asserts that the specified listA(array, slice...) is equal to specified\n// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements,\n// the number of appearances of each of them in both lists should match.\n//\n// assert.ElementsMatch(t, [1, 3, 2, 3], [1, 3, 3, 2])\nfunc ElementsMatch(t TestingT, listA, listB interface{}, msgAndArgs ...interface{}) (ok bool) {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif isEmpty(listA) && isEmpty(listB) {\n\t\treturn true\n\t}\n\n\tif !isList(t, listA, msgAndArgs...) || !isList(t, listB, msgAndArgs...) 
{\n\t\treturn false\n\t}\n\n\textraA, extraB := diffLists(listA, listB)\n\n\tif len(extraA) == 0 && len(extraB) == 0 {\n\t\treturn true\n\t}\n\n\treturn Fail(t, formatListDiff(listA, listB, extraA, extraB), msgAndArgs...)\n}\n\n// isList checks that the provided value is array or slice.\nfunc isList(t TestingT, list interface{}, msgAndArgs ...interface{}) (ok bool) {\n\tkind := reflect.TypeOf(list).Kind()\n\tif kind != reflect.Array && kind != reflect.Slice {\n\t\treturn Fail(t, fmt.Sprintf(\"%q has an unsupported type %s, expecting array or slice\", list, kind),\n\t\t\tmsgAndArgs...)\n\t}\n\treturn true\n}\n\n// diffLists diffs two arrays/slices and returns slices of elements that are only in A and only in B.\n// If some element is present multiple times, each instance is counted separately (e.g. if something is 2x in A and\n// 5x in B, it will be 0x in extraA and 3x in extraB). The order of items in both lists is ignored.\nfunc diffLists(listA, listB interface{}) (extraA, extraB []interface{}) {\n\taValue := reflect.ValueOf(listA)\n\tbValue := reflect.ValueOf(listB)\n\n\taLen := aValue.Len()\n\tbLen := bValue.Len()\n\n\t// Mark indexes in bValue that we already used\n\tvisited := make([]bool, bLen)\n\tfor i := 0; i < aLen; i++ {\n\t\telement := aValue.Index(i).Interface()\n\t\tfound := false\n\t\tfor j := 0; j < bLen; j++ {\n\t\t\tif visited[j] {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif ObjectsAreEqual(bValue.Index(j).Interface(), element) {\n\t\t\t\tvisited[j] = true\n\t\t\t\tfound = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !found {\n\t\t\textraA = append(extraA, element)\n\t\t}\n\t}\n\n\tfor j := 0; j < bLen; j++ {\n\t\tif visited[j] {\n\t\t\tcontinue\n\t\t}\n\t\textraB = append(extraB, bValue.Index(j).Interface())\n\t}\n\n\treturn\n}\n\nfunc formatListDiff(listA, listB interface{}, extraA, extraB []interface{}) string {\n\tvar msg bytes.Buffer\n\n\tmsg.WriteString(\"elements differ\")\n\tif len(extraA) > 0 {\n\t\tmsg.WriteString(\"\\n\\nextra elements in list 
A:\\n\")\n\t\tmsg.WriteString(spewConfig.Sdump(extraA))\n\t}\n\tif len(extraB) > 0 {\n\t\tmsg.WriteString(\"\\n\\nextra elements in list B:\\n\")\n\t\tmsg.WriteString(spewConfig.Sdump(extraB))\n\t}\n\tmsg.WriteString(\"\\n\\nlistA:\\n\")\n\tmsg.WriteString(spewConfig.Sdump(listA))\n\tmsg.WriteString(\"\\n\\nlistB:\\n\")\n\tmsg.WriteString(spewConfig.Sdump(listB))\n\n\treturn msg.String()\n}\n\n// Condition uses a Comparison to assert a complex condition.\nfunc Condition(t TestingT, comp Comparison, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tresult := comp()\n\tif !result {\n\t\tFail(t, \"Condition failed!\", msgAndArgs...)\n\t}\n\treturn result\n}\n\n// PanicTestFunc defines a func that should be passed to the assert.Panics and assert.NotPanics\n// methods, and represents a simple func that takes no arguments, and returns nothing.\ntype PanicTestFunc func()\n\n// didPanic returns true if the function passed to it panics. Otherwise, it returns false.\nfunc didPanic(f PanicTestFunc) (didPanic bool, message interface{}, stack string) {\n\tdidPanic = true\n\n\tdefer func() {\n\t\tmessage = recover()\n\t\tif didPanic {\n\t\t\tstack = string(debug.Stack())\n\t\t}\n\t}()\n\n\t// call the target function\n\tf()\n\tdidPanic = false\n\n\treturn\n}\n\n// Panics asserts that the code inside the specified PanicTestFunc panics.\n//\n//\tassert.Panics(t, func(){ GoCrazy() })\nfunc Panics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tif funcDidPanic, panicValue, _ := didPanic(f); !funcDidPanic {\n\t\treturn Fail(t, fmt.Sprintf(\"func %#v should panic\\n\\tPanic value:\\t%#v\", f, panicValue), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// PanicsWithValue asserts that the code inside the specified PanicTestFunc panics, and that\n// the recovered panic value equals the expected panic value.\n//\n//\tassert.PanicsWithValue(t, \"crazy error\", func(){ GoCrazy() 
})\nfunc PanicsWithValue(t TestingT, expected interface{}, f PanicTestFunc, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tfuncDidPanic, panicValue, panickedStack := didPanic(f)\n\tif !funcDidPanic {\n\t\treturn Fail(t, fmt.Sprintf(\"func %#v should panic\\n\\tPanic value:\\t%#v\", f, panicValue), msgAndArgs...)\n\t}\n\tif panicValue != expected {\n\t\treturn Fail(t, fmt.Sprintf(\"func %#v should panic with value:\\t%#v\\n\\tPanic value:\\t%#v\\n\\tPanic stack:\\t%s\", f, expected, panicValue, panickedStack), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// PanicsWithError asserts that the code inside the specified PanicTestFunc\n// panics, and that the recovered panic value is an error that satisfies the\n// EqualError comparison.\n//\n//\tassert.PanicsWithError(t, \"crazy error\", func(){ GoCrazy() })\nfunc PanicsWithError(t TestingT, errString string, f PanicTestFunc, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tfuncDidPanic, panicValue, panickedStack := didPanic(f)\n\tif !funcDidPanic {\n\t\treturn Fail(t, fmt.Sprintf(\"func %#v should panic\\n\\tPanic value:\\t%#v\", f, panicValue), msgAndArgs...)\n\t}\n\tpanicErr, ok := panicValue.(error)\n\tif !ok || panicErr.Error() != errString {\n\t\treturn Fail(t, fmt.Sprintf(\"func %#v should panic with error message:\\t%#v\\n\\tPanic value:\\t%#v\\n\\tPanic stack:\\t%s\", f, errString, panicValue, panickedStack), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic.\n//\n//\tassert.NotPanics(t, func(){ RemainCalm() })\nfunc NotPanics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tif funcDidPanic, panicValue, panickedStack := didPanic(f); funcDidPanic {\n\t\treturn Fail(t, fmt.Sprintf(\"func %#v should not panic\\n\\tPanic value:\\t%v\\n\\tPanic stack:\\t%s\", f, panicValue, 
panickedStack), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// WithinDuration asserts that the two times are within duration delta of each other.\n//\n//\tassert.WithinDuration(t, time.Now(), time.Now(), 10*time.Second)\nfunc WithinDuration(t TestingT, expected, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tdt := expected.Sub(actual)\n\tif dt < -delta || dt > delta {\n\t\treturn Fail(t, fmt.Sprintf(\"Max difference between %v and %v allowed is %v, but difference was %v\", expected, actual, delta, dt), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// WithinRange asserts that a time is within a time range (inclusive).\n//\n//\tassert.WithinRange(t, time.Now(), time.Now().Add(-time.Second), time.Now().Add(time.Second))\nfunc WithinRange(t TestingT, actual, start, end time.Time, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tif end.Before(start) {\n\t\treturn Fail(t, \"Start should be before end\", msgAndArgs...)\n\t}\n\n\tif actual.Before(start) {\n\t\treturn Fail(t, fmt.Sprintf(\"Time %v expected to be in time range %v to %v, but is before the range\", actual, start, end), msgAndArgs...)\n\t} else if actual.After(end) {\n\t\treturn Fail(t, fmt.Sprintf(\"Time %v expected to be in time range %v to %v, but is after the range\", actual, start, end), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\nfunc toFloat(x interface{}) (float64, bool) {\n\tvar xf float64\n\txok := true\n\n\tswitch xn := x.(type) {\n\tcase uint:\n\t\txf = float64(xn)\n\tcase uint8:\n\t\txf = float64(xn)\n\tcase uint16:\n\t\txf = float64(xn)\n\tcase uint32:\n\t\txf = float64(xn)\n\tcase uint64:\n\t\txf = float64(xn)\n\tcase int:\n\t\txf = float64(xn)\n\tcase int8:\n\t\txf = float64(xn)\n\tcase int16:\n\t\txf = float64(xn)\n\tcase int32:\n\t\txf = float64(xn)\n\tcase int64:\n\t\txf = float64(xn)\n\tcase float32:\n\t\txf = float64(xn)\n\tcase float64:\n\t\txf = xn\n\tcase 
time.Duration:\n\t\txf = float64(xn)\n\tdefault:\n\t\txok = false\n\t}\n\n\treturn xf, xok\n}\n\n// InDelta asserts that the two numerals are within delta of each other.\n//\n//\tassert.InDelta(t, math.Pi, 22/7.0, 0.01)\nfunc InDelta(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\taf, aok := toFloat(expected)\n\tbf, bok := toFloat(actual)\n\n\tif !aok || !bok {\n\t\treturn Fail(t, \"Parameters must be numerical\", msgAndArgs...)\n\t}\n\n\tif math.IsNaN(af) && math.IsNaN(bf) {\n\t\treturn true\n\t}\n\n\tif math.IsNaN(af) {\n\t\treturn Fail(t, \"Expected must not be NaN\", msgAndArgs...)\n\t}\n\n\tif math.IsNaN(bf) {\n\t\treturn Fail(t, fmt.Sprintf(\"Expected %v with delta %v, but was NaN\", expected, delta), msgAndArgs...)\n\t}\n\n\tdt := af - bf\n\tif dt < -delta || dt > delta {\n\t\treturn Fail(t, fmt.Sprintf(\"Max difference between %v and %v allowed is %v, but difference was %v\", expected, actual, delta, dt), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// InDeltaSlice is the same as InDelta, except it compares two slices.\nfunc InDeltaSlice(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif expected == nil || actual == nil ||\n\t\treflect.TypeOf(actual).Kind() != reflect.Slice ||\n\t\treflect.TypeOf(expected).Kind() != reflect.Slice {\n\t\treturn Fail(t, \"Parameters must be slice\", msgAndArgs...)\n\t}\n\n\tactualSlice := reflect.ValueOf(actual)\n\texpectedSlice := reflect.ValueOf(expected)\n\n\t// guard against an index-out-of-range panic when the slices differ in length\n\tif actualSlice.Len() != expectedSlice.Len() {\n\t\treturn Fail(t, \"Parameters must have the same length\", msgAndArgs...)\n\t}\n\n\tfor i := 0; i < actualSlice.Len(); i++ {\n\t\tresult := InDelta(t, actualSlice.Index(i).Interface(), expectedSlice.Index(i).Interface(), delta, msgAndArgs...)\n\t\tif !result {\n\t\t\treturn result\n\t\t}\n\t}\n\n\treturn true\n}\n\n// InDeltaMapValues is the same as InDelta, but it compares all values between two maps. 
Both maps must have exactly the same keys.\nfunc InDeltaMapValues(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif expected == nil || actual == nil ||\n\t\treflect.TypeOf(actual).Kind() != reflect.Map ||\n\t\treflect.TypeOf(expected).Kind() != reflect.Map {\n\t\treturn Fail(t, \"Arguments must be maps\", msgAndArgs...)\n\t}\n\n\texpectedMap := reflect.ValueOf(expected)\n\tactualMap := reflect.ValueOf(actual)\n\n\tif expectedMap.Len() != actualMap.Len() {\n\t\treturn Fail(t, \"Arguments must have the same number of keys\", msgAndArgs...)\n\t}\n\n\tfor _, k := range expectedMap.MapKeys() {\n\t\tev := expectedMap.MapIndex(k)\n\t\tav := actualMap.MapIndex(k)\n\n\t\tif !ev.IsValid() {\n\t\t\treturn Fail(t, fmt.Sprintf(\"missing key %q in expected map\", k), msgAndArgs...)\n\t\t}\n\n\t\tif !av.IsValid() {\n\t\t\treturn Fail(t, fmt.Sprintf(\"missing key %q in actual map\", k), msgAndArgs...)\n\t\t}\n\n\t\tif !InDelta(\n\t\t\tt,\n\t\t\tev.Interface(),\n\t\t\tav.Interface(),\n\t\t\tdelta,\n\t\t\tmsgAndArgs...,\n\t\t) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\treturn true\n}\n\nfunc calcRelativeError(expected, actual interface{}) (float64, error) {\n\taf, aok := toFloat(expected)\n\tbf, bok := toFloat(actual)\n\tif !aok || !bok {\n\t\treturn 0, fmt.Errorf(\"Parameters must be numerical\")\n\t}\n\tif math.IsNaN(af) && math.IsNaN(bf) {\n\t\treturn 0, nil\n\t}\n\tif math.IsNaN(af) {\n\t\treturn 0, errors.New(\"expected value must not be NaN\")\n\t}\n\tif af == 0 {\n\t\treturn 0, fmt.Errorf(\"expected value must have a value other than zero to calculate the relative error\")\n\t}\n\tif math.IsNaN(bf) {\n\t\treturn 0, errors.New(\"actual value must not be NaN\")\n\t}\n\n\treturn math.Abs(af-bf) / math.Abs(af), nil\n}\n\n// InEpsilon asserts that expected and actual have a relative error less than epsilon\nfunc InEpsilon(t TestingT, expected, actual interface{}, epsilon float64, 
msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif math.IsNaN(epsilon) {\n\t\treturn Fail(t, \"epsilon must not be NaN\", msgAndArgs...)\n\t}\n\tactualEpsilon, err := calcRelativeError(expected, actual)\n\tif err != nil {\n\t\treturn Fail(t, err.Error(), msgAndArgs...)\n\t}\n\tif actualEpsilon > epsilon {\n\t\treturn Fail(t, fmt.Sprintf(\"Relative error is too high: %#v (expected)\\n\"+\n\t\t\t\"        < %#v (actual)\", epsilon, actualEpsilon), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// InEpsilonSlice is the same as InEpsilon, except it compares each value from two slices.\nfunc InEpsilonSlice(t TestingT, expected, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif expected == nil || actual == nil ||\n\t\treflect.TypeOf(actual).Kind() != reflect.Slice ||\n\t\treflect.TypeOf(expected).Kind() != reflect.Slice {\n\t\treturn Fail(t, \"Parameters must be slice\", msgAndArgs...)\n\t}\n\n\tactualSlice := reflect.ValueOf(actual)\n\texpectedSlice := reflect.ValueOf(expected)\n\n\t// guard against an index-out-of-range panic when the slices differ in length\n\tif actualSlice.Len() != expectedSlice.Len() {\n\t\treturn Fail(t, \"Parameters must have the same length\", msgAndArgs...)\n\t}\n\n\tfor i := 0; i < actualSlice.Len(); i++ {\n\t\tresult := InEpsilon(t, actualSlice.Index(i).Interface(), expectedSlice.Index(i).Interface(), epsilon, msgAndArgs...)\n\t\tif !result {\n\t\t\treturn result\n\t\t}\n\t}\n\n\treturn true\n}\n\n/*\n\tErrors\n*/\n\n// NoError asserts that a function returned no error (i.e. `nil`).\n//\n//\t  actualObj, err := SomeFunction()\n//\t  if assert.NoError(t, err) {\n//\t\t   assert.Equal(t, expectedObj, actualObj)\n//\t  }\nfunc NoError(t TestingT, err error, msgAndArgs ...interface{}) bool {\n\tif err != nil {\n\t\tif h, ok := t.(tHelper); ok {\n\t\t\th.Helper()\n\t\t}\n\t\treturn Fail(t, fmt.Sprintf(\"Received unexpected error:\\n%+v\", err), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// Error asserts that a function returned an error (i.e. 
not `nil`).\n//\n//\t  actualObj, err := SomeFunction()\n//\t  if assert.Error(t, err) {\n//\t\t   assert.Equal(t, expectedError, err)\n//\t  }\nfunc Error(t TestingT, err error, msgAndArgs ...interface{}) bool {\n\tif err == nil {\n\t\tif h, ok := t.(tHelper); ok {\n\t\t\th.Helper()\n\t\t}\n\t\treturn Fail(t, \"An error is expected but got nil.\", msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// EqualError asserts that a function returned an error (i.e. not `nil`)\n// and that it is equal to the provided error.\n//\n//\tactualObj, err := SomeFunction()\n//\tassert.EqualError(t, err,  expectedErrorString)\nfunc EqualError(t TestingT, theError error, errString string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif !Error(t, theError, msgAndArgs...) {\n\t\treturn false\n\t}\n\texpected := errString\n\tactual := theError.Error()\n\t// don't need to use deep equals here, we know they are both strings\n\tif expected != actual {\n\t\treturn Fail(t, fmt.Sprintf(\"Error message not equal:\\n\"+\n\t\t\t\"expected: %q\\n\"+\n\t\t\t\"actual  : %q\", expected, actual), msgAndArgs...)\n\t}\n\treturn true\n}\n\n// ErrorContains asserts that a function returned an error (i.e. not `nil`)\n// and that the error contains the specified substring.\n//\n//\tactualObj, err := SomeFunction()\n//\tassert.ErrorContains(t, err,  expectedErrorSubString)\nfunc ErrorContains(t TestingT, theError error, contains string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif !Error(t, theError, msgAndArgs...) 
{\n\t\treturn false\n\t}\n\n\tactual := theError.Error()\n\tif !strings.Contains(actual, contains) {\n\t\treturn Fail(t, fmt.Sprintf(\"Error %#v does not contain %#v\", actual, contains), msgAndArgs...)\n\t}\n\n\treturn true\n}\n\n// matchRegexp return true if a specified regexp matches a string.\nfunc matchRegexp(rx interface{}, str interface{}) bool {\n\n\tvar r *regexp.Regexp\n\tif rr, ok := rx.(*regexp.Regexp); ok {\n\t\tr = rr\n\t} else {\n\t\tr = regexp.MustCompile(fmt.Sprint(rx))\n\t}\n\n\treturn (r.FindStringIndex(fmt.Sprint(str)) != nil)\n\n}\n\n// Regexp asserts that a specified regexp matches a string.\n//\n//\tassert.Regexp(t, regexp.MustCompile(\"start\"), \"it's starting\")\n//\tassert.Regexp(t, \"start...$\", \"it's not starting\")\nfunc Regexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tmatch := matchRegexp(rx, str)\n\n\tif !match {\n\t\tFail(t, fmt.Sprintf(\"Expect \\\"%v\\\" to match \\\"%v\\\"\", str, rx), msgAndArgs...)\n\t}\n\n\treturn match\n}\n\n// NotRegexp asserts that a specified regexp does not match a string.\n//\n//\tassert.NotRegexp(t, regexp.MustCompile(\"starts\"), \"it's starting\")\n//\tassert.NotRegexp(t, \"^start\", \"it's not starting\")\nfunc NotRegexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tmatch := matchRegexp(rx, str)\n\n\tif match {\n\t\tFail(t, fmt.Sprintf(\"Expect \\\"%v\\\" to NOT match \\\"%v\\\"\", str, rx), msgAndArgs...)\n\t}\n\n\treturn !match\n\n}\n\n// Zero asserts that i is the zero value for its type.\nfunc Zero(t TestingT, i interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif i != nil && !reflect.DeepEqual(i, reflect.Zero(reflect.TypeOf(i)).Interface()) {\n\t\treturn Fail(t, fmt.Sprintf(\"Should be zero, but was %v\", i), msgAndArgs...)\n\t}\n\treturn true\n}\n\n// 
NotZero asserts that i is not the zero value for its type.\nfunc NotZero(t TestingT, i interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif i == nil || reflect.DeepEqual(i, reflect.Zero(reflect.TypeOf(i)).Interface()) {\n\t\treturn Fail(t, fmt.Sprintf(\"Should not be zero, but was %v\", i), msgAndArgs...)\n\t}\n\treturn true\n}\n\n// FileExists checks whether a file exists in the given path. It also fails if\n// the path points to a directory or there is an error when trying to check the file.\nfunc FileExists(t TestingT, path string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tinfo, err := os.Lstat(path)\n\tif err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\treturn Fail(t, fmt.Sprintf(\"unable to find file %q\", path), msgAndArgs...)\n\t\t}\n\t\treturn Fail(t, fmt.Sprintf(\"error when running os.Lstat(%q): %s\", path, err), msgAndArgs...)\n\t}\n\tif info.IsDir() {\n\t\treturn Fail(t, fmt.Sprintf(\"%q is a directory\", path), msgAndArgs...)\n\t}\n\treturn true\n}\n\n// NoFileExists checks whether a file does not exist in a given path. It fails\n// if the path points to an existing _file_ only.\nfunc NoFileExists(t TestingT, path string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tinfo, err := os.Lstat(path)\n\tif err != nil {\n\t\treturn true\n\t}\n\tif info.IsDir() {\n\t\treturn true\n\t}\n\treturn Fail(t, fmt.Sprintf(\"file %q exists\", path), msgAndArgs...)\n}\n\n// DirExists checks whether a directory exists in the given path. 
It also fails\n// if the path is a file rather than a directory or there is an error checking whether it exists.\nfunc DirExists(t TestingT, path string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tinfo, err := os.Lstat(path)\n\tif err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\treturn Fail(t, fmt.Sprintf(\"unable to find file %q\", path), msgAndArgs...)\n\t\t}\n\t\treturn Fail(t, fmt.Sprintf(\"error when running os.Lstat(%q): %s\", path, err), msgAndArgs...)\n\t}\n\tif !info.IsDir() {\n\t\treturn Fail(t, fmt.Sprintf(\"%q is a file\", path), msgAndArgs...)\n\t}\n\treturn true\n}\n\n// NoDirExists checks whether a directory does not exist in the given path.\n// It fails if the path points to an existing _directory_ only.\nfunc NoDirExists(t TestingT, path string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tinfo, err := os.Lstat(path)\n\tif err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\treturn true\n\t\t}\n\t\treturn true\n\t}\n\tif !info.IsDir() {\n\t\treturn true\n\t}\n\treturn Fail(t, fmt.Sprintf(\"directory %q exists\", path), msgAndArgs...)\n}\n\n// JSONEq asserts that two JSON strings are equivalent.\n//\n//\tassert.JSONEq(t, `{\"hello\": \"world\", \"foo\": \"bar\"}`, `{\"foo\": \"bar\", \"hello\": \"world\"}`)\nfunc JSONEq(t TestingT, expected string, actual string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tvar expectedJSONAsInterface, actualJSONAsInterface interface{}\n\n\tif err := json.Unmarshal([]byte(expected), &expectedJSONAsInterface); err != nil {\n\t\treturn Fail(t, fmt.Sprintf(\"Expected value ('%s') is not valid json.\\nJSON parsing error: '%s'\", expected, err.Error()), msgAndArgs...)\n\t}\n\n\tif err := json.Unmarshal([]byte(actual), &actualJSONAsInterface); err != nil {\n\t\treturn Fail(t, fmt.Sprintf(\"Input ('%s') needs to be valid json.\\nJSON parsing error: '%s'\", actual, err.Error()), 
msgAndArgs...)\n\t}\n\n\treturn Equal(t, expectedJSONAsInterface, actualJSONAsInterface, msgAndArgs...)\n}\n\n// YAMLEq asserts that two YAML strings are equivalent.\nfunc YAMLEq(t TestingT, expected string, actual string, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tvar expectedYAMLAsInterface, actualYAMLAsInterface interface{}\n\n\tif err := yaml.Unmarshal([]byte(expected), &expectedYAMLAsInterface); err != nil {\n\t\treturn Fail(t, fmt.Sprintf(\"Expected value ('%s') is not valid yaml.\\nYAML parsing error: '%s'\", expected, err.Error()), msgAndArgs...)\n\t}\n\n\tif err := yaml.Unmarshal([]byte(actual), &actualYAMLAsInterface); err != nil {\n\t\treturn Fail(t, fmt.Sprintf(\"Input ('%s') needs to be valid yaml.\\nYAML error: '%s'\", actual, err.Error()), msgAndArgs...)\n\t}\n\n\treturn Equal(t, expectedYAMLAsInterface, actualYAMLAsInterface, msgAndArgs...)\n}\n\nfunc typeAndKind(v interface{}) (reflect.Type, reflect.Kind) {\n\tt := reflect.TypeOf(v)\n\tk := t.Kind()\n\n\tif k == reflect.Ptr {\n\t\tt = t.Elem()\n\t\tk = t.Kind()\n\t}\n\treturn t, k\n}\n\n// diff returns a diff of both values as long as both are of the same type and\n// are a struct, map, slice, array or string. 
Otherwise it returns an empty string.\nfunc diff(expected interface{}, actual interface{}) string {\n\tif expected == nil || actual == nil {\n\t\treturn \"\"\n\t}\n\n\tet, ek := typeAndKind(expected)\n\tat, _ := typeAndKind(actual)\n\n\tif et != at {\n\t\treturn \"\"\n\t}\n\n\tif ek != reflect.Struct && ek != reflect.Map && ek != reflect.Slice && ek != reflect.Array && ek != reflect.String {\n\t\treturn \"\"\n\t}\n\n\tvar e, a string\n\n\tswitch et {\n\tcase reflect.TypeOf(\"\"):\n\t\te = reflect.ValueOf(expected).String()\n\t\ta = reflect.ValueOf(actual).String()\n\tcase reflect.TypeOf(time.Time{}):\n\t\te = spewConfigStringerEnabled.Sdump(expected)\n\t\ta = spewConfigStringerEnabled.Sdump(actual)\n\tdefault:\n\t\te = spewConfig.Sdump(expected)\n\t\ta = spewConfig.Sdump(actual)\n\t}\n\n\tdiff, _ := difflib.GetUnifiedDiffString(difflib.UnifiedDiff{\n\t\tA:        difflib.SplitLines(e),\n\t\tB:        difflib.SplitLines(a),\n\t\tFromFile: \"Expected\",\n\t\tFromDate: \"\",\n\t\tToFile:   \"Actual\",\n\t\tToDate:   \"\",\n\t\tContext:  1,\n\t})\n\n\treturn \"\\n\\nDiff:\\n\" + diff\n}\n\nfunc isFunction(arg interface{}) bool {\n\tif arg == nil {\n\t\treturn false\n\t}\n\treturn reflect.TypeOf(arg).Kind() == reflect.Func\n}\n\nvar spewConfig = spew.ConfigState{\n\tIndent:                  \" \",\n\tDisablePointerAddresses: true,\n\tDisableCapacities:       true,\n\tSortKeys:                true,\n\tDisableMethods:          true,\n\tMaxDepth:                10,\n}\n\nvar spewConfigStringerEnabled = spew.ConfigState{\n\tIndent:                  \" \",\n\tDisablePointerAddresses: true,\n\tDisableCapacities:       true,\n\tSortKeys:                true,\n\tMaxDepth:                10,\n}\n\ntype tHelper interface {\n\tHelper()\n}\n\n// Eventually asserts that given condition will be met in waitFor time,\n// periodically checking target function each tick.\n//\n//\tassert.Eventually(t, func() bool { return true; }, time.Second, 10*time.Millisecond)\nfunc Eventually(t 
TestingT, condition func() bool, waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tch := make(chan bool, 1)\n\n\ttimer := time.NewTimer(waitFor)\n\tdefer timer.Stop()\n\n\tticker := time.NewTicker(tick)\n\tdefer ticker.Stop()\n\n\tfor tick := ticker.C; ; {\n\t\tselect {\n\t\tcase <-timer.C:\n\t\t\treturn Fail(t, \"Condition never satisfied\", msgAndArgs...)\n\t\tcase <-tick:\n\t\t\ttick = nil\n\t\t\tgo func() { ch <- condition() }()\n\t\tcase v := <-ch:\n\t\t\tif v {\n\t\t\t\treturn true\n\t\t\t}\n\t\t\ttick = ticker.C\n\t\t}\n\t}\n}\n\n// CollectT implements the TestingT interface and collects all errors.\ntype CollectT struct {\n\terrors []error\n}\n\n// Errorf collects the error.\nfunc (c *CollectT) Errorf(format string, args ...interface{}) {\n\tc.errors = append(c.errors, fmt.Errorf(format, args...))\n}\n\n// FailNow panics.\nfunc (c *CollectT) FailNow() {\n\tpanic(\"Assertion failed\")\n}\n\n// Reset clears the collected errors.\nfunc (c *CollectT) Reset() {\n\tc.errors = nil\n}\n\n// Copy copies the collected errors to the supplied t.\nfunc (c *CollectT) Copy(t TestingT) {\n\tif tt, ok := t.(tHelper); ok {\n\t\ttt.Helper()\n\t}\n\tfor _, err := range c.errors {\n\t\tt.Errorf(\"%v\", err)\n\t}\n}\n\n// EventuallyWithT asserts that given condition will be met in waitFor time,\n// periodically checking target function each tick. 
In contrast to Eventually,\n// it supplies a CollectT to the condition function, so that the condition\n// function can use the CollectT to call other assertions.\n// The condition is considered \"met\" if no errors are raised in a tick.\n// The supplied CollectT collects all errors from one tick (if there are any).\n// If the condition is not met before waitFor, the collected errors of\n// the last tick are copied to t.\n//\n//\texternalValue := false\n//\tgo func() {\n//\t\ttime.Sleep(8*time.Second)\n//\t\texternalValue = true\n//\t}()\n//\tassert.EventuallyWithT(t, func(c *assert.CollectT) {\n//\t\t// add assertions as needed; any assertion failure will fail the current tick\n//\t\tassert.True(c, externalValue, \"expected 'externalValue' to be true\")\n//\t}, 1*time.Second, 10*time.Second, \"external state has not changed to 'true'; still false\")\nfunc EventuallyWithT(t TestingT, condition func(collect *CollectT), waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tcollect := new(CollectT)\n\tch := make(chan bool, 1)\n\n\ttimer := time.NewTimer(waitFor)\n\tdefer timer.Stop()\n\n\tticker := time.NewTicker(tick)\n\tdefer ticker.Stop()\n\n\tfor tick := ticker.C; ; {\n\t\tselect {\n\t\tcase <-timer.C:\n\t\t\tcollect.Copy(t)\n\t\t\treturn Fail(t, \"Condition never satisfied\", msgAndArgs...)\n\t\tcase <-tick:\n\t\t\ttick = nil\n\t\t\tcollect.Reset()\n\t\t\tgo func() {\n\t\t\t\tcondition(collect)\n\t\t\t\tch <- len(collect.errors) == 0\n\t\t\t}()\n\t\tcase v := <-ch:\n\t\t\tif v {\n\t\t\t\treturn true\n\t\t\t}\n\t\t\ttick = ticker.C\n\t\t}\n\t}\n}\n\n// Never asserts that the given condition doesn't satisfy in waitFor time,\n// periodically checking the target function each tick.\n//\n//\tassert.Never(t, func() bool { return false; }, time.Second, 10*time.Millisecond)\nfunc Never(t TestingT, condition func() bool, waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) 
bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tch := make(chan bool, 1)\n\n\ttimer := time.NewTimer(waitFor)\n\tdefer timer.Stop()\n\n\tticker := time.NewTicker(tick)\n\tdefer ticker.Stop()\n\n\tfor tick := ticker.C; ; {\n\t\tselect {\n\t\tcase <-timer.C:\n\t\t\treturn true\n\t\tcase <-tick:\n\t\t\ttick = nil\n\t\t\tgo func() { ch <- condition() }()\n\t\tcase v := <-ch:\n\t\t\tif v {\n\t\t\t\treturn Fail(t, \"Condition satisfied\", msgAndArgs...)\n\t\t\t}\n\t\t\ttick = ticker.C\n\t\t}\n\t}\n}\n\n// ErrorIs asserts that at least one of the errors in err's chain matches target.\n// This is a wrapper for errors.Is.\nfunc ErrorIs(t TestingT, err, target error, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif errors.Is(err, target) {\n\t\treturn true\n\t}\n\n\tvar expectedText string\n\tif target != nil {\n\t\texpectedText = target.Error()\n\t}\n\n\tchain := buildErrorChainString(err)\n\n\treturn Fail(t, fmt.Sprintf(\"Target error should be in err chain:\\n\"+\n\t\t\"expected: %q\\n\"+\n\t\t\"in chain: %s\", expectedText, chain,\n\t), msgAndArgs...)\n}\n\n// NotErrorIs asserts that none of the errors in err's chain matches target.\n// This is a wrapper for errors.Is.\nfunc NotErrorIs(t TestingT, err, target error, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif !errors.Is(err, target) {\n\t\treturn true\n\t}\n\n\tvar expectedText string\n\tif target != nil {\n\t\texpectedText = target.Error()\n\t}\n\n\tchain := buildErrorChainString(err)\n\n\treturn Fail(t, fmt.Sprintf(\"Target error should not be in err chain:\\n\"+\n\t\t\"found: %q\\n\"+\n\t\t\"in chain: %s\", expectedText, chain,\n\t), msgAndArgs...)\n}\n\n// ErrorAs asserts that at least one of the errors in err's chain matches target, and if so, sets target to that error value.\n// This is a wrapper for errors.As.\nfunc ErrorAs(t TestingT, err error, target interface{}, msgAndArgs ...interface{}) bool 
{\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tif errors.As(err, target) {\n\t\treturn true\n\t}\n\n\tchain := buildErrorChainString(err)\n\n\treturn Fail(t, fmt.Sprintf(\"Should be in error chain:\\n\"+\n\t\t\"expected: %q\\n\"+\n\t\t\"in chain: %s\", target, chain,\n\t), msgAndArgs...)\n}\n\nfunc buildErrorChainString(err error) string {\n\tif err == nil {\n\t\treturn \"\"\n\t}\n\n\te := errors.Unwrap(err)\n\tchain := fmt.Sprintf(\"%q\", err.Error())\n\tfor e != nil {\n\t\tchain += fmt.Sprintf(\"\\n\\t%q\", e.Error())\n\t\te = errors.Unwrap(e)\n\t}\n\treturn chain\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/doc.go",
    "content": "// Package assert provides a set of comprehensive testing tools for use with the normal Go testing system.\n//\n// # Example Usage\n//\n// The following is a complete example using assert in a standard test function:\n//\n//\timport (\n//\t  \"testing\"\n//\t  \"github.com/stretchr/testify/assert\"\n//\t)\n//\n//\tfunc TestSomething(t *testing.T) {\n//\n//\t  var a string = \"Hello\"\n//\t  var b string = \"Hello\"\n//\n//\t  assert.Equal(t, a, b, \"The two words should be the same.\")\n//\n//\t}\n//\n// if you assert many times, use the format below:\n//\n//\timport (\n//\t  \"testing\"\n//\t  \"github.com/stretchr/testify/assert\"\n//\t)\n//\n//\tfunc TestSomething(t *testing.T) {\n//\t  assert := assert.New(t)\n//\n//\t  var a string = \"Hello\"\n//\t  var b string = \"Hello\"\n//\n//\t  assert.Equal(a, b, \"The two words should be the same.\")\n//\t}\n//\n// # Assertions\n//\n// Assertions allow you to easily write test code, and are global funcs in the `assert` package.\n// All assertion functions take, as the first argument, the `*testing.T` object provided by the\n// testing framework. This allows the assertion funcs to write the failings and other details to\n// the correct place.\n//\n// Every assertion function also takes an optional string message as the final argument,\n// allowing custom error messages to be appended to the message the assertion method outputs.\npackage assert\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/errors.go",
    "content": "package assert\n\nimport (\n\t\"errors\"\n)\n\n// AnError is an error instance useful for testing.  If the code does not care\n// about error specifics, and only needs to return the error for example, this\n// error should be used to make the test code more readable.\nvar AnError = errors.New(\"assert.AnError general error for testing\")\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/forward_assertions.go",
    "content": "package assert\n\n// Assertions provides assertion methods around the\n// TestingT interface.\ntype Assertions struct {\n\tt TestingT\n}\n\n// New makes a new Assertions object for the specified TestingT.\nfunc New(t TestingT) *Assertions {\n\treturn &Assertions{\n\t\tt: t,\n\t}\n}\n\n//go:generate sh -c \"cd ../_codegen && go build && cd - && ../_codegen/_codegen -output-package=assert -template=assertion_forward.go.tmpl -include-format-funcs\"\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/assert/http_assertions.go",
    "content": "package assert\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"net/url\"\n\t\"strings\"\n)\n\n// httpCode is a helper that returns HTTP code of the response. It returns -1 and\n// an error if building a new request fails.\nfunc httpCode(handler http.HandlerFunc, method, url string, values url.Values) (int, error) {\n\tw := httptest.NewRecorder()\n\treq, err := http.NewRequest(method, url, nil)\n\tif err != nil {\n\t\treturn -1, err\n\t}\n\treq.URL.RawQuery = values.Encode()\n\thandler(w, req)\n\treturn w.Code, nil\n}\n\n// HTTPSuccess asserts that a specified handler returns a success status code.\n//\n//\tassert.HTTPSuccess(t, myHandler, \"POST\", \"http://www.google.com\", nil)\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPSuccess(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tcode, err := httpCode(handler, method, url, values)\n\tif err != nil {\n\t\tFail(t, fmt.Sprintf(\"Failed to build test request, got error: %s\", err))\n\t}\n\n\tisSuccessCode := code >= http.StatusOK && code <= http.StatusPartialContent\n\tif !isSuccessCode {\n\t\tFail(t, fmt.Sprintf(\"Expected HTTP success status code for %q but received %d\", url+\"?\"+values.Encode(), code))\n\t}\n\n\treturn isSuccessCode\n}\n\n// HTTPRedirect asserts that a specified handler returns a redirect status code.\n//\n//\tassert.HTTPRedirect(t, myHandler, \"GET\", \"/a/b/c\", url.Values{\"a\": []string{\"b\", \"c\"}}\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPRedirect(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tcode, err := httpCode(handler, method, url, values)\n\tif err != nil {\n\t\tFail(t, fmt.Sprintf(\"Failed to build test request, got error: 
%s\", err))\n\t}\n\n\tisRedirectCode := code >= http.StatusMultipleChoices && code <= http.StatusTemporaryRedirect\n\tif !isRedirectCode {\n\t\tFail(t, fmt.Sprintf(\"Expected HTTP redirect status code for %q but received %d\", url+\"?\"+values.Encode(), code))\n\t}\n\n\treturn isRedirectCode\n}\n\n// HTTPError asserts that a specified handler returns an error status code.\n//\n//\tassert.HTTPError(t, myHandler, \"POST\", \"/a/b/c\", url.Values{\"a\": []string{\"b\", \"c\"}}\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPError(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tcode, err := httpCode(handler, method, url, values)\n\tif err != nil {\n\t\tFail(t, fmt.Sprintf(\"Failed to build test request, got error: %s\", err))\n\t}\n\n\tisErrorCode := code >= http.StatusBadRequest\n\tif !isErrorCode {\n\t\tFail(t, fmt.Sprintf(\"Expected HTTP error status code for %q but received %d\", url+\"?\"+values.Encode(), code))\n\t}\n\n\treturn isErrorCode\n}\n\n// HTTPStatusCode asserts that a specified handler returns a specified status code.\n//\n//\tassert.HTTPStatusCode(t, myHandler, \"GET\", \"/notImplemented\", nil, 501)\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPStatusCode(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, statuscode int, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tcode, err := httpCode(handler, method, url, values)\n\tif err != nil {\n\t\tFail(t, fmt.Sprintf(\"Failed to build test request, got error: %s\", err))\n\t}\n\n\tsuccessful := code == statuscode\n\tif !successful {\n\t\tFail(t, fmt.Sprintf(\"Expected HTTP status code %d for %q but received %d\", statuscode, url+\"?\"+values.Encode(), code))\n\t}\n\n\treturn successful\n}\n\n// HTTPBody is a helper that returns HTTP body of the 
response. It returns\n// empty string if building a new request fails.\nfunc HTTPBody(handler http.HandlerFunc, method, url string, values url.Values) string {\n\tw := httptest.NewRecorder()\n\treq, err := http.NewRequest(method, url+\"?\"+values.Encode(), nil)\n\tif err != nil {\n\t\treturn \"\"\n\t}\n\thandler(w, req)\n\treturn w.Body.String()\n}\n\n// HTTPBodyContains asserts that a specified handler returns a\n// body that contains a string.\n//\n//\tassert.HTTPBodyContains(t, myHandler, \"GET\", \"www.google.com\", nil, \"I'm Feeling Lucky\")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPBodyContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tbody := HTTPBody(handler, method, url, values)\n\n\tcontains := strings.Contains(body, fmt.Sprint(str))\n\tif !contains {\n\t\tFail(t, fmt.Sprintf(\"Expected response body for \\\"%s\\\" to contain \\\"%s\\\" but found \\\"%s\\\"\", url+\"?\"+values.Encode(), str, body))\n\t}\n\n\treturn contains\n}\n\n// HTTPBodyNotContains asserts that a specified handler returns a\n// body that does not contain a string.\n//\n//\tassert.HTTPBodyNotContains(t, myHandler, \"GET\", \"www.google.com\", nil, \"I'm Feeling Lucky\")\n//\n// Returns whether the assertion was successful (true) or not (false).\nfunc HTTPBodyNotContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tbody := HTTPBody(handler, method, url, values)\n\n\tcontains := strings.Contains(body, fmt.Sprint(str))\n\tif contains {\n\t\tFail(t, fmt.Sprintf(\"Expected response body for \\\"%s\\\" to NOT contain \\\"%s\\\" but found \\\"%s\\\"\", url+\"?\"+values.Encode(), str, body))\n\t}\n\n\treturn !contains\n}\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/mock/doc.go",
    "content": "// Package mock provides a system by which it is possible to mock your objects\n// and verify calls are happening as expected.\n//\n// # Example Usage\n//\n// The mock package provides an object, Mock, that tracks activity on another object.  It is usually\n// embedded into a test object as shown below:\n//\n//\ttype MyTestObject struct {\n//\t  // add a Mock object instance\n//\t  mock.Mock\n//\n//\t  // other fields go here as normal\n//\t}\n//\n// When implementing the methods of an interface, you wire your functions up\n// to call the Mock.Called(args...) method, and return the appropriate values.\n//\n// For example, to mock a method that saves the name and age of a person and returns\n// the year of their birth or an error, you might write this:\n//\n//\tfunc (o *MyTestObject) SavePersonDetails(firstname, lastname string, age int) (int, error) {\n//\t  args := o.Called(firstname, lastname, age)\n//\t  return args.Int(0), args.Error(1)\n//\t}\n//\n// The Int, Error and Bool methods are examples of strongly typed getters that take the argument\n// index position. Given this argument list:\n//\n//\t(12, true, \"Something\")\n//\n// You could read them out strongly typed like this:\n//\n//\targs.Int(0)\n//\targs.Bool(1)\n//\targs.String(2)\n//\n// For objects of your own type, use the generic Arguments.Get(index) method and make a type assertion:\n//\n//\treturn args.Get(0).(*MyObject), args.Get(1).(*AnotherObjectOfMine)\n//\n// This may cause a panic if the object you are getting is nil (the type assertion will fail), in those\n// cases you should check for nil first.\npackage mock\n"
  },
  {
    "path": "vendor/github.com/stretchr/testify/mock/mock.go",
    "content": "package mock\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"path\"\n\t\"reflect\"\n\t\"regexp\"\n\t\"runtime\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/davecgh/go-spew/spew\"\n\t\"github.com/pmezard/go-difflib/difflib\"\n\t\"github.com/stretchr/objx\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\n// TestingT is an interface wrapper around *testing.T\ntype TestingT interface {\n\tLogf(format string, args ...interface{})\n\tErrorf(format string, args ...interface{})\n\tFailNow()\n}\n\n/*\n\tCall\n*/\n\n// Call represents a method call and is used for setting expectations,\n// as well as recording activity.\ntype Call struct {\n\tParent *Mock\n\n\t// The name of the method that was or will be called.\n\tMethod string\n\n\t// Holds the arguments of the method.\n\tArguments Arguments\n\n\t// Holds the arguments that should be returned when\n\t// this method is called.\n\tReturnArguments Arguments\n\n\t// Holds the caller info for the On() call\n\tcallerInfo []string\n\n\t// The number of times to return the return arguments when setting\n\t// expectations. 0 means to always return the value.\n\tRepeatability int\n\n\t// Amount of times this call has been called\n\ttotalCalls int\n\n\t// Call to this method can be optional\n\toptional bool\n\n\t// Holds a channel that will be used to block the Return until it either\n\t// receives a message or is closed. nil means it returns immediately.\n\tWaitFor <-chan time.Time\n\n\twaitTime time.Duration\n\n\t// Holds a handler used to manipulate arguments content that are passed by\n\t// reference. 
It's useful when mocking methods such as unmarshalers or\n\t// decoders.\n\tRunFn func(Arguments)\n\n\t// PanicMsg holds msg to be used to mock panic on the function call\n\t//  if the PanicMsg is set to a non nil string the function call will panic\n\t// irrespective of other settings\n\tPanicMsg *string\n\n\t// Calls which must be satisfied before this call can be\n\trequires []*Call\n}\n\nfunc newCall(parent *Mock, methodName string, callerInfo []string, methodArguments ...interface{}) *Call {\n\treturn &Call{\n\t\tParent:          parent,\n\t\tMethod:          methodName,\n\t\tArguments:       methodArguments,\n\t\tReturnArguments: make([]interface{}, 0),\n\t\tcallerInfo:      callerInfo,\n\t\tRepeatability:   0,\n\t\tWaitFor:         nil,\n\t\tRunFn:           nil,\n\t\tPanicMsg:        nil,\n\t}\n}\n\nfunc (c *Call) lock() {\n\tc.Parent.mutex.Lock()\n}\n\nfunc (c *Call) unlock() {\n\tc.Parent.mutex.Unlock()\n}\n\n// Return specifies the return arguments for the expectation.\n//\n//\tMock.On(\"DoSomething\").Return(errors.New(\"failed\"))\nfunc (c *Call) Return(returnArguments ...interface{}) *Call {\n\tc.lock()\n\tdefer c.unlock()\n\n\tc.ReturnArguments = returnArguments\n\n\treturn c\n}\n\n// Panic specifies if the function call should fail and the panic message\n//\n//\tMock.On(\"DoSomething\").Panic(\"test panic\")\nfunc (c *Call) Panic(msg string) *Call {\n\tc.lock()\n\tdefer c.unlock()\n\n\tc.PanicMsg = &msg\n\n\treturn c\n}\n\n// Once indicates that the mock should only return the value once.\n//\n//\tMock.On(\"MyMethod\", arg1, arg2).Return(returnArg1, returnArg2).Once()\nfunc (c *Call) Once() *Call {\n\treturn c.Times(1)\n}\n\n// Twice indicates that the mock should only return the value twice.\n//\n//\tMock.On(\"MyMethod\", arg1, arg2).Return(returnArg1, returnArg2).Twice()\nfunc (c *Call) Twice() *Call {\n\treturn c.Times(2)\n}\n\n// Times indicates that the mock should only return the indicated number\n// of 
times.\n//\n//\tMock.On(\"MyMethod\", arg1, arg2).Return(returnArg1, returnArg2).Times(5)\nfunc (c *Call) Times(i int) *Call {\n\tc.lock()\n\tdefer c.unlock()\n\tc.Repeatability = i\n\treturn c\n}\n\n// WaitUntil sets the channel that will block the mock's return until its closed\n// or a message is received.\n//\n//\tMock.On(\"MyMethod\", arg1, arg2).WaitUntil(time.After(time.Second))\nfunc (c *Call) WaitUntil(w <-chan time.Time) *Call {\n\tc.lock()\n\tdefer c.unlock()\n\tc.WaitFor = w\n\treturn c\n}\n\n// After sets how long to block until the call returns\n//\n//\tMock.On(\"MyMethod\", arg1, arg2).After(time.Second)\nfunc (c *Call) After(d time.Duration) *Call {\n\tc.lock()\n\tdefer c.unlock()\n\tc.waitTime = d\n\treturn c\n}\n\n// Run sets a handler to be called before returning. It can be used when\n// mocking a method (such as an unmarshaler) that takes a pointer to a struct and\n// sets properties in such struct\n//\n//\tMock.On(\"Unmarshal\", AnythingOfType(\"*map[string]interface{}\")).Return().Run(func(args Arguments) {\n//\t\targ := args.Get(0).(*map[string]interface{})\n//\t\targ[\"foo\"] = \"bar\"\n//\t})\nfunc (c *Call) Run(fn func(args Arguments)) *Call {\n\tc.lock()\n\tdefer c.unlock()\n\tc.RunFn = fn\n\treturn c\n}\n\n// Maybe allows the method call to be optional. Not calling an optional method\n// will not cause an error while asserting expectations\nfunc (c *Call) Maybe() *Call {\n\tc.lock()\n\tdefer c.unlock()\n\tc.optional = true\n\treturn c\n}\n\n// On chains a new expectation description onto the mocked interface. 
This\n// allows syntax like.\n//\n//\tMock.\n//\t   On(\"MyMethod\", 1).Return(nil).\n//\t   On(\"MyOtherMethod\", 'a', 'b', 'c').Return(errors.New(\"Some Error\"))\n//\n//go:noinline\nfunc (c *Call) On(methodName string, arguments ...interface{}) *Call {\n\treturn c.Parent.On(methodName, arguments...)\n}\n\n// Unset removes a mock handler from being called.\n//\n//\ttest.On(\"func\", mock.Anything).Unset()\nfunc (c *Call) Unset() *Call {\n\tvar unlockOnce sync.Once\n\n\tfor _, arg := range c.Arguments {\n\t\tif v := reflect.ValueOf(arg); v.Kind() == reflect.Func {\n\t\t\tpanic(fmt.Sprintf(\"cannot use Func in expectations. Use mock.AnythingOfType(\\\"%T\\\")\", arg))\n\t\t}\n\t}\n\n\tc.lock()\n\tdefer unlockOnce.Do(c.unlock)\n\n\tfoundMatchingCall := false\n\n\t// in-place filter slice for calls to be removed - iterate from 0'th to last skipping unnecessary ones\n\tvar index int // write index\n\tfor _, call := range c.Parent.ExpectedCalls {\n\t\tif call.Method == c.Method {\n\t\t\t_, diffCount := call.Arguments.Diff(c.Arguments)\n\t\t\tif diffCount == 0 {\n\t\t\t\tfoundMatchingCall = true\n\t\t\t\t// Remove from ExpectedCalls - just skip it\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t\tc.Parent.ExpectedCalls[index] = call\n\t\tindex++\n\t}\n\t// trim slice up to last copied index\n\tc.Parent.ExpectedCalls = c.Parent.ExpectedCalls[:index]\n\n\tif !foundMatchingCall {\n\t\tunlockOnce.Do(c.unlock)\n\t\tc.Parent.fail(\"\\n\\nmock: Could not find expected call\\n-----------------------------\\n\\n%s\\n\\n\",\n\t\t\tcallString(c.Method, c.Arguments, true),\n\t\t)\n\t}\n\n\treturn c\n}\n\n// NotBefore indicates that the mock should only be called after the referenced\n// calls have been called as expected. 
The referenced calls may be from the\n// same mock instance and/or other mock instances.\n//\n//\tMock.On(\"Do\").Return(nil).NotBefore(\n//\t    Mock.On(\"Init\").Return(nil)\n//\t)\nfunc (c *Call) NotBefore(calls ...*Call) *Call {\n\tc.lock()\n\tdefer c.unlock()\n\n\tfor _, call := range calls {\n\t\tif call.Parent == nil {\n\t\t\tpanic(\"not before calls must be created with Mock.On()\")\n\t\t}\n\t}\n\n\tc.requires = append(c.requires, calls...)\n\treturn c\n}\n\n// Mock is the workhorse used to track activity on another object.\n// For an example of its usage, refer to the \"Example Usage\" section at the top\n// of this document.\ntype Mock struct {\n\t// Represents the calls that are expected of\n\t// an object.\n\tExpectedCalls []*Call\n\n\t// Holds the calls that were made to this mocked object.\n\tCalls []Call\n\n\t// test is an optional variable that holds the test struct, to be used when an\n\t// invalid mock call was made.\n\ttest TestingT\n\n\t// TestData holds any data that might be useful for testing.  Testify ignores\n\t// this data completely allowing you to do whatever you like with it.\n\ttestData objx.Map\n\n\tmutex sync.Mutex\n}\n\n// String provides a %v format string for Mock.\n// Note: this is used implicitly by Arguments.Diff if a Mock is passed.\n// It exists because go's default %v formatting traverses the struct\n// without acquiring the mutex, which is detected by go test -race.\nfunc (m *Mock) String() string {\n\treturn fmt.Sprintf(\"%[1]T<%[1]p>\", m)\n}\n\n// TestData holds any data that might be useful for testing.  
Testify ignores\n// this data completely allowing you to do whatever you like with it.\nfunc (m *Mock) TestData() objx.Map {\n\tif m.testData == nil {\n\t\tm.testData = make(objx.Map)\n\t}\n\n\treturn m.testData\n}\n\n/*\n\tSetting expectations\n*/\n\n// Test sets the test struct variable of the mock object\nfunc (m *Mock) Test(t TestingT) {\n\tm.mutex.Lock()\n\tdefer m.mutex.Unlock()\n\tm.test = t\n}\n\n// fail fails the current test with the given formatted format and args.\n// In case that a test was defined, it uses the test APIs for failing a test,\n// otherwise it uses panic.\nfunc (m *Mock) fail(format string, args ...interface{}) {\n\tm.mutex.Lock()\n\tdefer m.mutex.Unlock()\n\n\tif m.test == nil {\n\t\tpanic(fmt.Sprintf(format, args...))\n\t}\n\tm.test.Errorf(format, args...)\n\tm.test.FailNow()\n}\n\n// On starts a description of an expectation of the specified method\n// being called.\n//\n//\tMock.On(\"MyMethod\", arg1, arg2)\nfunc (m *Mock) On(methodName string, arguments ...interface{}) *Call {\n\tfor _, arg := range arguments {\n\t\tif v := reflect.ValueOf(arg); v.Kind() == reflect.Func {\n\t\t\tpanic(fmt.Sprintf(\"cannot use Func in expectations. 
Use mock.AnythingOfType(\\\"%T\\\")\", arg))\n\t\t}\n\t}\n\n\tm.mutex.Lock()\n\tdefer m.mutex.Unlock()\n\tc := newCall(m, methodName, assert.CallerInfo(), arguments...)\n\tm.ExpectedCalls = append(m.ExpectedCalls, c)\n\treturn c\n}\n\n/*\n\tRecording and responding to activity\n*/\n\nfunc (m *Mock) findExpectedCall(method string, arguments ...interface{}) (int, *Call) {\n\tvar expectedCall *Call\n\n\tfor i, call := range m.ExpectedCalls {\n\t\tif call.Method == method {\n\t\t\t_, diffCount := call.Arguments.Diff(arguments)\n\t\t\tif diffCount == 0 {\n\t\t\t\texpectedCall = call\n\t\t\t\tif call.Repeatability > -1 {\n\t\t\t\t\treturn i, call\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn -1, expectedCall\n}\n\ntype matchCandidate struct {\n\tcall      *Call\n\tmismatch  string\n\tdiffCount int\n}\n\nfunc (c matchCandidate) isBetterMatchThan(other matchCandidate) bool {\n\tif c.call == nil {\n\t\treturn false\n\t}\n\tif other.call == nil {\n\t\treturn true\n\t}\n\n\tif c.diffCount > other.diffCount {\n\t\treturn false\n\t}\n\tif c.diffCount < other.diffCount {\n\t\treturn true\n\t}\n\n\tif c.call.Repeatability > 0 && other.call.Repeatability <= 0 {\n\t\treturn true\n\t}\n\treturn false\n}\n\nfunc (m *Mock) findClosestCall(method string, arguments ...interface{}) (*Call, string) {\n\tvar bestMatch matchCandidate\n\n\tfor _, call := range m.expectedCalls() {\n\t\tif call.Method == method {\n\n\t\t\terrInfo, tempDiffCount := call.Arguments.Diff(arguments)\n\t\t\ttempCandidate := matchCandidate{\n\t\t\t\tcall:      call,\n\t\t\t\tmismatch:  errInfo,\n\t\t\t\tdiffCount: tempDiffCount,\n\t\t\t}\n\t\t\tif tempCandidate.isBetterMatchThan(bestMatch) {\n\t\t\t\tbestMatch = tempCandidate\n\t\t\t}\n\t\t}\n\t}\n\n\treturn bestMatch.call, bestMatch.mismatch\n}\n\nfunc callString(method string, arguments Arguments, includeArgumentValues bool) string {\n\tvar argValsString string\n\tif includeArgumentValues {\n\t\tvar argVals []string\n\t\tfor argIndex, arg := range arguments 
{\n\t\t\tif _, ok := arg.(*FunctionalOptionsArgument); ok {\n\t\t\t\targVals = append(argVals, fmt.Sprintf(\"%d: %s\", argIndex, arg))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\targVals = append(argVals, fmt.Sprintf(\"%d: %#v\", argIndex, arg))\n\t\t}\n\t\targValsString = fmt.Sprintf(\"\\n\\t\\t%s\", strings.Join(argVals, \"\\n\\t\\t\"))\n\t}\n\n\treturn fmt.Sprintf(\"%s(%s)%s\", method, arguments.String(), argValsString)\n}\n\n// Called tells the mock object that a method has been called, and gets an array\n// of arguments to return.  Panics if the call is unexpected (i.e. not preceded by\n// appropriate .On .Return() calls)\n// If Call.WaitFor is set, blocks until the channel is closed or receives a message.\nfunc (m *Mock) Called(arguments ...interface{}) Arguments {\n\t// get the calling function's name\n\tpc, _, _, ok := runtime.Caller(1)\n\tif !ok {\n\t\tpanic(\"Couldn't get the caller information\")\n\t}\n\tfunctionPath := runtime.FuncForPC(pc).Name()\n\t// Next four lines are required to use GCCGO function naming conventions.\n\t// For Ex:  github_com_docker_libkv_store_mock.WatchTree.pN39_github_com_docker_libkv_store_mock.Mock\n\t// uses interface information unlike golang github.com/docker/libkv/store/mock.(*Mock).WatchTree\n\t// With GCCGO we need to remove interface information starting from pN<dd>.\n\tre := regexp.MustCompile(\"\\\\.pN\\\\d+_\")\n\tif re.MatchString(functionPath) {\n\t\tfunctionPath = re.Split(functionPath, -1)[0]\n\t}\n\tparts := strings.Split(functionPath, \".\")\n\tfunctionName := parts[len(parts)-1]\n\treturn m.MethodCalled(functionName, arguments...)\n}\n\n// MethodCalled tells the mock object that the given method has been called, and gets\n// an array of arguments to return. Panics if the call is unexpected (i.e. 
not preceded\n// by appropriate .On .Return() calls)\n// If Call.WaitFor is set, blocks until the channel is closed or receives a message.\nfunc (m *Mock) MethodCalled(methodName string, arguments ...interface{}) Arguments {\n\tm.mutex.Lock()\n\t// TODO: could combine expected and closes in single loop\n\tfound, call := m.findExpectedCall(methodName, arguments...)\n\n\tif found < 0 {\n\t\t// expected call found but it has already been called with repeatable times\n\t\tif call != nil {\n\t\t\tm.mutex.Unlock()\n\t\t\tm.fail(\"\\nassert: mock: The method has been called over %d times.\\n\\tEither do one more Mock.On(\\\"%s\\\").Return(...), or remove extra call.\\n\\tThis call was unexpected:\\n\\t\\t%s\\n\\tat: %s\", call.totalCalls, methodName, callString(methodName, arguments, true), assert.CallerInfo())\n\t\t}\n\t\t// we have to fail here - because we don't know what to do\n\t\t// as the return arguments.  This is because:\n\t\t//\n\t\t//   a) this is a totally unexpected call to this method,\n\t\t//   b) the arguments are not what was expected, or\n\t\t//   c) the developer has forgotten to add an accompanying On...Return pair.\n\t\tclosestCall, mismatch := m.findClosestCall(methodName, arguments...)\n\t\tm.mutex.Unlock()\n\n\t\tif closestCall != nil {\n\t\t\tm.fail(\"\\n\\nmock: Unexpected Method Call\\n-----------------------------\\n\\n%s\\n\\nThe closest call I have is: \\n\\n%s\\n\\n%s\\nDiff: %s\",\n\t\t\t\tcallString(methodName, arguments, true),\n\t\t\t\tcallString(methodName, closestCall.Arguments, true),\n\t\t\t\tdiffArguments(closestCall.Arguments, arguments),\n\t\t\t\tstrings.TrimSpace(mismatch),\n\t\t\t)\n\t\t} else {\n\t\t\tm.fail(\"\\nassert: mock: I don't know what to return because the method call was unexpected.\\n\\tEither do Mock.On(\\\"%s\\\").Return(...) 
first, or remove the %s() call.\\n\\tThis method was unexpected:\\n\\t\\t%s\\n\\tat: %s\", methodName, methodName, callString(methodName, arguments, true), assert.CallerInfo())\n\t\t}\n\t}\n\n\tfor _, requirement := range call.requires {\n\t\tif satisfied, _ := requirement.Parent.checkExpectation(requirement); !satisfied {\n\t\t\tm.mutex.Unlock()\n\t\t\tm.fail(\"mock: Unexpected Method Call\\n-----------------------------\\n\\n%s\\n\\nMust not be called before%s:\\n\\n%s\",\n\t\t\t\tcallString(call.Method, call.Arguments, true),\n\t\t\t\tfunc() (s string) {\n\t\t\t\t\tif requirement.totalCalls > 0 {\n\t\t\t\t\t\ts = \" another call of\"\n\t\t\t\t\t}\n\t\t\t\t\tif call.Parent != requirement.Parent {\n\t\t\t\t\t\ts += \" method from another mock instance\"\n\t\t\t\t\t}\n\t\t\t\t\treturn\n\t\t\t\t}(),\n\t\t\t\tcallString(requirement.Method, requirement.Arguments, true),\n\t\t\t)\n\t\t}\n\t}\n\n\tif call.Repeatability == 1 {\n\t\tcall.Repeatability = -1\n\t} else if call.Repeatability > 1 {\n\t\tcall.Repeatability--\n\t}\n\tcall.totalCalls++\n\n\t// add the call\n\tm.Calls = append(m.Calls, *newCall(m, methodName, assert.CallerInfo(), arguments...))\n\tm.mutex.Unlock()\n\n\t// block if specified\n\tif call.WaitFor != nil {\n\t\t<-call.WaitFor\n\t} else {\n\t\ttime.Sleep(call.waitTime)\n\t}\n\n\tm.mutex.Lock()\n\tpanicMsg := call.PanicMsg\n\tm.mutex.Unlock()\n\tif panicMsg != nil {\n\t\tpanic(*panicMsg)\n\t}\n\n\tm.mutex.Lock()\n\trunFn := call.RunFn\n\tm.mutex.Unlock()\n\n\tif runFn != nil {\n\t\trunFn(arguments)\n\t}\n\n\tm.mutex.Lock()\n\treturnArgs := call.ReturnArguments\n\tm.mutex.Unlock()\n\n\treturn returnArgs\n}\n\n/*\n\tAssertions\n*/\n\ntype assertExpectationser interface {\n\tAssertExpectations(TestingT) bool\n}\n\n// AssertExpectationsForObjects asserts that everything specified with On and Return\n// of the specified objects was in fact called as expected.\n//\n// Calls may have occurred in any order.\nfunc AssertExpectationsForObjects(t TestingT, 
testObjects ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tfor _, obj := range testObjects {\n\t\tif m, ok := obj.(*Mock); ok {\n\t\t\tt.Logf(\"Deprecated mock.AssertExpectationsForObjects(myMock.Mock) use mock.AssertExpectationsForObjects(myMock)\")\n\t\t\tobj = m\n\t\t}\n\t\tm := obj.(assertExpectationser)\n\t\tif !m.AssertExpectations(t) {\n\t\t\tt.Logf(\"Expectations didn't match for Mock: %+v\", reflect.TypeOf(m))\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\n// AssertExpectations asserts that everything specified with On and Return was\n// in fact called as expected.  Calls may have occurred in any order.\nfunc (m *Mock) AssertExpectations(t TestingT) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\tm.mutex.Lock()\n\tdefer m.mutex.Unlock()\n\tvar failedExpectations int\n\n\t// iterate through each expectation\n\texpectedCalls := m.expectedCalls()\n\tfor _, expectedCall := range expectedCalls {\n\t\tsatisfied, reason := m.checkExpectation(expectedCall)\n\t\tif !satisfied {\n\t\t\tfailedExpectations++\n\t\t}\n\t\tt.Logf(reason)\n\t}\n\n\tif failedExpectations != 0 {\n\t\tt.Errorf(\"FAIL: %d out of %d expectation(s) were met.\\n\\tThe code you are testing needs to make %d more call(s).\\n\\tat: %s\", len(expectedCalls)-failedExpectations, len(expectedCalls), failedExpectations, assert.CallerInfo())\n\t}\n\n\treturn failedExpectations == 0\n}\n\nfunc (m *Mock) checkExpectation(call *Call) (bool, string) {\n\tif !call.optional && !m.methodWasCalled(call.Method, call.Arguments) && call.totalCalls == 0 {\n\t\treturn false, fmt.Sprintf(\"FAIL:\\t%s(%s)\\n\\t\\tat: %s\", call.Method, call.Arguments.String(), call.callerInfo)\n\t}\n\tif call.Repeatability > 0 {\n\t\treturn false, fmt.Sprintf(\"FAIL:\\t%s(%s)\\n\\t\\tat: %s\", call.Method, call.Arguments.String(), call.callerInfo)\n\t}\n\treturn true, fmt.Sprintf(\"PASS:\\t%s(%s)\", call.Method, call.Arguments.String())\n}\n\n// AssertNumberOfCalls asserts that 
the method was called expectedCalls times.\nfunc (m *Mock) AssertNumberOfCalls(t TestingT, methodName string, expectedCalls int) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tm.mutex.Lock()\n\tdefer m.mutex.Unlock()\n\tvar actualCalls int\n\tfor _, call := range m.calls() {\n\t\tif call.Method == methodName {\n\t\t\tactualCalls++\n\t\t}\n\t}\n\treturn assert.Equal(t, expectedCalls, actualCalls, fmt.Sprintf(\"Expected number of calls (%d) does not match the actual number of calls (%d).\", expectedCalls, actualCalls))\n}\n\n// AssertCalled asserts that the method was called.\n// It can produce a false result when an argument is a pointer type and the underlying value changed after calling the mocked method.\nfunc (m *Mock) AssertCalled(t TestingT, methodName string, arguments ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tm.mutex.Lock()\n\tdefer m.mutex.Unlock()\n\tif !m.methodWasCalled(methodName, arguments) {\n\t\tvar calledWithArgs []string\n\t\tfor _, call := range m.calls() {\n\t\t\tcalledWithArgs = append(calledWithArgs, fmt.Sprintf(\"%v\", call.Arguments))\n\t\t}\n\t\tif len(calledWithArgs) == 0 {\n\t\t\treturn assert.Fail(t, \"Should have called with given arguments\",\n\t\t\t\tfmt.Sprintf(\"Expected %q to have been called with:\\n%v\\nbut no actual calls happened\", methodName, arguments))\n\t\t}\n\t\treturn assert.Fail(t, \"Should have called with given arguments\",\n\t\t\tfmt.Sprintf(\"Expected %q to have been called with:\\n%v\\nbut actual calls were:\\n        %v\", methodName, arguments, strings.Join(calledWithArgs, \"\\n\")))\n\t}\n\treturn true\n}\n\n// AssertNotCalled asserts that the method was not called.\n// It can produce a false result when an argument is a pointer type and the underlying value changed after calling the mocked method.\nfunc (m *Mock) AssertNotCalled(t TestingT, methodName string, arguments ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok 
{\n\t\th.Helper()\n\t}\n\tm.mutex.Lock()\n\tdefer m.mutex.Unlock()\n\tif m.methodWasCalled(methodName, arguments) {\n\t\treturn assert.Fail(t, \"Should not have called with given arguments\",\n\t\t\tfmt.Sprintf(\"Expected %q to not have been called with:\\n%v\\nbut actually it was.\", methodName, arguments))\n\t}\n\treturn true\n}\n\n// IsMethodCallable checking that the method can be called\n// If the method was called more than `Repeatability` return false\nfunc (m *Mock) IsMethodCallable(t TestingT, methodName string, arguments ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\tm.mutex.Lock()\n\tdefer m.mutex.Unlock()\n\n\tfor _, v := range m.ExpectedCalls {\n\t\tif v.Method != methodName {\n\t\t\tcontinue\n\t\t}\n\t\tif len(arguments) != len(v.Arguments) {\n\t\t\tcontinue\n\t\t}\n\t\tif v.Repeatability < v.totalCalls {\n\t\t\tcontinue\n\t\t}\n\t\tif isArgsEqual(v.Arguments, arguments) {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n// isArgsEqual compares arguments\nfunc isArgsEqual(expected Arguments, args []interface{}) bool {\n\tif len(expected) != len(args) {\n\t\treturn false\n\t}\n\tfor i, v := range args {\n\t\tif !reflect.DeepEqual(expected[i], v) {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\nfunc (m *Mock) methodWasCalled(methodName string, expected []interface{}) bool {\n\tfor _, call := range m.calls() {\n\t\tif call.Method == methodName {\n\n\t\t\t_, differences := Arguments(expected).Diff(call.Arguments)\n\n\t\t\tif differences == 0 {\n\t\t\t\t// found the expected call\n\t\t\t\treturn true\n\t\t\t}\n\n\t\t}\n\t}\n\t// we didn't find the expected call\n\treturn false\n}\n\nfunc (m *Mock) expectedCalls() []*Call {\n\treturn append([]*Call{}, m.ExpectedCalls...)\n}\n\nfunc (m *Mock) calls() []Call {\n\treturn append([]Call{}, m.Calls...)\n}\n\n/*\n\tArguments\n*/\n\n// Arguments holds an array of method arguments or return values.\ntype Arguments []interface{}\n\nconst (\n\t// Anything is used in Diff and 
Assert when the argument being tested\n\t// shouldn't be taken into consideration.\n\tAnything = \"mock.Anything\"\n)\n\n// AnythingOfTypeArgument is a string that contains the type of an argument\n// for use when type checking.  Used in Diff and Assert.\ntype AnythingOfTypeArgument string\n\n// AnythingOfType returns an AnythingOfTypeArgument object containing the\n// name of the type to check for.  Used in Diff and Assert.\n//\n// For example:\n//\n//\tAssert(t, AnythingOfType(\"string\"), AnythingOfType(\"int\"))\nfunc AnythingOfType(t string) AnythingOfTypeArgument {\n\treturn AnythingOfTypeArgument(t)\n}\n\n// IsTypeArgument is a struct that contains the type of an argument\n// for use when type checking.  This is an alternative to AnythingOfType.\n// Used in Diff and Assert.\ntype IsTypeArgument struct {\n\tt interface{}\n}\n\n// IsType returns an IsTypeArgument object containing the type to check for.\n// You can provide a zero-value of the type to check.  This is an\n// alternative to AnythingOfType.  
Used in Diff and Assert.\n//\n// For example:\n// Assert(t, IsType(\"\"), IsType(0))\nfunc IsType(t interface{}) *IsTypeArgument {\n\treturn &IsTypeArgument{t: t}\n}\n\n// FunctionalOptionsArgument is a struct that contains the type and value of a functional option argument\n// for use when type checking.\ntype FunctionalOptionsArgument struct {\n\tvalue interface{}\n}\n\n// String returns the string representation of FunctionalOptionsArgument\nfunc (f *FunctionalOptionsArgument) String() string {\n\tvar name string\n\ttValue := reflect.ValueOf(f.value)\n\tif tValue.Len() > 0 {\n\t\tname = \"[]\" + reflect.TypeOf(tValue.Index(0).Interface()).String()\n\t}\n\n\treturn strings.Replace(fmt.Sprintf(\"%#v\", f.value), \"[]interface {}\", name, 1)\n}\n\n// FunctionalOptions returns a FunctionalOptionsArgument object containing the functional option type\n// and the values to check\n//\n// For example:\n// Assert(t, FunctionalOptions(\"[]foo.FunctionalOption\", foo.Opt1(), foo.Opt2()))\nfunc FunctionalOptions(value ...interface{}) *FunctionalOptionsArgument {\n\treturn &FunctionalOptionsArgument{\n\t\tvalue: value,\n\t}\n}\n\n// argumentMatcher performs custom argument matching, returning whether or\n// not the argument is matched by the expectation fixture function.\ntype argumentMatcher struct {\n\t// fn is a function which accepts one argument, and returns a bool.\n\tfn reflect.Value\n}\n\nfunc (f argumentMatcher) Matches(argument interface{}) bool {\n\texpectType := f.fn.Type().In(0)\n\texpectTypeNilSupported := false\n\tswitch expectType.Kind() {\n\tcase reflect.Interface, reflect.Chan, reflect.Func, reflect.Map, reflect.Slice, reflect.Ptr:\n\t\texpectTypeNilSupported = true\n\t}\n\n\targType := reflect.TypeOf(argument)\n\tvar arg reflect.Value\n\tif argType == nil {\n\t\targ = reflect.New(expectType).Elem()\n\t} else {\n\t\targ = reflect.ValueOf(argument)\n\t}\n\n\tif argType == nil && !expectTypeNilSupported {\n\t\tpanic(errors.New(\"attempting to call matcher 
with nil for non-nil expected type\"))\n\t}\n\tif argType == nil || argType.AssignableTo(expectType) {\n\t\tresult := f.fn.Call([]reflect.Value{arg})\n\t\treturn result[0].Bool()\n\t}\n\treturn false\n}\n\nfunc (f argumentMatcher) String() string {\n\treturn fmt.Sprintf(\"func(%s) bool\", f.fn.Type().In(0).String())\n}\n\n// MatchedBy can be used to match a mock call based on only certain properties\n// from a complex struct or some calculation. It takes a function that will be\n// evaluated with the called argument and will return true when there's a match\n// and false otherwise.\n//\n// Example:\n// m.On(\"Do\", MatchedBy(func(req *http.Request) bool { return req.Host == \"example.com\" }))\n//\n// |fn|, must be a function accepting a single argument (of the expected type)\n// which returns a bool. If |fn| doesn't match the required signature,\n// MatchedBy() panics.\nfunc MatchedBy(fn interface{}) argumentMatcher {\n\tfnType := reflect.TypeOf(fn)\n\n\tif fnType.Kind() != reflect.Func {\n\t\tpanic(fmt.Sprintf(\"assert: arguments: %s is not a func\", fn))\n\t}\n\tif fnType.NumIn() != 1 {\n\t\tpanic(fmt.Sprintf(\"assert: arguments: %s does not take exactly one argument\", fn))\n\t}\n\tif fnType.NumOut() != 1 || fnType.Out(0).Kind() != reflect.Bool {\n\t\tpanic(fmt.Sprintf(\"assert: arguments: %s does not return a bool\", fn))\n\t}\n\n\treturn argumentMatcher{fn: reflect.ValueOf(fn)}\n}\n\n// Get Returns the argument at the specified index.\nfunc (args Arguments) Get(index int) interface{} {\n\tif index+1 > len(args) {\n\t\tpanic(fmt.Sprintf(\"assert: arguments: Cannot call Get(%d) because there are %d argument(s).\", index, len(args)))\n\t}\n\treturn args[index]\n}\n\n// Is gets whether the objects match the arguments specified.\nfunc (args Arguments) Is(objects ...interface{}) bool {\n\tfor i, obj := range args {\n\t\tif obj != objects[i] {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\n// Diff gets a string describing the differences between the 
arguments\n// and the specified objects.\n//\n// Returns the diff string and number of differences found.\nfunc (args Arguments) Diff(objects []interface{}) (string, int) {\n\t// TODO: could return string as error and nil for No difference\n\n\toutput := \"\\n\"\n\tvar differences int\n\n\tmaxArgCount := len(args)\n\tif len(objects) > maxArgCount {\n\t\tmaxArgCount = len(objects)\n\t}\n\n\tfor i := 0; i < maxArgCount; i++ {\n\t\tvar actual, expected interface{}\n\t\tvar actualFmt, expectedFmt string\n\n\t\tif len(objects) <= i {\n\t\t\tactual = \"(Missing)\"\n\t\t\tactualFmt = \"(Missing)\"\n\t\t} else {\n\t\t\tactual = objects[i]\n\t\t\tactualFmt = fmt.Sprintf(\"(%[1]T=%[1]v)\", actual)\n\t\t}\n\n\t\tif len(args) <= i {\n\t\t\texpected = \"(Missing)\"\n\t\t\texpectedFmt = \"(Missing)\"\n\t\t} else {\n\t\t\texpected = args[i]\n\t\t\texpectedFmt = fmt.Sprintf(\"(%[1]T=%[1]v)\", expected)\n\t\t}\n\n\t\tif matcher, ok := expected.(argumentMatcher); ok {\n\t\t\tvar matches bool\n\t\t\tfunc() {\n\t\t\t\tdefer func() {\n\t\t\t\t\tif r := recover(); r != nil {\n\t\t\t\t\t\tactualFmt = fmt.Sprintf(\"panic in argument matcher: %v\", r)\n\t\t\t\t\t}\n\t\t\t\t}()\n\t\t\t\tmatches = matcher.Matches(actual)\n\t\t\t}()\n\t\t\tif matches {\n\t\t\t\toutput = fmt.Sprintf(\"%s\\t%d: PASS:  %s matched by %s\\n\", output, i, actualFmt, matcher)\n\t\t\t} else {\n\t\t\t\tdifferences++\n\t\t\t\toutput = fmt.Sprintf(\"%s\\t%d: FAIL:  %s not matched by %s\\n\", output, i, actualFmt, matcher)\n\t\t\t}\n\t\t} else if reflect.TypeOf(expected) == reflect.TypeOf((*AnythingOfTypeArgument)(nil)).Elem() {\n\t\t\t// type checking\n\t\t\tif reflect.TypeOf(actual).Name() != string(expected.(AnythingOfTypeArgument)) && reflect.TypeOf(actual).String() != string(expected.(AnythingOfTypeArgument)) {\n\t\t\t\t// not match\n\t\t\t\tdifferences++\n\t\t\t\toutput = fmt.Sprintf(\"%s\\t%d: FAIL:  type %s != type %s - %s\\n\", output, i, expected, reflect.TypeOf(actual).Name(), actualFmt)\n\t\t\t}\n\t\t} else 
if reflect.TypeOf(expected) == reflect.TypeOf((*IsTypeArgument)(nil)) {\n\t\t\tt := expected.(*IsTypeArgument).t\n\t\t\tif reflect.TypeOf(t) != reflect.TypeOf(actual) {\n\t\t\t\tdifferences++\n\t\t\t\toutput = fmt.Sprintf(\"%s\\t%d: FAIL:  type %s != type %s - %s\\n\", output, i, reflect.TypeOf(t).Name(), reflect.TypeOf(actual).Name(), actualFmt)\n\t\t\t}\n\t\t} else if reflect.TypeOf(expected) == reflect.TypeOf((*FunctionalOptionsArgument)(nil)) {\n\t\t\tt := expected.(*FunctionalOptionsArgument).value\n\n\t\t\tvar name string\n\t\t\ttValue := reflect.ValueOf(t)\n\t\t\tif tValue.Len() > 0 {\n\t\t\t\tname = \"[]\" + reflect.TypeOf(tValue.Index(0).Interface()).String()\n\t\t\t}\n\n\t\t\ttName := reflect.TypeOf(t).Name()\n\t\t\tif name != reflect.TypeOf(actual).String() && tValue.Len() != 0 {\n\t\t\t\tdifferences++\n\t\t\t\toutput = fmt.Sprintf(\"%s\\t%d: FAIL:  type %s != type %s - %s\\n\", output, i, tName, reflect.TypeOf(actual).Name(), actualFmt)\n\t\t\t} else {\n\t\t\t\tif ef, af := assertOpts(t, actual); ef == \"\" && af == \"\" {\n\t\t\t\t\t// match\n\t\t\t\t\toutput = fmt.Sprintf(\"%s\\t%d: PASS:  %s == %s\\n\", output, i, tName, tName)\n\t\t\t\t} else {\n\t\t\t\t\t// not match\n\t\t\t\t\tdifferences++\n\t\t\t\t\toutput = fmt.Sprintf(\"%s\\t%d: FAIL:  %s != %s\\n\", output, i, af, ef)\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t// normal checking\n\n\t\t\tif assert.ObjectsAreEqual(expected, Anything) || assert.ObjectsAreEqual(actual, Anything) || assert.ObjectsAreEqual(actual, expected) {\n\t\t\t\t// match\n\t\t\t\toutput = fmt.Sprintf(\"%s\\t%d: PASS:  %s == %s\\n\", output, i, actualFmt, expectedFmt)\n\t\t\t} else {\n\t\t\t\t// not match\n\t\t\t\tdifferences++\n\t\t\t\toutput = fmt.Sprintf(\"%s\\t%d: FAIL:  %s != %s\\n\", output, i, actualFmt, expectedFmt)\n\t\t\t}\n\t\t}\n\n\t}\n\n\tif differences == 0 {\n\t\treturn \"No differences.\", differences\n\t}\n\n\treturn output, differences\n}\n\n// Assert compares the arguments with the specified objects and fails 
if\n// they do not exactly match.\nfunc (args Arguments) Assert(t TestingT, objects ...interface{}) bool {\n\tif h, ok := t.(tHelper); ok {\n\t\th.Helper()\n\t}\n\n\t// get the differences\n\tdiff, diffCount := args.Diff(objects)\n\n\tif diffCount == 0 {\n\t\treturn true\n\t}\n\n\t// there are differences... report them...\n\tt.Logf(diff)\n\tt.Errorf(\"%sArguments do not match.\", assert.CallerInfo())\n\n\treturn false\n}\n\n// String gets the argument at the specified index. Panics if there is no argument, or\n// if the argument is of the wrong type.\n//\n// If no index is provided, String() returns a complete string representation\n// of the arguments.\nfunc (args Arguments) String(indexOrNil ...int) string {\n\tif len(indexOrNil) == 0 {\n\t\t// normal String() method - return a string representation of the args\n\t\tvar argsStr []string\n\t\tfor _, arg := range args {\n\t\t\targsStr = append(argsStr, fmt.Sprintf(\"%T\", arg)) // handles nil nicely\n\t\t}\n\t\treturn strings.Join(argsStr, \",\")\n\t} else if len(indexOrNil) == 1 {\n\t\t// Index has been specified - get the argument at that index\n\t\tindex := indexOrNil[0]\n\t\tvar s string\n\t\tvar ok bool\n\t\tif s, ok = args.Get(index).(string); !ok {\n\t\t\tpanic(fmt.Sprintf(\"assert: arguments: String(%d) failed because object wasn't correct type: %s\", index, args.Get(index)))\n\t\t}\n\t\treturn s\n\t}\n\n\tpanic(fmt.Sprintf(\"assert: arguments: Wrong number of arguments passed to String.  Must be 0 or 1, not %d\", len(indexOrNil)))\n}\n\n// Int gets the argument at the specified index. Panics if there is no argument, or\n// if the argument is of the wrong type.\nfunc (args Arguments) Int(index int) int {\n\tvar s int\n\tvar ok bool\n\tif s, ok = args.Get(index).(int); !ok {\n\t\tpanic(fmt.Sprintf(\"assert: arguments: Int(%d) failed because object wasn't correct type: %v\", index, args.Get(index)))\n\t}\n\treturn s\n}\n\n// Error gets the argument at the specified index. 
Panics if there is no argument, or\n// if the argument is of the wrong type.\nfunc (args Arguments) Error(index int) error {\n\tobj := args.Get(index)\n\tvar s error\n\tvar ok bool\n\tif obj == nil {\n\t\treturn nil\n\t}\n\tif s, ok = obj.(error); !ok {\n\t\tpanic(fmt.Sprintf(\"assert: arguments: Error(%d) failed because object wasn't correct type: %v\", index, args.Get(index)))\n\t}\n\treturn s\n}\n\n// Bool gets the argument at the specified index. Panics if there is no argument, or\n// if the argument is of the wrong type.\nfunc (args Arguments) Bool(index int) bool {\n\tvar s bool\n\tvar ok bool\n\tif s, ok = args.Get(index).(bool); !ok {\n\t\tpanic(fmt.Sprintf(\"assert: arguments: Bool(%d) failed because object wasn't correct type: %v\", index, args.Get(index)))\n\t}\n\treturn s\n}\n\nfunc typeAndKind(v interface{}) (reflect.Type, reflect.Kind) {\n\tt := reflect.TypeOf(v)\n\tk := t.Kind()\n\n\tif k == reflect.Ptr {\n\t\tt = t.Elem()\n\t\tk = t.Kind()\n\t}\n\treturn t, k\n}\n\nfunc diffArguments(expected Arguments, actual Arguments) string {\n\tif len(expected) != len(actual) {\n\t\treturn fmt.Sprintf(\"Provided %v arguments, mocked for %v arguments\", len(expected), len(actual))\n\t}\n\n\tfor x := range expected {\n\t\tif diffString := diff(expected[x], actual[x]); diffString != \"\" {\n\t\t\treturn fmt.Sprintf(\"Difference found in argument %v:\\n\\n%s\", x, diffString)\n\t\t}\n\t}\n\n\treturn \"\"\n}\n\n// diff returns a diff of both values as long as both are of the same type and\n// are a struct, map, slice or array. 
Otherwise it returns an empty string.\nfunc diff(expected interface{}, actual interface{}) string {\n\tif expected == nil || actual == nil {\n\t\treturn \"\"\n\t}\n\n\tet, ek := typeAndKind(expected)\n\tat, _ := typeAndKind(actual)\n\n\tif et != at {\n\t\treturn \"\"\n\t}\n\n\tif ek != reflect.Struct && ek != reflect.Map && ek != reflect.Slice && ek != reflect.Array {\n\t\treturn \"\"\n\t}\n\n\te := spewConfig.Sdump(expected)\n\ta := spewConfig.Sdump(actual)\n\n\tdiff, _ := difflib.GetUnifiedDiffString(difflib.UnifiedDiff{\n\t\tA:        difflib.SplitLines(e),\n\t\tB:        difflib.SplitLines(a),\n\t\tFromFile: \"Expected\",\n\t\tFromDate: \"\",\n\t\tToFile:   \"Actual\",\n\t\tToDate:   \"\",\n\t\tContext:  1,\n\t})\n\n\treturn diff\n}\n\nvar spewConfig = spew.ConfigState{\n\tIndent:                  \" \",\n\tDisablePointerAddresses: true,\n\tDisableCapacities:       true,\n\tSortKeys:                true,\n}\n\ntype tHelper interface {\n\tHelper()\n}\n\nfunc assertOpts(expected, actual interface{}) (expectedFmt, actualFmt string) {\n\texpectedOpts := reflect.ValueOf(expected)\n\tactualOpts := reflect.ValueOf(actual)\n\tvar expectedNames []string\n\tfor i := 0; i < expectedOpts.Len(); i++ {\n\t\texpectedNames = append(expectedNames, funcName(expectedOpts.Index(i).Interface()))\n\t}\n\tvar actualNames []string\n\tfor i := 0; i < actualOpts.Len(); i++ {\n\t\tactualNames = append(actualNames, funcName(actualOpts.Index(i).Interface()))\n\t}\n\tif !assert.ObjectsAreEqual(expectedNames, actualNames) {\n\t\texpectedFmt = fmt.Sprintf(\"%v\", expectedNames)\n\t\tactualFmt = fmt.Sprintf(\"%v\", actualNames)\n\t\treturn\n\t}\n\n\tfor i := 0; i < expectedOpts.Len(); i++ {\n\t\texpectedOpt := expectedOpts.Index(i).Interface()\n\t\tactualOpt := actualOpts.Index(i).Interface()\n\n\t\texpectedFunc := expectedNames[i]\n\t\tactualFunc := actualNames[i]\n\t\tif expectedFunc != actualFunc {\n\t\t\texpectedFmt = expectedFunc\n\t\t\tactualFmt = 
actualFunc\n\t\t\treturn\n\t\t}\n\n\t\tot := reflect.TypeOf(expectedOpt)\n\t\tvar expectedValues []reflect.Value\n\t\tvar actualValues []reflect.Value\n\t\tif ot.NumIn() == 0 {\n\t\t\treturn\n\t\t}\n\n\t\tfor i := 0; i < ot.NumIn(); i++ {\n\t\t\tvt := ot.In(i).Elem()\n\t\t\texpectedValues = append(expectedValues, reflect.New(vt))\n\t\t\tactualValues = append(actualValues, reflect.New(vt))\n\t\t}\n\n\t\treflect.ValueOf(expectedOpt).Call(expectedValues)\n\t\treflect.ValueOf(actualOpt).Call(actualValues)\n\n\t\tfor i := 0; i < ot.NumIn(); i++ {\n\t\t\tif !assert.ObjectsAreEqual(expectedValues[i].Interface(), actualValues[i].Interface()) {\n\t\t\t\texpectedFmt = fmt.Sprintf(\"%s %+v\", expectedNames[i], expectedValues[i].Interface())\n\t\t\t\tactualFmt = fmt.Sprintf(\"%s %+v\", expectedNames[i], actualValues[i].Interface())\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}\n\n\treturn \"\", \"\"\n}\n\nfunc funcName(opt interface{}) string {\n\tn := runtime.FuncForPC(reflect.ValueOf(opt).Pointer()).Name()\n\treturn strings.TrimSuffix(path.Base(n), path.Ext(n))\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/LICENSE",
    "content": "\nThis project is covered by two different licenses: MIT and Apache.\n\n#### MIT License ####\n\nThe following files were ported to Go from C files of libyaml, and thus\nare still covered by their original MIT license, with the additional\ncopyright staring in 2011 when the project was ported over:\n\n    apic.go emitterc.go parserc.go readerc.go scannerc.go\n    writerc.go yamlh.go yamlprivateh.go\n\nCopyright (c) 2006-2010 Kirill Simonov\nCopyright (c) 2006-2011 Kirill Simonov\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\nof the Software, and to permit persons to whom the Software is furnished to do\nso, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n### Apache License ###\n\nAll the remaining project files are covered by the Apache license:\n\nCopyright (c) 2011-2019 Canonical Ltd\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/NOTICE",
    "content": "Copyright 2011-2016 Canonical Ltd.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/README.md",
    "content": "# YAML support for the Go language\n\nIntroduction\n------------\n\nThe yaml package enables Go programs to comfortably encode and decode YAML\nvalues. It was developed within [Canonical](https://www.canonical.com) as\npart of the [juju](https://juju.ubuntu.com) project, and is based on a\npure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML)\nC library to parse and generate YAML data quickly and reliably.\n\nCompatibility\n-------------\n\nThe yaml package supports most of YAML 1.2, but preserves some behavior\nfrom 1.1 for backwards compatibility.\n\nSpecifically, as of v3 of the yaml package:\n\n - YAML 1.1 bools (_yes/no, on/off_) are supported as long as they are being\n   decoded into a typed bool value. Otherwise they behave as a string. Booleans\n   in YAML 1.2 are _true/false_ only.\n - Octals encode and decode as _0777_ per YAML 1.1, rather than _0o777_\n   as specified in YAML 1.2, because most parsers still use the old format.\n   Octals in the  _0o777_ format are supported though, so new files work.\n - Does not support base-60 floats. These are gone from YAML 1.2, and were\n   actually never supported by this package as it's clearly a poor choice.\n\nand offers backwards\ncompatibility with YAML 1.1 in some cases.\n1.2, including support for\nanchors, tags, map merging, etc. 
Multi-document unmarshalling is not yet\nimplemented, and base-60 floats from YAML 1.1 are purposefully not\nsupported since they're a poor design and are gone in YAML 1.2.\n\nInstallation and usage\n----------------------\n\nThe import path for the package is *gopkg.in/yaml.v3*.\n\nTo install it, run:\n\n    go get gopkg.in/yaml.v3\n\nAPI documentation\n-----------------\n\nIf opened in a browser, the import path itself leads to the API documentation:\n\n  - [https://gopkg.in/yaml.v3](https://gopkg.in/yaml.v3)\n\nAPI stability\n-------------\n\nThe package API for yaml v3 will remain stable as described in [gopkg.in](https://gopkg.in).\n\n\nLicense\n-------\n\nThe yaml package is licensed under the MIT and Apache License 2.0 licenses.\nPlease see the LICENSE file for details.\n\n\nExample\n-------\n\n```Go\npackage main\n\nimport (\n        \"fmt\"\n        \"log\"\n\n        \"gopkg.in/yaml.v3\"\n)\n\nvar data = `\na: Easy!\nb:\n  c: 2\n  d: [3, 4]\n`\n\n// Note: struct fields must be public in order for unmarshal to\n// correctly populate the data.\ntype T struct {\n        A string\n        B struct {\n                RenamedC int   `yaml:\"c\"`\n                D        []int `yaml:\",flow\"`\n        }\n}\n\nfunc main() {\n        t := T{}\n    \n        err := yaml.Unmarshal([]byte(data), &t)\n        if err != nil {\n                log.Fatalf(\"error: %v\", err)\n        }\n        fmt.Printf(\"--- t:\\n%v\\n\\n\", t)\n    \n        d, err := yaml.Marshal(&t)\n        if err != nil {\n                log.Fatalf(\"error: %v\", err)\n        }\n        fmt.Printf(\"--- t dump:\\n%s\\n\\n\", string(d))\n    \n        m := make(map[interface{}]interface{})\n    \n        err = yaml.Unmarshal([]byte(data), &m)\n        if err != nil {\n                log.Fatalf(\"error: %v\", err)\n        }\n        fmt.Printf(\"--- m:\\n%v\\n\\n\", m)\n    \n        d, err = yaml.Marshal(&m)\n        if err != nil {\n                log.Fatalf(\"error: %v\", err)\n        
}\n        fmt.Printf(\"--- m dump:\\n%s\\n\\n\", string(d))\n}\n```\n\nThis example will generate the following output:\n\n```\n--- t:\n{Easy! {2 [3 4]}}\n\n--- t dump:\na: Easy!\nb:\n  c: 2\n  d: [3, 4]\n\n\n--- m:\nmap[a:Easy! b:map[c:2 d:[3 4]]]\n\n--- m dump:\na: Easy!\nb:\n  c: 2\n  d:\n  - 3\n  - 4\n```\n\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/apic.go",
    "content": "// \n// Copyright (c) 2011-2019 Canonical Ltd\n// Copyright (c) 2006-2010 Kirill Simonov\n// \n// Permission is hereby granted, free of charge, to any person obtaining a copy of\n// this software and associated documentation files (the \"Software\"), to deal in\n// the Software without restriction, including without limitation the rights to\n// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\n// of the Software, and to permit persons to whom the Software is furnished to do\n// so, subject to the following conditions:\n// \n// The above copyright notice and this permission notice shall be included in all\n// copies or substantial portions of the Software.\n// \n// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n// SOFTWARE.\n\npackage yaml\n\nimport (\n\t\"io\"\n)\n\nfunc yaml_insert_token(parser *yaml_parser_t, pos int, token *yaml_token_t) {\n\t//fmt.Println(\"yaml_insert_token\", \"pos:\", pos, \"typ:\", token.typ, \"head:\", parser.tokens_head, \"len:\", len(parser.tokens))\n\n\t// Check if we can move the queue at the beginning of the buffer.\n\tif parser.tokens_head > 0 && len(parser.tokens) == cap(parser.tokens) {\n\t\tif parser.tokens_head != len(parser.tokens) {\n\t\t\tcopy(parser.tokens, parser.tokens[parser.tokens_head:])\n\t\t}\n\t\tparser.tokens = parser.tokens[:len(parser.tokens)-parser.tokens_head]\n\t\tparser.tokens_head = 0\n\t}\n\tparser.tokens = append(parser.tokens, *token)\n\tif pos < 0 {\n\t\treturn\n\t}\n\tcopy(parser.tokens[parser.tokens_head+pos+1:], 
parser.tokens[parser.tokens_head+pos:])\n\tparser.tokens[parser.tokens_head+pos] = *token\n}\n\n// Create a new parser object.\nfunc yaml_parser_initialize(parser *yaml_parser_t) bool {\n\t*parser = yaml_parser_t{\n\t\traw_buffer: make([]byte, 0, input_raw_buffer_size),\n\t\tbuffer:     make([]byte, 0, input_buffer_size),\n\t}\n\treturn true\n}\n\n// Destroy a parser object.\nfunc yaml_parser_delete(parser *yaml_parser_t) {\n\t*parser = yaml_parser_t{}\n}\n\n// String read handler.\nfunc yaml_string_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {\n\tif parser.input_pos == len(parser.input) {\n\t\treturn 0, io.EOF\n\t}\n\tn = copy(buffer, parser.input[parser.input_pos:])\n\tparser.input_pos += n\n\treturn n, nil\n}\n\n// Reader read handler.\nfunc yaml_reader_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {\n\treturn parser.input_reader.Read(buffer)\n}\n\n// Set a string input.\nfunc yaml_parser_set_input_string(parser *yaml_parser_t, input []byte) {\n\tif parser.read_handler != nil {\n\t\tpanic(\"must set the input source only once\")\n\t}\n\tparser.read_handler = yaml_string_read_handler\n\tparser.input = input\n\tparser.input_pos = 0\n}\n\n// Set a file input.\nfunc yaml_parser_set_input_reader(parser *yaml_parser_t, r io.Reader) {\n\tif parser.read_handler != nil {\n\t\tpanic(\"must set the input source only once\")\n\t}\n\tparser.read_handler = yaml_reader_read_handler\n\tparser.input_reader = r\n}\n\n// Set the source encoding.\nfunc yaml_parser_set_encoding(parser *yaml_parser_t, encoding yaml_encoding_t) {\n\tif parser.encoding != yaml_ANY_ENCODING {\n\t\tpanic(\"must set the encoding only once\")\n\t}\n\tparser.encoding = encoding\n}\n\n// Create a new emitter object.\nfunc yaml_emitter_initialize(emitter *yaml_emitter_t) {\n\t*emitter = yaml_emitter_t{\n\t\tbuffer:     make([]byte, output_buffer_size),\n\t\traw_buffer: make([]byte, 0, output_raw_buffer_size),\n\t\tstates:     make([]yaml_emitter_state_t, 0, 
initial_stack_size),\n\t\tevents:     make([]yaml_event_t, 0, initial_queue_size),\n\t\tbest_width: -1,\n\t}\n}\n\n// Destroy an emitter object.\nfunc yaml_emitter_delete(emitter *yaml_emitter_t) {\n\t*emitter = yaml_emitter_t{}\n}\n\n// String write handler.\nfunc yaml_string_write_handler(emitter *yaml_emitter_t, buffer []byte) error {\n\t*emitter.output_buffer = append(*emitter.output_buffer, buffer...)\n\treturn nil\n}\n\n// yaml_writer_write_handler uses emitter.output_writer to write the\n// emitted text.\nfunc yaml_writer_write_handler(emitter *yaml_emitter_t, buffer []byte) error {\n\t_, err := emitter.output_writer.Write(buffer)\n\treturn err\n}\n\n// Set a string output.\nfunc yaml_emitter_set_output_string(emitter *yaml_emitter_t, output_buffer *[]byte) {\n\tif emitter.write_handler != nil {\n\t\tpanic(\"must set the output target only once\")\n\t}\n\temitter.write_handler = yaml_string_write_handler\n\temitter.output_buffer = output_buffer\n}\n\n// Set a file output.\nfunc yaml_emitter_set_output_writer(emitter *yaml_emitter_t, w io.Writer) {\n\tif emitter.write_handler != nil {\n\t\tpanic(\"must set the output target only once\")\n\t}\n\temitter.write_handler = yaml_writer_write_handler\n\temitter.output_writer = w\n}\n\n// Set the output encoding.\nfunc yaml_emitter_set_encoding(emitter *yaml_emitter_t, encoding yaml_encoding_t) {\n\tif emitter.encoding != yaml_ANY_ENCODING {\n\t\tpanic(\"must set the output encoding only once\")\n\t}\n\temitter.encoding = encoding\n}\n\n// Set the canonical output style.\nfunc yaml_emitter_set_canonical(emitter *yaml_emitter_t, canonical bool) {\n\temitter.canonical = canonical\n}\n\n// Set the indentation increment.\nfunc yaml_emitter_set_indent(emitter *yaml_emitter_t, indent int) {\n\tif indent < 2 || indent > 9 {\n\t\tindent = 2\n\t}\n\temitter.best_indent = indent\n}\n\n// Set the preferred line width.\nfunc yaml_emitter_set_width(emitter *yaml_emitter_t, width int) {\n\tif width < 0 {\n\t\twidth = 
-1\n\t}\n\temitter.best_width = width\n}\n\n// Set if unescaped non-ASCII characters are allowed.\nfunc yaml_emitter_set_unicode(emitter *yaml_emitter_t, unicode bool) {\n\temitter.unicode = unicode\n}\n\n// Set the preferred line break character.\nfunc yaml_emitter_set_break(emitter *yaml_emitter_t, line_break yaml_break_t) {\n\temitter.line_break = line_break\n}\n\n///*\n// * Destroy a token object.\n// */\n//\n//YAML_DECLARE(void)\n//yaml_token_delete(yaml_token_t *token)\n//{\n//    assert(token);  // Non-NULL token object expected.\n//\n//    switch (token.type)\n//    {\n//        case YAML_TAG_DIRECTIVE_TOKEN:\n//            yaml_free(token.data.tag_directive.handle);\n//            yaml_free(token.data.tag_directive.prefix);\n//            break;\n//\n//        case YAML_ALIAS_TOKEN:\n//            yaml_free(token.data.alias.value);\n//            break;\n//\n//        case YAML_ANCHOR_TOKEN:\n//            yaml_free(token.data.anchor.value);\n//            break;\n//\n//        case YAML_TAG_TOKEN:\n//            yaml_free(token.data.tag.handle);\n//            yaml_free(token.data.tag.suffix);\n//            break;\n//\n//        case YAML_SCALAR_TOKEN:\n//            yaml_free(token.data.scalar.value);\n//            break;\n//\n//        default:\n//            break;\n//    }\n//\n//    memset(token, 0, sizeof(yaml_token_t));\n//}\n//\n///*\n// * Check if a string is a valid UTF-8 sequence.\n// *\n// * Check 'reader.c' for more details on UTF-8 encoding.\n// */\n//\n//static int\n//yaml_check_utf8(yaml_char_t *start, size_t length)\n//{\n//    yaml_char_t *end = start+length;\n//    yaml_char_t *pointer = start;\n//\n//    while (pointer < end) {\n//        unsigned char octet;\n//        unsigned int width;\n//        unsigned int value;\n//        size_t k;\n//\n//        octet = pointer[0];\n//        width = (octet & 0x80) == 0x00 ? 1 :\n//                (octet & 0xE0) == 0xC0 ? 2 :\n//                (octet & 0xF0) == 0xE0 ? 
3 :\n//                (octet & 0xF8) == 0xF0 ? 4 : 0;\n//        value = (octet & 0x80) == 0x00 ? octet & 0x7F :\n//                (octet & 0xE0) == 0xC0 ? octet & 0x1F :\n//                (octet & 0xF0) == 0xE0 ? octet & 0x0F :\n//                (octet & 0xF8) == 0xF0 ? octet & 0x07 : 0;\n//        if (!width) return 0;\n//        if (pointer+width > end) return 0;\n//        for (k = 1; k < width; k ++) {\n//            octet = pointer[k];\n//            if ((octet & 0xC0) != 0x80) return 0;\n//            value = (value << 6) + (octet & 0x3F);\n//        }\n//        if (!((width == 1) ||\n//            (width == 2 && value >= 0x80) ||\n//            (width == 3 && value >= 0x800) ||\n//            (width == 4 && value >= 0x10000))) return 0;\n//\n//        pointer += width;\n//    }\n//\n//    return 1;\n//}\n//\n\n// Create STREAM-START.\nfunc yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) {\n\t*event = yaml_event_t{\n\t\ttyp:      yaml_STREAM_START_EVENT,\n\t\tencoding: encoding,\n\t}\n}\n\n// Create STREAM-END.\nfunc yaml_stream_end_event_initialize(event *yaml_event_t) {\n\t*event = yaml_event_t{\n\t\ttyp: yaml_STREAM_END_EVENT,\n\t}\n}\n\n// Create DOCUMENT-START.\nfunc yaml_document_start_event_initialize(\n\tevent *yaml_event_t,\n\tversion_directive *yaml_version_directive_t,\n\ttag_directives []yaml_tag_directive_t,\n\timplicit bool,\n) {\n\t*event = yaml_event_t{\n\t\ttyp:               yaml_DOCUMENT_START_EVENT,\n\t\tversion_directive: version_directive,\n\t\ttag_directives:    tag_directives,\n\t\timplicit:          implicit,\n\t}\n}\n\n// Create DOCUMENT-END.\nfunc yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) {\n\t*event = yaml_event_t{\n\t\ttyp:      yaml_DOCUMENT_END_EVENT,\n\t\timplicit: implicit,\n\t}\n}\n\n// Create ALIAS.\nfunc yaml_alias_event_initialize(event *yaml_event_t, anchor []byte) bool {\n\t*event = yaml_event_t{\n\t\ttyp:    yaml_ALIAS_EVENT,\n\t\tanchor: 
anchor,\n\t}\n\treturn true\n}\n\n// Create SCALAR.\nfunc yaml_scalar_event_initialize(event *yaml_event_t, anchor, tag, value []byte, plain_implicit, quoted_implicit bool, style yaml_scalar_style_t) bool {\n\t*event = yaml_event_t{\n\t\ttyp:             yaml_SCALAR_EVENT,\n\t\tanchor:          anchor,\n\t\ttag:             tag,\n\t\tvalue:           value,\n\t\timplicit:        plain_implicit,\n\t\tquoted_implicit: quoted_implicit,\n\t\tstyle:           yaml_style_t(style),\n\t}\n\treturn true\n}\n\n// Create SEQUENCE-START.\nfunc yaml_sequence_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_sequence_style_t) bool {\n\t*event = yaml_event_t{\n\t\ttyp:      yaml_SEQUENCE_START_EVENT,\n\t\tanchor:   anchor,\n\t\ttag:      tag,\n\t\timplicit: implicit,\n\t\tstyle:    yaml_style_t(style),\n\t}\n\treturn true\n}\n\n// Create SEQUENCE-END.\nfunc yaml_sequence_end_event_initialize(event *yaml_event_t) bool {\n\t*event = yaml_event_t{\n\t\ttyp: yaml_SEQUENCE_END_EVENT,\n\t}\n\treturn true\n}\n\n// Create MAPPING-START.\nfunc yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) {\n\t*event = yaml_event_t{\n\t\ttyp:      yaml_MAPPING_START_EVENT,\n\t\tanchor:   anchor,\n\t\ttag:      tag,\n\t\timplicit: implicit,\n\t\tstyle:    yaml_style_t(style),\n\t}\n}\n\n// Create MAPPING-END.\nfunc yaml_mapping_end_event_initialize(event *yaml_event_t) {\n\t*event = yaml_event_t{\n\t\ttyp: yaml_MAPPING_END_EVENT,\n\t}\n}\n\n// Destroy an event object.\nfunc yaml_event_delete(event *yaml_event_t) {\n\t*event = yaml_event_t{}\n}\n\n///*\n// * Create a document object.\n// */\n//\n//YAML_DECLARE(int)\n//yaml_document_initialize(document *yaml_document_t,\n//        version_directive *yaml_version_directive_t,\n//        tag_directives_start *yaml_tag_directive_t,\n//        tag_directives_end *yaml_tag_directive_t,\n//        start_implicit int, end_implicit int)\n//{\n//    
struct {\n//        error yaml_error_type_t\n//    } context\n//    struct {\n//        start *yaml_node_t\n//        end *yaml_node_t\n//        top *yaml_node_t\n//    } nodes = { NULL, NULL, NULL }\n//    version_directive_copy *yaml_version_directive_t = NULL\n//    struct {\n//        start *yaml_tag_directive_t\n//        end *yaml_tag_directive_t\n//        top *yaml_tag_directive_t\n//    } tag_directives_copy = { NULL, NULL, NULL }\n//    value yaml_tag_directive_t = { NULL, NULL }\n//    mark yaml_mark_t = { 0, 0, 0 }\n//\n//    assert(document) // Non-NULL document object is expected.\n//    assert((tag_directives_start && tag_directives_end) ||\n//            (tag_directives_start == tag_directives_end))\n//                            // Valid tag directives are expected.\n//\n//    if (!STACK_INIT(&context, nodes, INITIAL_STACK_SIZE)) goto error\n//\n//    if (version_directive) {\n//        version_directive_copy = yaml_malloc(sizeof(yaml_version_directive_t))\n//        if (!version_directive_copy) goto error\n//        version_directive_copy.major = version_directive.major\n//        version_directive_copy.minor = version_directive.minor\n//    }\n//\n//    if (tag_directives_start != tag_directives_end) {\n//        tag_directive *yaml_tag_directive_t\n//        if (!STACK_INIT(&context, tag_directives_copy, INITIAL_STACK_SIZE))\n//            goto error\n//        for (tag_directive = tag_directives_start\n//                tag_directive != tag_directives_end; tag_directive ++) {\n//            assert(tag_directive.handle)\n//            assert(tag_directive.prefix)\n//            if (!yaml_check_utf8(tag_directive.handle,\n//                        strlen((char *)tag_directive.handle)))\n//                goto error\n//            if (!yaml_check_utf8(tag_directive.prefix,\n//                        strlen((char *)tag_directive.prefix)))\n//                goto error\n//            value.handle = yaml_strdup(tag_directive.handle)\n//            
value.prefix = yaml_strdup(tag_directive.prefix)\n//            if (!value.handle || !value.prefix) goto error\n//            if (!PUSH(&context, tag_directives_copy, value))\n//                goto error\n//            value.handle = NULL\n//            value.prefix = NULL\n//        }\n//    }\n//\n//    DOCUMENT_INIT(*document, nodes.start, nodes.end, version_directive_copy,\n//            tag_directives_copy.start, tag_directives_copy.top,\n//            start_implicit, end_implicit, mark, mark)\n//\n//    return 1\n//\n//error:\n//    STACK_DEL(&context, nodes)\n//    yaml_free(version_directive_copy)\n//    while (!STACK_EMPTY(&context, tag_directives_copy)) {\n//        value yaml_tag_directive_t = POP(&context, tag_directives_copy)\n//        yaml_free(value.handle)\n//        yaml_free(value.prefix)\n//    }\n//    STACK_DEL(&context, tag_directives_copy)\n//    yaml_free(value.handle)\n//    yaml_free(value.prefix)\n//\n//    return 0\n//}\n//\n///*\n// * Destroy a document object.\n// */\n//\n//YAML_DECLARE(void)\n//yaml_document_delete(document *yaml_document_t)\n//{\n//    struct {\n//        error yaml_error_type_t\n//    } context\n//    tag_directive *yaml_tag_directive_t\n//\n//    context.error = YAML_NO_ERROR // Eliminate a compiler warning.\n//\n//    assert(document) // Non-NULL document object is expected.\n//\n//    while (!STACK_EMPTY(&context, document.nodes)) {\n//        node yaml_node_t = POP(&context, document.nodes)\n//        yaml_free(node.tag)\n//        switch (node.type) {\n//            case YAML_SCALAR_NODE:\n//                yaml_free(node.data.scalar.value)\n//                break\n//            case YAML_SEQUENCE_NODE:\n//                STACK_DEL(&context, node.data.sequence.items)\n//                break\n//            case YAML_MAPPING_NODE:\n//                STACK_DEL(&context, node.data.mapping.pairs)\n//                break\n//            default:\n//                assert(0) // Should not happen.\n//        }\n//  
  }\n//    STACK_DEL(&context, document.nodes)\n//\n//    yaml_free(document.version_directive)\n//    for (tag_directive = document.tag_directives.start\n//            tag_directive != document.tag_directives.end\n//            tag_directive++) {\n//        yaml_free(tag_directive.handle)\n//        yaml_free(tag_directive.prefix)\n//    }\n//    yaml_free(document.tag_directives.start)\n//\n//    memset(document, 0, sizeof(yaml_document_t))\n//}\n//\n///**\n// * Get a document node.\n// */\n//\n//YAML_DECLARE(yaml_node_t *)\n//yaml_document_get_node(document *yaml_document_t, index int)\n//{\n//    assert(document) // Non-NULL document object is expected.\n//\n//    if (index > 0 && document.nodes.start + index <= document.nodes.top) {\n//        return document.nodes.start + index - 1\n//    }\n//    return NULL\n//}\n//\n///**\n// * Get the root object.\n// */\n//\n//YAML_DECLARE(yaml_node_t *)\n//yaml_document_get_root_node(document *yaml_document_t)\n//{\n//    assert(document) // Non-NULL document object is expected.\n//\n//    if (document.nodes.top != document.nodes.start) {\n//        return document.nodes.start\n//    }\n//    return NULL\n//}\n//\n///*\n// * Add a scalar node to a document.\n// */\n//\n//YAML_DECLARE(int)\n//yaml_document_add_scalar(document *yaml_document_t,\n//        tag *yaml_char_t, value *yaml_char_t, length int,\n//        style yaml_scalar_style_t)\n//{\n//    struct {\n//        error yaml_error_type_t\n//    } context\n//    mark yaml_mark_t = { 0, 0, 0 }\n//    tag_copy *yaml_char_t = NULL\n//    value_copy *yaml_char_t = NULL\n//    node yaml_node_t\n//\n//    assert(document) // Non-NULL document object is expected.\n//    assert(value) // Non-NULL value is expected.\n//\n//    if (!tag) {\n//        tag = (yaml_char_t *)YAML_DEFAULT_SCALAR_TAG\n//    }\n//\n//    if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error\n//    tag_copy = yaml_strdup(tag)\n//    if (!tag_copy) goto error\n//\n//    if (length < 0) {\n//    
    length = strlen((char *)value)\n//    }\n//\n//    if (!yaml_check_utf8(value, length)) goto error\n//    value_copy = yaml_malloc(length+1)\n//    if (!value_copy) goto error\n//    memcpy(value_copy, value, length)\n//    value_copy[length] = '\\0'\n//\n//    SCALAR_NODE_INIT(node, tag_copy, value_copy, length, style, mark, mark)\n//    if (!PUSH(&context, document.nodes, node)) goto error\n//\n//    return document.nodes.top - document.nodes.start\n//\n//error:\n//    yaml_free(tag_copy)\n//    yaml_free(value_copy)\n//\n//    return 0\n//}\n//\n///*\n// * Add a sequence node to a document.\n// */\n//\n//YAML_DECLARE(int)\n//yaml_document_add_sequence(document *yaml_document_t,\n//        tag *yaml_char_t, style yaml_sequence_style_t)\n//{\n//    struct {\n//        error yaml_error_type_t\n//    } context\n//    mark yaml_mark_t = { 0, 0, 0 }\n//    tag_copy *yaml_char_t = NULL\n//    struct {\n//        start *yaml_node_item_t\n//        end *yaml_node_item_t\n//        top *yaml_node_item_t\n//    } items = { NULL, NULL, NULL }\n//    node yaml_node_t\n//\n//    assert(document) // Non-NULL document object is expected.\n//\n//    if (!tag) {\n//        tag = (yaml_char_t *)YAML_DEFAULT_SEQUENCE_TAG\n//    }\n//\n//    if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error\n//    tag_copy = yaml_strdup(tag)\n//    if (!tag_copy) goto error\n//\n//    if (!STACK_INIT(&context, items, INITIAL_STACK_SIZE)) goto error\n//\n//    SEQUENCE_NODE_INIT(node, tag_copy, items.start, items.end,\n//            style, mark, mark)\n//    if (!PUSH(&context, document.nodes, node)) goto error\n//\n//    return document.nodes.top - document.nodes.start\n//\n//error:\n//    STACK_DEL(&context, items)\n//    yaml_free(tag_copy)\n//\n//    return 0\n//}\n//\n///*\n// * Add a mapping node to a document.\n// */\n//\n//YAML_DECLARE(int)\n//yaml_document_add_mapping(document *yaml_document_t,\n//        tag *yaml_char_t, style yaml_mapping_style_t)\n//{\n//    struct {\n//     
   error yaml_error_type_t\n//    } context\n//    mark yaml_mark_t = { 0, 0, 0 }\n//    tag_copy *yaml_char_t = NULL\n//    struct {\n//        start *yaml_node_pair_t\n//        end *yaml_node_pair_t\n//        top *yaml_node_pair_t\n//    } pairs = { NULL, NULL, NULL }\n//    node yaml_node_t\n//\n//    assert(document) // Non-NULL document object is expected.\n//\n//    if (!tag) {\n//        tag = (yaml_char_t *)YAML_DEFAULT_MAPPING_TAG\n//    }\n//\n//    if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error\n//    tag_copy = yaml_strdup(tag)\n//    if (!tag_copy) goto error\n//\n//    if (!STACK_INIT(&context, pairs, INITIAL_STACK_SIZE)) goto error\n//\n//    MAPPING_NODE_INIT(node, tag_copy, pairs.start, pairs.end,\n//            style, mark, mark)\n//    if (!PUSH(&context, document.nodes, node)) goto error\n//\n//    return document.nodes.top - document.nodes.start\n//\n//error:\n//    STACK_DEL(&context, pairs)\n//    yaml_free(tag_copy)\n//\n//    return 0\n//}\n//\n///*\n// * Append an item to a sequence node.\n// */\n//\n//YAML_DECLARE(int)\n//yaml_document_append_sequence_item(document *yaml_document_t,\n//        sequence int, item int)\n//{\n//    struct {\n//        error yaml_error_type_t\n//    } context\n//\n//    assert(document) // Non-NULL document is required.\n//    assert(sequence > 0\n//            && document.nodes.start + sequence <= document.nodes.top)\n//                            // Valid sequence id is required.\n//    assert(document.nodes.start[sequence-1].type == YAML_SEQUENCE_NODE)\n//                            // A sequence node is required.\n//    assert(item > 0 && document.nodes.start + item <= document.nodes.top)\n//                            // Valid item id is required.\n//\n//    if (!PUSH(&context,\n//                document.nodes.start[sequence-1].data.sequence.items, item))\n//        return 0\n//\n//    return 1\n//}\n//\n///*\n// * Append a pair of a key and a value to a mapping node.\n// 
*/\n//\n//YAML_DECLARE(int)\n//yaml_document_append_mapping_pair(document *yaml_document_t,\n//        mapping int, key int, value int)\n//{\n//    struct {\n//        error yaml_error_type_t\n//    } context\n//\n//    pair yaml_node_pair_t\n//\n//    assert(document) // Non-NULL document is required.\n//    assert(mapping > 0\n//            && document.nodes.start + mapping <= document.nodes.top)\n//                            // Valid mapping id is required.\n//    assert(document.nodes.start[mapping-1].type == YAML_MAPPING_NODE)\n//                            // A mapping node is required.\n//    assert(key > 0 && document.nodes.start + key <= document.nodes.top)\n//                            // Valid key id is required.\n//    assert(value > 0 && document.nodes.start + value <= document.nodes.top)\n//                            // Valid value id is required.\n//\n//    pair.key = key\n//    pair.value = value\n//\n//    if (!PUSH(&context,\n//                document.nodes.start[mapping-1].data.mapping.pairs, pair))\n//        return 0\n//\n//    return 1\n//}\n//\n//\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/decode.go",
    "content": "//\n// Copyright (c) 2011-2019 Canonical Ltd\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage yaml\n\nimport (\n\t\"encoding\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"io\"\n\t\"math\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"time\"\n)\n\n// ----------------------------------------------------------------------------\n// Parser, produces a node tree out of a libyaml event stream.\n\ntype parser struct {\n\tparser   yaml_parser_t\n\tevent    yaml_event_t\n\tdoc      *Node\n\tanchors  map[string]*Node\n\tdoneInit bool\n\ttextless bool\n}\n\nfunc newParser(b []byte) *parser {\n\tp := parser{}\n\tif !yaml_parser_initialize(&p.parser) {\n\t\tpanic(\"failed to initialize YAML emitter\")\n\t}\n\tif len(b) == 0 {\n\t\tb = []byte{'\\n'}\n\t}\n\tyaml_parser_set_input_string(&p.parser, b)\n\treturn &p\n}\n\nfunc newParserFromReader(r io.Reader) *parser {\n\tp := parser{}\n\tif !yaml_parser_initialize(&p.parser) {\n\t\tpanic(\"failed to initialize YAML emitter\")\n\t}\n\tyaml_parser_set_input_reader(&p.parser, r)\n\treturn &p\n}\n\nfunc (p *parser) init() {\n\tif p.doneInit {\n\t\treturn\n\t}\n\tp.anchors = make(map[string]*Node)\n\tp.expect(yaml_STREAM_START_EVENT)\n\tp.doneInit = true\n}\n\nfunc (p *parser) destroy() {\n\tif p.event.typ != yaml_NO_EVENT {\n\t\tyaml_event_delete(&p.event)\n\t}\n\tyaml_parser_delete(&p.parser)\n}\n\n// expect consumes an event from the event stream and\n// checks that it's of the expected type.\nfunc (p 
*parser) expect(e yaml_event_type_t) {\n\tif p.event.typ == yaml_NO_EVENT {\n\t\tif !yaml_parser_parse(&p.parser, &p.event) {\n\t\t\tp.fail()\n\t\t}\n\t}\n\tif p.event.typ == yaml_STREAM_END_EVENT {\n\t\tfailf(\"attempted to go past the end of stream; corrupted value?\")\n\t}\n\tif p.event.typ != e {\n\t\tp.parser.problem = fmt.Sprintf(\"expected %s event but got %s\", e, p.event.typ)\n\t\tp.fail()\n\t}\n\tyaml_event_delete(&p.event)\n\tp.event.typ = yaml_NO_EVENT\n}\n\n// peek peeks at the next event in the event stream,\n// puts the results into p.event and returns the event type.\nfunc (p *parser) peek() yaml_event_type_t {\n\tif p.event.typ != yaml_NO_EVENT {\n\t\treturn p.event.typ\n\t}\n\t// It's a curious choice for the underlying API to generally return a\n\t// positive result on success, but in this case to return true in an error\n\t// scenario. This was the source of bugs in the past (issue #666).\n\tif !yaml_parser_parse(&p.parser, &p.event) || p.parser.error != yaml_NO_ERROR {\n\t\tp.fail()\n\t}\n\treturn p.event.typ\n}\n\nfunc (p *parser) fail() {\n\tvar where string\n\tvar line int\n\tif p.parser.context_mark.line != 0 {\n\t\tline = p.parser.context_mark.line\n\t\t// Scanner errors don't iterate line before returning error\n\t\tif p.parser.error == yaml_SCANNER_ERROR {\n\t\t\tline++\n\t\t}\n\t} else if p.parser.problem_mark.line != 0 {\n\t\tline = p.parser.problem_mark.line\n\t\t// Scanner errors don't iterate line before returning error\n\t\tif p.parser.error == yaml_SCANNER_ERROR {\n\t\t\tline++\n\t\t}\n\t}\n\tif line != 0 {\n\t\twhere = \"line \" + strconv.Itoa(line) + \": \"\n\t}\n\tvar msg string\n\tif len(p.parser.problem) > 0 {\n\t\tmsg = p.parser.problem\n\t} else {\n\t\tmsg = \"unknown problem parsing YAML content\"\n\t}\n\tfailf(\"%s%s\", where, msg)\n}\n\nfunc (p *parser) anchor(n *Node, anchor []byte) {\n\tif anchor != nil {\n\t\tn.Anchor = string(anchor)\n\t\tp.anchors[n.Anchor] = n\n\t}\n}\n\nfunc (p *parser) parse() *Node 
{\n\tp.init()\n\tswitch p.peek() {\n\tcase yaml_SCALAR_EVENT:\n\t\treturn p.scalar()\n\tcase yaml_ALIAS_EVENT:\n\t\treturn p.alias()\n\tcase yaml_MAPPING_START_EVENT:\n\t\treturn p.mapping()\n\tcase yaml_SEQUENCE_START_EVENT:\n\t\treturn p.sequence()\n\tcase yaml_DOCUMENT_START_EVENT:\n\t\treturn p.document()\n\tcase yaml_STREAM_END_EVENT:\n\t\t// Happens when attempting to decode an empty buffer.\n\t\treturn nil\n\tcase yaml_TAIL_COMMENT_EVENT:\n\t\tpanic(\"internal error: unexpected tail comment event (please report)\")\n\tdefault:\n\t\tpanic(\"internal error: attempted to parse unknown event (please report): \" + p.event.typ.String())\n\t}\n}\n\nfunc (p *parser) node(kind Kind, defaultTag, tag, value string) *Node {\n\tvar style Style\n\tif tag != \"\" && tag != \"!\" {\n\t\ttag = shortTag(tag)\n\t\tstyle = TaggedStyle\n\t} else if defaultTag != \"\" {\n\t\ttag = defaultTag\n\t} else if kind == ScalarNode {\n\t\ttag, _ = resolve(\"\", value)\n\t}\n\tn := &Node{\n\t\tKind:  kind,\n\t\tTag:   tag,\n\t\tValue: value,\n\t\tStyle: style,\n\t}\n\tif !p.textless {\n\t\tn.Line = p.event.start_mark.line + 1\n\t\tn.Column = p.event.start_mark.column + 1\n\t\tn.HeadComment = string(p.event.head_comment)\n\t\tn.LineComment = string(p.event.line_comment)\n\t\tn.FootComment = string(p.event.foot_comment)\n\t}\n\treturn n\n}\n\nfunc (p *parser) parseChild(parent *Node) *Node {\n\tchild := p.parse()\n\tparent.Content = append(parent.Content, child)\n\treturn child\n}\n\nfunc (p *parser) document() *Node {\n\tn := p.node(DocumentNode, \"\", \"\", \"\")\n\tp.doc = n\n\tp.expect(yaml_DOCUMENT_START_EVENT)\n\tp.parseChild(n)\n\tif p.peek() == yaml_DOCUMENT_END_EVENT {\n\t\tn.FootComment = string(p.event.foot_comment)\n\t}\n\tp.expect(yaml_DOCUMENT_END_EVENT)\n\treturn n\n}\n\nfunc (p *parser) alias() *Node {\n\tn := p.node(AliasNode, \"\", \"\", string(p.event.anchor))\n\tn.Alias = p.anchors[n.Value]\n\tif n.Alias == nil {\n\t\tfailf(\"unknown anchor '%s' referenced\", 
n.Value)\n\t}\n\tp.expect(yaml_ALIAS_EVENT)\n\treturn n\n}\n\nfunc (p *parser) scalar() *Node {\n\tvar parsedStyle = p.event.scalar_style()\n\tvar nodeStyle Style\n\tswitch {\n\tcase parsedStyle&yaml_DOUBLE_QUOTED_SCALAR_STYLE != 0:\n\t\tnodeStyle = DoubleQuotedStyle\n\tcase parsedStyle&yaml_SINGLE_QUOTED_SCALAR_STYLE != 0:\n\t\tnodeStyle = SingleQuotedStyle\n\tcase parsedStyle&yaml_LITERAL_SCALAR_STYLE != 0:\n\t\tnodeStyle = LiteralStyle\n\tcase parsedStyle&yaml_FOLDED_SCALAR_STYLE != 0:\n\t\tnodeStyle = FoldedStyle\n\t}\n\tvar nodeValue = string(p.event.value)\n\tvar nodeTag = string(p.event.tag)\n\tvar defaultTag string\n\tif nodeStyle == 0 {\n\t\tif nodeValue == \"<<\" {\n\t\t\tdefaultTag = mergeTag\n\t\t}\n\t} else {\n\t\tdefaultTag = strTag\n\t}\n\tn := p.node(ScalarNode, defaultTag, nodeTag, nodeValue)\n\tn.Style |= nodeStyle\n\tp.anchor(n, p.event.anchor)\n\tp.expect(yaml_SCALAR_EVENT)\n\treturn n\n}\n\nfunc (p *parser) sequence() *Node {\n\tn := p.node(SequenceNode, seqTag, string(p.event.tag), \"\")\n\tif p.event.sequence_style()&yaml_FLOW_SEQUENCE_STYLE != 0 {\n\t\tn.Style |= FlowStyle\n\t}\n\tp.anchor(n, p.event.anchor)\n\tp.expect(yaml_SEQUENCE_START_EVENT)\n\tfor p.peek() != yaml_SEQUENCE_END_EVENT {\n\t\tp.parseChild(n)\n\t}\n\tn.LineComment = string(p.event.line_comment)\n\tn.FootComment = string(p.event.foot_comment)\n\tp.expect(yaml_SEQUENCE_END_EVENT)\n\treturn n\n}\n\nfunc (p *parser) mapping() *Node {\n\tn := p.node(MappingNode, mapTag, string(p.event.tag), \"\")\n\tblock := true\n\tif p.event.mapping_style()&yaml_FLOW_MAPPING_STYLE != 0 {\n\t\tblock = false\n\t\tn.Style |= FlowStyle\n\t}\n\tp.anchor(n, p.event.anchor)\n\tp.expect(yaml_MAPPING_START_EVENT)\n\tfor p.peek() != yaml_MAPPING_END_EVENT {\n\t\tk := p.parseChild(n)\n\t\tif block && k.FootComment != \"\" {\n\t\t\t// Must be a foot comment for the prior value when being dedented.\n\t\t\tif len(n.Content) > 2 {\n\t\t\t\tn.Content[len(n.Content)-3].FootComment = 
k.FootComment\n\t\t\t\tk.FootComment = \"\"\n\t\t\t}\n\t\t}\n\t\tv := p.parseChild(n)\n\t\tif k.FootComment == \"\" && v.FootComment != \"\" {\n\t\t\tk.FootComment = v.FootComment\n\t\t\tv.FootComment = \"\"\n\t\t}\n\t\tif p.peek() == yaml_TAIL_COMMENT_EVENT {\n\t\t\tif k.FootComment == \"\" {\n\t\t\t\tk.FootComment = string(p.event.foot_comment)\n\t\t\t}\n\t\t\tp.expect(yaml_TAIL_COMMENT_EVENT)\n\t\t}\n\t}\n\tn.LineComment = string(p.event.line_comment)\n\tn.FootComment = string(p.event.foot_comment)\n\tif n.Style&FlowStyle == 0 && n.FootComment != \"\" && len(n.Content) > 1 {\n\t\tn.Content[len(n.Content)-2].FootComment = n.FootComment\n\t\tn.FootComment = \"\"\n\t}\n\tp.expect(yaml_MAPPING_END_EVENT)\n\treturn n\n}\n\n// ----------------------------------------------------------------------------\n// Decoder, unmarshals a node into a provided value.\n\ntype decoder struct {\n\tdoc     *Node\n\taliases map[*Node]bool\n\tterrors []string\n\n\tstringMapType  reflect.Type\n\tgeneralMapType reflect.Type\n\n\tknownFields bool\n\tuniqueKeys  bool\n\tdecodeCount int\n\taliasCount  int\n\taliasDepth  int\n\n\tmergedFields map[interface{}]bool\n}\n\nvar (\n\tnodeType       = reflect.TypeOf(Node{})\n\tdurationType   = reflect.TypeOf(time.Duration(0))\n\tstringMapType  = reflect.TypeOf(map[string]interface{}{})\n\tgeneralMapType = reflect.TypeOf(map[interface{}]interface{}{})\n\tifaceType      = generalMapType.Elem()\n\ttimeType       = reflect.TypeOf(time.Time{})\n\tptrTimeType    = reflect.TypeOf(&time.Time{})\n)\n\nfunc newDecoder() *decoder {\n\td := &decoder{\n\t\tstringMapType:  stringMapType,\n\t\tgeneralMapType: generalMapType,\n\t\tuniqueKeys:     true,\n\t}\n\td.aliases = make(map[*Node]bool)\n\treturn d\n}\n\nfunc (d *decoder) terror(n *Node, tag string, out reflect.Value) {\n\tif n.Tag != \"\" {\n\t\ttag = n.Tag\n\t}\n\tvalue := n.Value\n\tif tag != seqTag && tag != mapTag {\n\t\tif len(value) > 10 {\n\t\t\tvalue = \" `\" + value[:7] + \"...`\"\n\t\t} else 
{\n\t\t\tvalue = \" `\" + value + \"`\"\n\t\t}\n\t}\n\td.terrors = append(d.terrors, fmt.Sprintf(\"line %d: cannot unmarshal %s%s into %s\", n.Line, shortTag(tag), value, out.Type()))\n}\n\nfunc (d *decoder) callUnmarshaler(n *Node, u Unmarshaler) (good bool) {\n\terr := u.UnmarshalYAML(n)\n\tif e, ok := err.(*TypeError); ok {\n\t\td.terrors = append(d.terrors, e.Errors...)\n\t\treturn false\n\t}\n\tif err != nil {\n\t\tfail(err)\n\t}\n\treturn true\n}\n\nfunc (d *decoder) callObsoleteUnmarshaler(n *Node, u obsoleteUnmarshaler) (good bool) {\n\tterrlen := len(d.terrors)\n\terr := u.UnmarshalYAML(func(v interface{}) (err error) {\n\t\tdefer handleErr(&err)\n\t\td.unmarshal(n, reflect.ValueOf(v))\n\t\tif len(d.terrors) > terrlen {\n\t\t\tissues := d.terrors[terrlen:]\n\t\t\td.terrors = d.terrors[:terrlen]\n\t\t\treturn &TypeError{issues}\n\t\t}\n\t\treturn nil\n\t})\n\tif e, ok := err.(*TypeError); ok {\n\t\td.terrors = append(d.terrors, e.Errors...)\n\t\treturn false\n\t}\n\tif err != nil {\n\t\tfail(err)\n\t}\n\treturn true\n}\n\n// d.prepare initializes and dereferences pointers and calls UnmarshalYAML\n// if a value is found to implement it.\n// It returns the initialized and dereferenced out value, whether\n// unmarshalling was already done by UnmarshalYAML, and if so whether\n// its types unmarshalled appropriately.\n//\n// If n holds a null value, prepare returns before doing anything.\nfunc (d *decoder) prepare(n *Node, out reflect.Value) (newout reflect.Value, unmarshaled, good bool) {\n\tif n.ShortTag() == nullTag {\n\t\treturn out, false, false\n\t}\n\tagain := true\n\tfor again {\n\t\tagain = false\n\t\tif out.Kind() == reflect.Ptr {\n\t\t\tif out.IsNil() {\n\t\t\t\tout.Set(reflect.New(out.Type().Elem()))\n\t\t\t}\n\t\t\tout = out.Elem()\n\t\t\tagain = true\n\t\t}\n\t\tif out.CanAddr() {\n\t\t\touti := out.Addr().Interface()\n\t\t\tif u, ok := outi.(Unmarshaler); ok {\n\t\t\t\tgood = d.callUnmarshaler(n, u)\n\t\t\t\treturn out, true, 
good\n\t\t\t}\n\t\t\tif u, ok := outi.(obsoleteUnmarshaler); ok {\n\t\t\t\tgood = d.callObsoleteUnmarshaler(n, u)\n\t\t\t\treturn out, true, good\n\t\t\t}\n\t\t}\n\t}\n\treturn out, false, false\n}\n\nfunc (d *decoder) fieldByIndex(n *Node, v reflect.Value, index []int) (field reflect.Value) {\n\tif n.ShortTag() == nullTag {\n\t\treturn reflect.Value{}\n\t}\n\tfor _, num := range index {\n\t\tfor {\n\t\t\tif v.Kind() == reflect.Ptr {\n\t\t\t\tif v.IsNil() {\n\t\t\t\t\tv.Set(reflect.New(v.Type().Elem()))\n\t\t\t\t}\n\t\t\t\tv = v.Elem()\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t\tv = v.Field(num)\n\t}\n\treturn v\n}\n\nconst (\n\t// 400,000 decode operations is ~500kb of dense object declarations, or\n\t// ~5kb of dense object declarations with 10000% alias expansion\n\talias_ratio_range_low = 400000\n\n\t// 4,000,000 decode operations is ~5MB of dense object declarations, or\n\t// ~4.5MB of dense object declarations with 10% alias expansion\n\talias_ratio_range_high = 4000000\n\n\t// alias_ratio_range is the range over which we scale allowed alias ratios\n\talias_ratio_range = float64(alias_ratio_range_high - alias_ratio_range_low)\n)\n\nfunc allowedAliasRatio(decodeCount int) float64 {\n\tswitch {\n\tcase decodeCount <= alias_ratio_range_low:\n\t\t// allow 99% to come from alias expansion for small-to-medium documents\n\t\treturn 0.99\n\tcase decodeCount >= alias_ratio_range_high:\n\t\t// allow 10% to come from alias expansion for very large documents\n\t\treturn 0.10\n\tdefault:\n\t\t// scale smoothly from 99% down to 10% over the range.\n\t\t// this maps to 396,000 - 400,000 allowed alias-driven decodes over the range.\n\t\t// 400,000 decode operations is ~100MB of allocations in worst-case scenarios (single-item maps).\n\t\treturn 0.99 - 0.89*(float64(decodeCount-alias_ratio_range_low)/alias_ratio_range)\n\t}\n}\n\nfunc (d *decoder) unmarshal(n *Node, out reflect.Value) (good bool) {\n\td.decodeCount++\n\tif d.aliasDepth > 0 
{\n\t\td.aliasCount++\n\t}\n\tif d.aliasCount > 100 && d.decodeCount > 1000 && float64(d.aliasCount)/float64(d.decodeCount) > allowedAliasRatio(d.decodeCount) {\n\t\tfailf(\"document contains excessive aliasing\")\n\t}\n\tif out.Type() == nodeType {\n\t\tout.Set(reflect.ValueOf(n).Elem())\n\t\treturn true\n\t}\n\tswitch n.Kind {\n\tcase DocumentNode:\n\t\treturn d.document(n, out)\n\tcase AliasNode:\n\t\treturn d.alias(n, out)\n\t}\n\tout, unmarshaled, good := d.prepare(n, out)\n\tif unmarshaled {\n\t\treturn good\n\t}\n\tswitch n.Kind {\n\tcase ScalarNode:\n\t\tgood = d.scalar(n, out)\n\tcase MappingNode:\n\t\tgood = d.mapping(n, out)\n\tcase SequenceNode:\n\t\tgood = d.sequence(n, out)\n\tcase 0:\n\t\tif n.IsZero() {\n\t\t\treturn d.null(out)\n\t\t}\n\t\tfallthrough\n\tdefault:\n\t\tfailf(\"cannot decode node with unknown kind %d\", n.Kind)\n\t}\n\treturn good\n}\n\nfunc (d *decoder) document(n *Node, out reflect.Value) (good bool) {\n\tif len(n.Content) == 1 {\n\t\td.doc = n\n\t\td.unmarshal(n.Content[0], out)\n\t\treturn true\n\t}\n\treturn false\n}\n\nfunc (d *decoder) alias(n *Node, out reflect.Value) (good bool) {\n\tif d.aliases[n] {\n\t\t// TODO this could actually be allowed in some circumstances.\n\t\tfailf(\"anchor '%s' value contains itself\", n.Value)\n\t}\n\td.aliases[n] = true\n\td.aliasDepth++\n\tgood = d.unmarshal(n.Alias, out)\n\td.aliasDepth--\n\tdelete(d.aliases, n)\n\treturn good\n}\n\nvar zeroValue reflect.Value\n\nfunc resetMap(out reflect.Value) {\n\tfor _, k := range out.MapKeys() {\n\t\tout.SetMapIndex(k, zeroValue)\n\t}\n}\n\nfunc (d *decoder) null(out reflect.Value) bool {\n\tif out.CanAddr() {\n\t\tswitch out.Kind() {\n\t\tcase reflect.Interface, reflect.Ptr, reflect.Map, reflect.Slice:\n\t\t\tout.Set(reflect.Zero(out.Type()))\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc (d *decoder) scalar(n *Node, out reflect.Value) bool {\n\tvar tag string\n\tvar resolved interface{}\n\tif n.indicatedString() {\n\t\ttag = 
strTag\n\t\tresolved = n.Value\n\t} else {\n\t\ttag, resolved = resolve(n.Tag, n.Value)\n\t\tif tag == binaryTag {\n\t\t\tdata, err := base64.StdEncoding.DecodeString(resolved.(string))\n\t\t\tif err != nil {\n\t\t\t\tfailf(\"!!binary value contains invalid base64 data\")\n\t\t\t}\n\t\t\tresolved = string(data)\n\t\t}\n\t}\n\tif resolved == nil {\n\t\treturn d.null(out)\n\t}\n\tif resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {\n\t\t// We've resolved to exactly the type we want, so use that.\n\t\tout.Set(resolvedv)\n\t\treturn true\n\t}\n\t// Perhaps we can use the value as a TextUnmarshaler to\n\t// set its value.\n\tif out.CanAddr() {\n\t\tu, ok := out.Addr().Interface().(encoding.TextUnmarshaler)\n\t\tif ok {\n\t\t\tvar text []byte\n\t\t\tif tag == binaryTag {\n\t\t\t\ttext = []byte(resolved.(string))\n\t\t\t} else {\n\t\t\t\t// We let any value be unmarshaled into TextUnmarshaler.\n\t\t\t\t// That might be more lax than we'd like, but the\n\t\t\t\t// TextUnmarshaler itself should bowl out any dubious values.\n\t\t\t\ttext = []byte(n.Value)\n\t\t\t}\n\t\t\terr := u.UnmarshalText(text)\n\t\t\tif err != nil {\n\t\t\t\tfail(err)\n\t\t\t}\n\t\t\treturn true\n\t\t}\n\t}\n\tswitch out.Kind() {\n\tcase reflect.String:\n\t\tif tag == binaryTag {\n\t\t\tout.SetString(resolved.(string))\n\t\t\treturn true\n\t\t}\n\t\tout.SetString(n.Value)\n\t\treturn true\n\tcase reflect.Interface:\n\t\tout.Set(reflect.ValueOf(resolved))\n\t\treturn true\n\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\t// This used to work in v2, but it's very unfriendly.\n\t\tisDuration := out.Type() == durationType\n\n\t\tswitch resolved := resolved.(type) {\n\t\tcase int:\n\t\t\tif !isDuration && !out.OverflowInt(int64(resolved)) {\n\t\t\t\tout.SetInt(int64(resolved))\n\t\t\t\treturn true\n\t\t\t}\n\t\tcase int64:\n\t\t\tif !isDuration && !out.OverflowInt(resolved) {\n\t\t\t\tout.SetInt(resolved)\n\t\t\t\treturn true\n\t\t\t}\n\t\tcase 
uint64:\n\t\t\tif !isDuration && resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {\n\t\t\t\tout.SetInt(int64(resolved))\n\t\t\t\treturn true\n\t\t\t}\n\t\tcase float64:\n\t\t\tif !isDuration && resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {\n\t\t\t\tout.SetInt(int64(resolved))\n\t\t\t\treturn true\n\t\t\t}\n\t\tcase string:\n\t\t\tif out.Type() == durationType {\n\t\t\t\td, err := time.ParseDuration(resolved)\n\t\t\t\tif err == nil {\n\t\t\t\t\tout.SetInt(int64(d))\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:\n\t\tswitch resolved := resolved.(type) {\n\t\tcase int:\n\t\t\tif resolved >= 0 && !out.OverflowUint(uint64(resolved)) {\n\t\t\t\tout.SetUint(uint64(resolved))\n\t\t\t\treturn true\n\t\t\t}\n\t\tcase int64:\n\t\t\tif resolved >= 0 && !out.OverflowUint(uint64(resolved)) {\n\t\t\t\tout.SetUint(uint64(resolved))\n\t\t\t\treturn true\n\t\t\t}\n\t\tcase uint64:\n\t\t\tif !out.OverflowUint(uint64(resolved)) {\n\t\t\t\tout.SetUint(uint64(resolved))\n\t\t\t\treturn true\n\t\t\t}\n\t\tcase float64:\n\t\t\tif resolved <= math.MaxUint64 && !out.OverflowUint(uint64(resolved)) {\n\t\t\t\tout.SetUint(uint64(resolved))\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\tcase reflect.Bool:\n\t\tswitch resolved := resolved.(type) {\n\t\tcase bool:\n\t\t\tout.SetBool(resolved)\n\t\t\treturn true\n\t\tcase string:\n\t\t\t// This offers some compatibility with the 1.1 spec (https://yaml.org/type/bool.html).\n\t\t\t// It only works if explicitly attempting to unmarshal into a typed bool value.\n\t\t\tswitch resolved {\n\t\t\tcase \"y\", \"Y\", \"yes\", \"Yes\", \"YES\", \"on\", \"On\", \"ON\":\n\t\t\t\tout.SetBool(true)\n\t\t\t\treturn true\n\t\t\tcase \"n\", \"N\", \"no\", \"No\", \"NO\", \"off\", \"Off\", \"OFF\":\n\t\t\t\tout.SetBool(false)\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\tcase reflect.Float32, reflect.Float64:\n\t\tswitch resolved := 
resolved.(type) {\n\t\tcase int:\n\t\t\tout.SetFloat(float64(resolved))\n\t\t\treturn true\n\t\tcase int64:\n\t\t\tout.SetFloat(float64(resolved))\n\t\t\treturn true\n\t\tcase uint64:\n\t\t\tout.SetFloat(float64(resolved))\n\t\t\treturn true\n\t\tcase float64:\n\t\t\tout.SetFloat(resolved)\n\t\t\treturn true\n\t\t}\n\tcase reflect.Struct:\n\t\tif resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {\n\t\t\tout.Set(resolvedv)\n\t\t\treturn true\n\t\t}\n\tcase reflect.Ptr:\n\t\tpanic(\"yaml internal error: please report the issue\")\n\t}\n\td.terror(n, tag, out)\n\treturn false\n}\n\nfunc settableValueOf(i interface{}) reflect.Value {\n\tv := reflect.ValueOf(i)\n\tsv := reflect.New(v.Type()).Elem()\n\tsv.Set(v)\n\treturn sv\n}\n\nfunc (d *decoder) sequence(n *Node, out reflect.Value) (good bool) {\n\tl := len(n.Content)\n\n\tvar iface reflect.Value\n\tswitch out.Kind() {\n\tcase reflect.Slice:\n\t\tout.Set(reflect.MakeSlice(out.Type(), l, l))\n\tcase reflect.Array:\n\t\tif l != out.Len() {\n\t\t\tfailf(\"invalid array: want %d elements but got %d\", out.Len(), l)\n\t\t}\n\tcase reflect.Interface:\n\t\t// No type hints. 
Will have to use a generic sequence.\n\t\tiface = out\n\t\tout = settableValueOf(make([]interface{}, l))\n\tdefault:\n\t\td.terror(n, seqTag, out)\n\t\treturn false\n\t}\n\tet := out.Type().Elem()\n\n\tj := 0\n\tfor i := 0; i < l; i++ {\n\t\te := reflect.New(et).Elem()\n\t\tif ok := d.unmarshal(n.Content[i], e); ok {\n\t\t\tout.Index(j).Set(e)\n\t\t\tj++\n\t\t}\n\t}\n\tif out.Kind() != reflect.Array {\n\t\tout.Set(out.Slice(0, j))\n\t}\n\tif iface.IsValid() {\n\t\tiface.Set(out)\n\t}\n\treturn true\n}\n\nfunc (d *decoder) mapping(n *Node, out reflect.Value) (good bool) {\n\tl := len(n.Content)\n\tif d.uniqueKeys {\n\t\tnerrs := len(d.terrors)\n\t\tfor i := 0; i < l; i += 2 {\n\t\t\tni := n.Content[i]\n\t\t\tfor j := i + 2; j < l; j += 2 {\n\t\t\t\tnj := n.Content[j]\n\t\t\t\tif ni.Kind == nj.Kind && ni.Value == nj.Value {\n\t\t\t\t\td.terrors = append(d.terrors, fmt.Sprintf(\"line %d: mapping key %#v already defined at line %d\", nj.Line, nj.Value, ni.Line))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif len(d.terrors) > nerrs {\n\t\t\treturn false\n\t\t}\n\t}\n\tswitch out.Kind() {\n\tcase reflect.Struct:\n\t\treturn d.mappingStruct(n, out)\n\tcase reflect.Map:\n\t\t// okay\n\tcase reflect.Interface:\n\t\tiface := out\n\t\tif isStringMap(n) {\n\t\t\tout = reflect.MakeMap(d.stringMapType)\n\t\t} else {\n\t\t\tout = reflect.MakeMap(d.generalMapType)\n\t\t}\n\t\tiface.Set(out)\n\tdefault:\n\t\td.terror(n, mapTag, out)\n\t\treturn false\n\t}\n\n\toutt := out.Type()\n\tkt := outt.Key()\n\tet := outt.Elem()\n\n\tstringMapType := d.stringMapType\n\tgeneralMapType := d.generalMapType\n\tif outt.Elem() == ifaceType {\n\t\tif outt.Key().Kind() == reflect.String {\n\t\t\td.stringMapType = outt\n\t\t} else if outt.Key() == ifaceType {\n\t\t\td.generalMapType = outt\n\t\t}\n\t}\n\n\tmergedFields := d.mergedFields\n\td.mergedFields = nil\n\n\tvar mergeNode *Node\n\n\tmapIsNew := false\n\tif out.IsNil() {\n\t\tout.Set(reflect.MakeMap(outt))\n\t\tmapIsNew = true\n\t}\n\tfor i := 0; i < l; i 
+= 2 {\n\t\tif isMerge(n.Content[i]) {\n\t\t\tmergeNode = n.Content[i+1]\n\t\t\tcontinue\n\t\t}\n\t\tk := reflect.New(kt).Elem()\n\t\tif d.unmarshal(n.Content[i], k) {\n\t\t\tif mergedFields != nil {\n\t\t\t\tki := k.Interface()\n\t\t\t\tif mergedFields[ki] {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tmergedFields[ki] = true\n\t\t\t}\n\t\t\tkkind := k.Kind()\n\t\t\tif kkind == reflect.Interface {\n\t\t\t\tkkind = k.Elem().Kind()\n\t\t\t}\n\t\t\tif kkind == reflect.Map || kkind == reflect.Slice {\n\t\t\t\tfailf(\"invalid map key: %#v\", k.Interface())\n\t\t\t}\n\t\t\te := reflect.New(et).Elem()\n\t\t\tif d.unmarshal(n.Content[i+1], e) || n.Content[i+1].ShortTag() == nullTag && (mapIsNew || !out.MapIndex(k).IsValid()) {\n\t\t\t\tout.SetMapIndex(k, e)\n\t\t\t}\n\t\t}\n\t}\n\n\td.mergedFields = mergedFields\n\tif mergeNode != nil {\n\t\td.merge(n, mergeNode, out)\n\t}\n\n\td.stringMapType = stringMapType\n\td.generalMapType = generalMapType\n\treturn true\n}\n\nfunc isStringMap(n *Node) bool {\n\tif n.Kind != MappingNode {\n\t\treturn false\n\t}\n\tl := len(n.Content)\n\tfor i := 0; i < l; i += 2 {\n\t\tshortTag := n.Content[i].ShortTag()\n\t\tif shortTag != strTag && shortTag != mergeTag {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\nfunc (d *decoder) mappingStruct(n *Node, out reflect.Value) (good bool) {\n\tsinfo, err := getStructInfo(out.Type())\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tvar inlineMap reflect.Value\n\tvar elemType reflect.Type\n\tif sinfo.InlineMap != -1 {\n\t\tinlineMap = out.Field(sinfo.InlineMap)\n\t\telemType = inlineMap.Type().Elem()\n\t}\n\n\tfor _, index := range sinfo.InlineUnmarshalers {\n\t\tfield := d.fieldByIndex(n, out, index)\n\t\td.prepare(n, field)\n\t}\n\n\tmergedFields := d.mergedFields\n\td.mergedFields = nil\n\tvar mergeNode *Node\n\tvar doneFields []bool\n\tif d.uniqueKeys {\n\t\tdoneFields = make([]bool, len(sinfo.FieldsList))\n\t}\n\tname := settableValueOf(\"\")\n\tl := len(n.Content)\n\tfor i := 0; i < l; i += 2 
{\n\t\tni := n.Content[i]\n\t\tif isMerge(ni) {\n\t\t\tmergeNode = n.Content[i+1]\n\t\t\tcontinue\n\t\t}\n\t\tif !d.unmarshal(ni, name) {\n\t\t\tcontinue\n\t\t}\n\t\tsname := name.String()\n\t\tif mergedFields != nil {\n\t\t\tif mergedFields[sname] {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tmergedFields[sname] = true\n\t\t}\n\t\tif info, ok := sinfo.FieldsMap[sname]; ok {\n\t\t\tif d.uniqueKeys {\n\t\t\t\tif doneFields[info.Id] {\n\t\t\t\t\td.terrors = append(d.terrors, fmt.Sprintf(\"line %d: field %s already set in type %s\", ni.Line, name.String(), out.Type()))\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tdoneFields[info.Id] = true\n\t\t\t}\n\t\t\tvar field reflect.Value\n\t\t\tif info.Inline == nil {\n\t\t\t\tfield = out.Field(info.Num)\n\t\t\t} else {\n\t\t\t\tfield = d.fieldByIndex(n, out, info.Inline)\n\t\t\t}\n\t\t\td.unmarshal(n.Content[i+1], field)\n\t\t} else if sinfo.InlineMap != -1 {\n\t\t\tif inlineMap.IsNil() {\n\t\t\t\tinlineMap.Set(reflect.MakeMap(inlineMap.Type()))\n\t\t\t}\n\t\t\tvalue := reflect.New(elemType).Elem()\n\t\t\td.unmarshal(n.Content[i+1], value)\n\t\t\tinlineMap.SetMapIndex(name, value)\n\t\t} else if d.knownFields {\n\t\t\td.terrors = append(d.terrors, fmt.Sprintf(\"line %d: field %s not found in type %s\", ni.Line, name.String(), out.Type()))\n\t\t}\n\t}\n\n\td.mergedFields = mergedFields\n\tif mergeNode != nil {\n\t\td.merge(n, mergeNode, out)\n\t}\n\treturn true\n}\n\nfunc failWantMap() {\n\tfailf(\"map merge requires map or sequence of maps as the value\")\n}\n\nfunc (d *decoder) merge(parent *Node, merge *Node, out reflect.Value) {\n\tmergedFields := d.mergedFields\n\tif mergedFields == nil {\n\t\td.mergedFields = make(map[interface{}]bool)\n\t\tfor i := 0; i < len(parent.Content); i += 2 {\n\t\t\tk := reflect.New(ifaceType).Elem()\n\t\t\tif d.unmarshal(parent.Content[i], k) {\n\t\t\t\td.mergedFields[k.Interface()] = true\n\t\t\t}\n\t\t}\n\t}\n\n\tswitch merge.Kind {\n\tcase MappingNode:\n\t\td.unmarshal(merge, out)\n\tcase 
AliasNode:\n\t\tif merge.Alias != nil && merge.Alias.Kind != MappingNode {\n\t\t\tfailWantMap()\n\t\t}\n\t\td.unmarshal(merge, out)\n\tcase SequenceNode:\n\t\tfor i := 0; i < len(merge.Content); i++ {\n\t\t\tni := merge.Content[i]\n\t\t\tif ni.Kind == AliasNode {\n\t\t\t\tif ni.Alias != nil && ni.Alias.Kind != MappingNode {\n\t\t\t\t\tfailWantMap()\n\t\t\t\t}\n\t\t\t} else if ni.Kind != MappingNode {\n\t\t\t\tfailWantMap()\n\t\t\t}\n\t\t\td.unmarshal(ni, out)\n\t\t}\n\tdefault:\n\t\tfailWantMap()\n\t}\n\n\td.mergedFields = mergedFields\n}\n\nfunc isMerge(n *Node) bool {\n\treturn n.Kind == ScalarNode && n.Value == \"<<\" && (n.Tag == \"\" || n.Tag == \"!\" || shortTag(n.Tag) == mergeTag)\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/emitterc.go",
    "content": "//\n// Copyright (c) 2011-2019 Canonical Ltd\n// Copyright (c) 2006-2010 Kirill Simonov\n//\n// Permission is hereby granted, free of charge, to any person obtaining a copy of\n// this software and associated documentation files (the \"Software\"), to deal in\n// the Software without restriction, including without limitation the rights to\n// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\n// of the Software, and to permit persons to whom the Software is furnished to do\n// so, subject to the following conditions:\n//\n// The above copyright notice and this permission notice shall be included in all\n// copies or substantial portions of the Software.\n//\n// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n// SOFTWARE.\n\npackage yaml\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n)\n\n// Flush the buffer if needed.\nfunc flush(emitter *yaml_emitter_t) bool {\n\tif emitter.buffer_pos+5 >= len(emitter.buffer) {\n\t\treturn yaml_emitter_flush(emitter)\n\t}\n\treturn true\n}\n\n// Put a character to the output buffer.\nfunc put(emitter *yaml_emitter_t, value byte) bool {\n\tif emitter.buffer_pos+5 >= len(emitter.buffer) && !yaml_emitter_flush(emitter) {\n\t\treturn false\n\t}\n\temitter.buffer[emitter.buffer_pos] = value\n\temitter.buffer_pos++\n\temitter.column++\n\treturn true\n}\n\n// Put a line break to the output buffer.\nfunc put_break(emitter *yaml_emitter_t) bool {\n\tif emitter.buffer_pos+5 >= len(emitter.buffer) && !yaml_emitter_flush(emitter) {\n\t\treturn false\n\t}\n\tswitch emitter.line_break 
{\n\tcase yaml_CR_BREAK:\n\t\temitter.buffer[emitter.buffer_pos] = '\\r'\n\t\temitter.buffer_pos += 1\n\tcase yaml_LN_BREAK:\n\t\temitter.buffer[emitter.buffer_pos] = '\\n'\n\t\temitter.buffer_pos += 1\n\tcase yaml_CRLN_BREAK:\n\t\temitter.buffer[emitter.buffer_pos+0] = '\\r'\n\t\temitter.buffer[emitter.buffer_pos+1] = '\\n'\n\t\temitter.buffer_pos += 2\n\tdefault:\n\t\tpanic(\"unknown line break setting\")\n\t}\n\tif emitter.column == 0 {\n\t\temitter.space_above = true\n\t}\n\temitter.column = 0\n\temitter.line++\n\t// [Go] Do this here and below and drop from everywhere else (see commented lines).\n\temitter.indention = true\n\treturn true\n}\n\n// Copy a character from a string into buffer.\nfunc write(emitter *yaml_emitter_t, s []byte, i *int) bool {\n\tif emitter.buffer_pos+5 >= len(emitter.buffer) && !yaml_emitter_flush(emitter) {\n\t\treturn false\n\t}\n\tp := emitter.buffer_pos\n\tw := width(s[*i])\n\tswitch w {\n\tcase 4:\n\t\temitter.buffer[p+3] = s[*i+3]\n\t\tfallthrough\n\tcase 3:\n\t\temitter.buffer[p+2] = s[*i+2]\n\t\tfallthrough\n\tcase 2:\n\t\temitter.buffer[p+1] = s[*i+1]\n\t\tfallthrough\n\tcase 1:\n\t\temitter.buffer[p+0] = s[*i+0]\n\tdefault:\n\t\tpanic(\"unknown character width\")\n\t}\n\temitter.column++\n\temitter.buffer_pos += w\n\t*i += w\n\treturn true\n}\n\n// Write a whole string into buffer.\nfunc write_all(emitter *yaml_emitter_t, s []byte) bool {\n\tfor i := 0; i < len(s); {\n\t\tif !write(emitter, s, &i) {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\n// Copy a line break character from a string into buffer.\nfunc write_break(emitter *yaml_emitter_t, s []byte, i *int) bool {\n\tif s[*i] == '\\n' {\n\t\tif !put_break(emitter) {\n\t\t\treturn false\n\t\t}\n\t\t*i++\n\t} else {\n\t\tif !write(emitter, s, i) {\n\t\t\treturn false\n\t\t}\n\t\tif emitter.column == 0 {\n\t\t\temitter.space_above = true\n\t\t}\n\t\temitter.column = 0\n\t\temitter.line++\n\t\t// [Go] Do this here and above and drop from everywhere else (see commented 
lines).\n\t\temitter.indention = true\n\t}\n\treturn true\n}\n\n// Set an emitter error and return false.\nfunc yaml_emitter_set_emitter_error(emitter *yaml_emitter_t, problem string) bool {\n\temitter.error = yaml_EMITTER_ERROR\n\temitter.problem = problem\n\treturn false\n}\n\n// Emit an event.\nfunc yaml_emitter_emit(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\temitter.events = append(emitter.events, *event)\n\tfor !yaml_emitter_need_more_events(emitter) {\n\t\tevent := &emitter.events[emitter.events_head]\n\t\tif !yaml_emitter_analyze_event(emitter, event) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_state_machine(emitter, event) {\n\t\t\treturn false\n\t\t}\n\t\tyaml_event_delete(event)\n\t\temitter.events_head++\n\t}\n\treturn true\n}\n\n// Check if we need to accumulate more events before emitting.\n//\n// We accumulate extra\n//  - 1 event for DOCUMENT-START\n//  - 2 events for SEQUENCE-START\n//  - 3 events for MAPPING-START\n//\nfunc yaml_emitter_need_more_events(emitter *yaml_emitter_t) bool {\n\tif emitter.events_head == len(emitter.events) {\n\t\treturn true\n\t}\n\tvar accumulate int\n\tswitch emitter.events[emitter.events_head].typ {\n\tcase yaml_DOCUMENT_START_EVENT:\n\t\taccumulate = 1\n\t\tbreak\n\tcase yaml_SEQUENCE_START_EVENT:\n\t\taccumulate = 2\n\t\tbreak\n\tcase yaml_MAPPING_START_EVENT:\n\t\taccumulate = 3\n\t\tbreak\n\tdefault:\n\t\treturn false\n\t}\n\tif len(emitter.events)-emitter.events_head > accumulate {\n\t\treturn false\n\t}\n\tvar level int\n\tfor i := emitter.events_head; i < len(emitter.events); i++ {\n\t\tswitch emitter.events[i].typ {\n\t\tcase yaml_STREAM_START_EVENT, yaml_DOCUMENT_START_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT:\n\t\t\tlevel++\n\t\tcase yaml_STREAM_END_EVENT, yaml_DOCUMENT_END_EVENT, yaml_SEQUENCE_END_EVENT, yaml_MAPPING_END_EVENT:\n\t\t\tlevel--\n\t\t}\n\t\tif level == 0 {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\n// Append a directive to the directives 
stack.\nfunc yaml_emitter_append_tag_directive(emitter *yaml_emitter_t, value *yaml_tag_directive_t, allow_duplicates bool) bool {\n\tfor i := 0; i < len(emitter.tag_directives); i++ {\n\t\tif bytes.Equal(value.handle, emitter.tag_directives[i].handle) {\n\t\t\tif allow_duplicates {\n\t\t\t\treturn true\n\t\t\t}\n\t\t\treturn yaml_emitter_set_emitter_error(emitter, \"duplicate %TAG directive\")\n\t\t}\n\t}\n\n\t// [Go] Do we actually need to copy this given garbage collection\n\t// and the lack of deallocating destructors?\n\ttag_copy := yaml_tag_directive_t{\n\t\thandle: make([]byte, len(value.handle)),\n\t\tprefix: make([]byte, len(value.prefix)),\n\t}\n\tcopy(tag_copy.handle, value.handle)\n\tcopy(tag_copy.prefix, value.prefix)\n\temitter.tag_directives = append(emitter.tag_directives, tag_copy)\n\treturn true\n}\n\n// Increase the indentation level.\nfunc yaml_emitter_increase_indent(emitter *yaml_emitter_t, flow, indentless bool) bool {\n\temitter.indents = append(emitter.indents, emitter.indent)\n\tif emitter.indent < 0 {\n\t\tif flow {\n\t\t\temitter.indent = emitter.best_indent\n\t\t} else {\n\t\t\temitter.indent = 0\n\t\t}\n\t} else if !indentless {\n\t\t// [Go] This was changed so that indentations are more regular.\n\t\tif emitter.states[len(emitter.states)-1] == yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE {\n\t\t\t// The first indent inside a sequence will just skip the \"- \" indicator.\n\t\t\temitter.indent += 2\n\t\t} else {\n\t\t\t// Everything else aligns to the chosen indentation.\n\t\t\temitter.indent = emitter.best_indent*((emitter.indent+emitter.best_indent)/emitter.best_indent)\n\t\t}\n\t}\n\treturn true\n}\n\n// State dispatcher.\nfunc yaml_emitter_state_machine(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\tswitch emitter.state {\n\tdefault:\n\tcase yaml_EMIT_STREAM_START_STATE:\n\t\treturn yaml_emitter_emit_stream_start(emitter, event)\n\n\tcase yaml_EMIT_FIRST_DOCUMENT_START_STATE:\n\t\treturn yaml_emitter_emit_document_start(emitter, 
event, true)\n\n\tcase yaml_EMIT_DOCUMENT_START_STATE:\n\t\treturn yaml_emitter_emit_document_start(emitter, event, false)\n\n\tcase yaml_EMIT_DOCUMENT_CONTENT_STATE:\n\t\treturn yaml_emitter_emit_document_content(emitter, event)\n\n\tcase yaml_EMIT_DOCUMENT_END_STATE:\n\t\treturn yaml_emitter_emit_document_end(emitter, event)\n\n\tcase yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE:\n\t\treturn yaml_emitter_emit_flow_sequence_item(emitter, event, true, false)\n\n\tcase yaml_EMIT_FLOW_SEQUENCE_TRAIL_ITEM_STATE:\n\t\treturn yaml_emitter_emit_flow_sequence_item(emitter, event, false, true)\n\n\tcase yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE:\n\t\treturn yaml_emitter_emit_flow_sequence_item(emitter, event, false, false)\n\n\tcase yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE:\n\t\treturn yaml_emitter_emit_flow_mapping_key(emitter, event, true, false)\n\n\tcase yaml_EMIT_FLOW_MAPPING_TRAIL_KEY_STATE:\n\t\treturn yaml_emitter_emit_flow_mapping_key(emitter, event, false, true)\n\n\tcase yaml_EMIT_FLOW_MAPPING_KEY_STATE:\n\t\treturn yaml_emitter_emit_flow_mapping_key(emitter, event, false, false)\n\n\tcase yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE:\n\t\treturn yaml_emitter_emit_flow_mapping_value(emitter, event, true)\n\n\tcase yaml_EMIT_FLOW_MAPPING_VALUE_STATE:\n\t\treturn yaml_emitter_emit_flow_mapping_value(emitter, event, false)\n\n\tcase yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE:\n\t\treturn yaml_emitter_emit_block_sequence_item(emitter, event, true)\n\n\tcase yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE:\n\t\treturn yaml_emitter_emit_block_sequence_item(emitter, event, false)\n\n\tcase yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE:\n\t\treturn yaml_emitter_emit_block_mapping_key(emitter, event, true)\n\n\tcase yaml_EMIT_BLOCK_MAPPING_KEY_STATE:\n\t\treturn yaml_emitter_emit_block_mapping_key(emitter, event, false)\n\n\tcase yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE:\n\t\treturn yaml_emitter_emit_block_mapping_value(emitter, event, true)\n\n\tcase yaml_EMIT_BLOCK_MAPPING_VALUE_STATE:\n\t\treturn 
yaml_emitter_emit_block_mapping_value(emitter, event, false)\n\n\tcase yaml_EMIT_END_STATE:\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"expected nothing after STREAM-END\")\n\t}\n\tpanic(\"invalid emitter state\")\n}\n\n// Expect STREAM-START.\nfunc yaml_emitter_emit_stream_start(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\tif event.typ != yaml_STREAM_START_EVENT {\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"expected STREAM-START\")\n\t}\n\tif emitter.encoding == yaml_ANY_ENCODING {\n\t\temitter.encoding = event.encoding\n\t\tif emitter.encoding == yaml_ANY_ENCODING {\n\t\t\temitter.encoding = yaml_UTF8_ENCODING\n\t\t}\n\t}\n\tif emitter.best_indent < 2 || emitter.best_indent > 9 {\n\t\temitter.best_indent = 2\n\t}\n\tif emitter.best_width >= 0 && emitter.best_width <= emitter.best_indent*2 {\n\t\temitter.best_width = 80\n\t}\n\tif emitter.best_width < 0 {\n\t\temitter.best_width = 1<<31 - 1\n\t}\n\tif emitter.line_break == yaml_ANY_BREAK {\n\t\temitter.line_break = yaml_LN_BREAK\n\t}\n\n\temitter.indent = -1\n\temitter.line = 0\n\temitter.column = 0\n\temitter.whitespace = true\n\temitter.indention = true\n\temitter.space_above = true\n\temitter.foot_indent = -1\n\n\tif emitter.encoding != yaml_UTF8_ENCODING {\n\t\tif !yaml_emitter_write_bom(emitter) {\n\t\t\treturn false\n\t\t}\n\t}\n\temitter.state = yaml_EMIT_FIRST_DOCUMENT_START_STATE\n\treturn true\n}\n\n// Expect DOCUMENT-START or STREAM-END.\nfunc yaml_emitter_emit_document_start(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool {\n\n\tif event.typ == yaml_DOCUMENT_START_EVENT {\n\n\t\tif event.version_directive != nil {\n\t\t\tif !yaml_emitter_analyze_version_directive(emitter, event.version_directive) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\tfor i := 0; i < len(event.tag_directives); i++ {\n\t\t\ttag_directive := &event.tag_directives[i]\n\t\t\tif !yaml_emitter_analyze_tag_directive(emitter, tag_directive) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif 
!yaml_emitter_append_tag_directive(emitter, tag_directive, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\tfor i := 0; i < len(default_tag_directives); i++ {\n\t\t\ttag_directive := &default_tag_directives[i]\n\t\t\tif !yaml_emitter_append_tag_directive(emitter, tag_directive, true) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\timplicit := event.implicit\n\t\tif !first || emitter.canonical {\n\t\t\timplicit = false\n\t\t}\n\n\t\tif emitter.open_ended && (event.version_directive != nil || len(event.tag_directives) > 0) {\n\t\t\tif !yaml_emitter_write_indicator(emitter, []byte(\"...\"), true, false, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\tif event.version_directive != nil {\n\t\t\timplicit = false\n\t\t\tif !yaml_emitter_write_indicator(emitter, []byte(\"%YAML\"), true, false, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif !yaml_emitter_write_indicator(emitter, []byte(\"1.1\"), true, false, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\tif len(event.tag_directives) > 0 {\n\t\t\timplicit = false\n\t\t\tfor i := 0; i < len(event.tag_directives); i++ {\n\t\t\t\ttag_directive := &event.tag_directives[i]\n\t\t\t\tif !yaml_emitter_write_indicator(emitter, []byte(\"%TAG\"), true, false, false) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\tif !yaml_emitter_write_tag_handle(emitter, tag_directive.handle) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\tif !yaml_emitter_write_tag_content(emitter, tag_directive.prefix, true) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif yaml_emitter_check_empty_document(emitter) {\n\t\t\timplicit = false\n\t\t}\n\t\tif !implicit {\n\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif !yaml_emitter_write_indicator(emitter, 
[]byte(\"---\"), true, false, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif emitter.canonical || true {\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif len(emitter.head_comment) > 0 {\n\t\t\tif !yaml_emitter_process_head_comment(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif !put_break(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\temitter.state = yaml_EMIT_DOCUMENT_CONTENT_STATE\n\t\treturn true\n\t}\n\n\tif event.typ == yaml_STREAM_END_EVENT {\n\t\tif emitter.open_ended {\n\t\t\tif !yaml_emitter_write_indicator(emitter, []byte(\"...\"), true, false, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif !yaml_emitter_flush(emitter) {\n\t\t\treturn false\n\t\t}\n\t\temitter.state = yaml_EMIT_END_STATE\n\t\treturn true\n\t}\n\n\treturn yaml_emitter_set_emitter_error(emitter, \"expected DOCUMENT-START or STREAM-END\")\n}\n\n// Expect the root node.\nfunc yaml_emitter_emit_document_content(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\temitter.states = append(emitter.states, yaml_EMIT_DOCUMENT_END_STATE)\n\n\tif !yaml_emitter_process_head_comment(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_emit_node(emitter, event, true, false, false, false) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_line_comment(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_foot_comment(emitter) {\n\t\treturn false\n\t}\n\treturn true\n}\n\n// Expect DOCUMENT-END.\nfunc yaml_emitter_emit_document_end(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\tif event.typ != yaml_DOCUMENT_END_EVENT {\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"expected DOCUMENT-END\")\n\t}\n\t// [Go] Force document foot separation.\n\temitter.foot_indent = 0\n\tif !yaml_emitter_process_foot_comment(emitter) {\n\t\treturn false\n\t}\n\temitter.foot_indent = -1\n\tif !yaml_emitter_write_indent(emitter) 
{\n\t\treturn false\n\t}\n\tif !event.implicit {\n\t\t// [Go] Allocate the slice elsewhere.\n\t\tif !yaml_emitter_write_indicator(emitter, []byte(\"...\"), true, false, false) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif !yaml_emitter_flush(emitter) {\n\t\treturn false\n\t}\n\temitter.state = yaml_EMIT_DOCUMENT_START_STATE\n\temitter.tag_directives = emitter.tag_directives[:0]\n\treturn true\n}\n\n// Expect a flow item node.\nfunc yaml_emitter_emit_flow_sequence_item(emitter *yaml_emitter_t, event *yaml_event_t, first, trail bool) bool {\n\tif first {\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{'['}, true, true, false) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_increase_indent(emitter, true, false) {\n\t\t\treturn false\n\t\t}\n\t\temitter.flow_level++\n\t}\n\n\tif event.typ == yaml_SEQUENCE_END_EVENT {\n\t\tif emitter.canonical && !first && !trail {\n\t\t\tif !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\temitter.flow_level--\n\t\temitter.indent = emitter.indents[len(emitter.indents)-1]\n\t\temitter.indents = emitter.indents[:len(emitter.indents)-1]\n\t\tif emitter.column == 0 || emitter.canonical && !first {\n\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{']'}, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_process_line_comment(emitter) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_process_foot_comment(emitter) {\n\t\t\treturn false\n\t\t}\n\t\temitter.state = emitter.states[len(emitter.states)-1]\n\t\temitter.states = emitter.states[:len(emitter.states)-1]\n\n\t\treturn true\n\t}\n\n\tif !first && !trail {\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tif !yaml_emitter_process_head_comment(emitter) {\n\t\treturn 
false\n\t}\n\tif emitter.column == 0 {\n\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tif emitter.canonical || emitter.column > emitter.best_width {\n\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif len(emitter.line_comment)+len(emitter.foot_comment)+len(emitter.tail_comment) > 0 {\n\t\temitter.states = append(emitter.states, yaml_EMIT_FLOW_SEQUENCE_TRAIL_ITEM_STATE)\n\t} else {\n\t\temitter.states = append(emitter.states, yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE)\n\t}\n\tif !yaml_emitter_emit_node(emitter, event, false, true, false, false) {\n\t\treturn false\n\t}\n\tif len(emitter.line_comment)+len(emitter.foot_comment)+len(emitter.tail_comment) > 0 {\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif !yaml_emitter_process_line_comment(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_foot_comment(emitter) {\n\t\treturn false\n\t}\n\treturn true\n}\n\n// Expect a flow key node.\nfunc yaml_emitter_emit_flow_mapping_key(emitter *yaml_emitter_t, event *yaml_event_t, first, trail bool) bool {\n\tif first {\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{'{'}, true, true, false) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_increase_indent(emitter, true, false) {\n\t\t\treturn false\n\t\t}\n\t\temitter.flow_level++\n\t}\n\n\tif event.typ == yaml_MAPPING_END_EVENT {\n\t\tif (emitter.canonical || len(emitter.head_comment)+len(emitter.foot_comment)+len(emitter.tail_comment) > 0) && !first && !trail {\n\t\t\tif !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif !yaml_emitter_process_head_comment(emitter) {\n\t\t\treturn false\n\t\t}\n\t\temitter.flow_level--\n\t\temitter.indent = emitter.indents[len(emitter.indents)-1]\n\t\temitter.indents = emitter.indents[:len(emitter.indents)-1]\n\t\tif emitter.canonical && !first {\n\t\t\tif 
!yaml_emitter_write_indent(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{'}'}, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_process_line_comment(emitter) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_process_foot_comment(emitter) {\n\t\t\treturn false\n\t\t}\n\t\temitter.state = emitter.states[len(emitter.states)-1]\n\t\temitter.states = emitter.states[:len(emitter.states)-1]\n\t\treturn true\n\t}\n\n\tif !first && !trail {\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tif !yaml_emitter_process_head_comment(emitter) {\n\t\treturn false\n\t}\n\n\tif emitter.column == 0 {\n\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tif emitter.canonical || emitter.column > emitter.best_width {\n\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tif !emitter.canonical && yaml_emitter_check_simple_key(emitter) {\n\t\temitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE)\n\t\treturn yaml_emitter_emit_node(emitter, event, false, false, true, true)\n\t}\n\tif !yaml_emitter_write_indicator(emitter, []byte{'?'}, true, false, false) {\n\t\treturn false\n\t}\n\temitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_VALUE_STATE)\n\treturn yaml_emitter_emit_node(emitter, event, false, false, true, false)\n}\n\n// Expect a flow value node.\nfunc yaml_emitter_emit_flow_mapping_value(emitter *yaml_emitter_t, event *yaml_event_t, simple bool) bool {\n\tif simple {\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{':'}, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t} else {\n\t\tif emitter.canonical || emitter.column > emitter.best_width {\n\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{':'}, true, false, false) {\n\t\t\treturn 
false\n\t\t}\n\t}\n\tif len(emitter.line_comment)+len(emitter.foot_comment)+len(emitter.tail_comment) > 0 {\n\t\temitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_TRAIL_KEY_STATE)\n\t} else {\n\t\temitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_KEY_STATE)\n\t}\n\tif !yaml_emitter_emit_node(emitter, event, false, false, true, false) {\n\t\treturn false\n\t}\n\tif len(emitter.line_comment)+len(emitter.foot_comment)+len(emitter.tail_comment) > 0 {\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif !yaml_emitter_process_line_comment(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_foot_comment(emitter) {\n\t\treturn false\n\t}\n\treturn true\n}\n\n// Expect a block item node.\nfunc yaml_emitter_emit_block_sequence_item(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool {\n\tif first {\n\t\tif !yaml_emitter_increase_indent(emitter, false, false) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif event.typ == yaml_SEQUENCE_END_EVENT {\n\t\temitter.indent = emitter.indents[len(emitter.indents)-1]\n\t\temitter.indents = emitter.indents[:len(emitter.indents)-1]\n\t\temitter.state = emitter.states[len(emitter.states)-1]\n\t\temitter.states = emitter.states[:len(emitter.states)-1]\n\t\treturn true\n\t}\n\tif !yaml_emitter_process_head_comment(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_write_indent(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_write_indicator(emitter, []byte{'-'}, true, false, true) {\n\t\treturn false\n\t}\n\temitter.states = append(emitter.states, yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE)\n\tif !yaml_emitter_emit_node(emitter, event, false, true, false, false) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_line_comment(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_foot_comment(emitter) {\n\t\treturn false\n\t}\n\treturn true\n}\n\n// Expect a block key node.\nfunc yaml_emitter_emit_block_mapping_key(emitter 
*yaml_emitter_t, event *yaml_event_t, first bool) bool {\n\tif first {\n\t\tif !yaml_emitter_increase_indent(emitter, false, false) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif !yaml_emitter_process_head_comment(emitter) {\n\t\treturn false\n\t}\n\tif event.typ == yaml_MAPPING_END_EVENT {\n\t\temitter.indent = emitter.indents[len(emitter.indents)-1]\n\t\temitter.indents = emitter.indents[:len(emitter.indents)-1]\n\t\temitter.state = emitter.states[len(emitter.states)-1]\n\t\temitter.states = emitter.states[:len(emitter.states)-1]\n\t\treturn true\n\t}\n\tif !yaml_emitter_write_indent(emitter) {\n\t\treturn false\n\t}\n\tif len(emitter.line_comment) > 0 {\n\t\t// [Go] A line comment was provided for the key. That's unusual as the\n\t\t//      scanner associates line comments with the value. Either way,\n\t\t//      save the line comment and render it appropriately later.\n\t\temitter.key_line_comment = emitter.line_comment\n\t\temitter.line_comment = nil\n\t}\n\tif yaml_emitter_check_simple_key(emitter) {\n\t\temitter.states = append(emitter.states, yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE)\n\t\treturn yaml_emitter_emit_node(emitter, event, false, false, true, true)\n\t}\n\tif !yaml_emitter_write_indicator(emitter, []byte{'?'}, true, false, true) {\n\t\treturn false\n\t}\n\temitter.states = append(emitter.states, yaml_EMIT_BLOCK_MAPPING_VALUE_STATE)\n\treturn yaml_emitter_emit_node(emitter, event, false, false, true, false)\n}\n\n// Expect a block value node.\nfunc yaml_emitter_emit_block_mapping_value(emitter *yaml_emitter_t, event *yaml_event_t, simple bool) bool {\n\tif simple {\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{':'}, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t} else {\n\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{':'}, true, false, true) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif len(emitter.key_line_comment) > 0 {\n\t\t// [Go] Line comments are generally 
associated with the value, but when there's\n\t\t//      no value on the same line as a mapping key they end up attached to the\n\t\t//      key itself.\n\t\tif event.typ == yaml_SCALAR_EVENT {\n\t\t\tif len(emitter.line_comment) == 0 {\n\t\t\t\t// A scalar is coming and it has no line comments by itself yet,\n\t\t\t\t// so just let it handle the line comment as usual. If it has a\n\t\t\t\t// line comment, we can't have both so the one from the key is lost.\n\t\t\t\temitter.line_comment = emitter.key_line_comment\n\t\t\t\temitter.key_line_comment = nil\n\t\t\t}\n\t\t} else if event.sequence_style() != yaml_FLOW_SEQUENCE_STYLE && (event.typ == yaml_MAPPING_START_EVENT || event.typ == yaml_SEQUENCE_START_EVENT) {\n\t\t\t// An indented block follows, so write the comment right now.\n\t\t\temitter.line_comment, emitter.key_line_comment = emitter.key_line_comment, emitter.line_comment\n\t\t\tif !yaml_emitter_process_line_comment(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\temitter.line_comment, emitter.key_line_comment = emitter.key_line_comment, emitter.line_comment\n\t\t}\n\t}\n\temitter.states = append(emitter.states, yaml_EMIT_BLOCK_MAPPING_KEY_STATE)\n\tif !yaml_emitter_emit_node(emitter, event, false, false, true, false) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_line_comment(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_foot_comment(emitter) {\n\t\treturn false\n\t}\n\treturn true\n}\n\nfunc yaml_emitter_silent_nil_event(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\treturn event.typ == yaml_SCALAR_EVENT && event.implicit && !emitter.canonical && len(emitter.scalar_data.value) == 0\n}\n\n// Expect a node.\nfunc yaml_emitter_emit_node(emitter *yaml_emitter_t, event *yaml_event_t,\n\troot bool, sequence bool, mapping bool, simple_key bool) bool {\n\n\temitter.root_context = root\n\temitter.sequence_context = sequence\n\temitter.mapping_context = mapping\n\temitter.simple_key_context = simple_key\n\n\tswitch event.typ {\n\tcase 
yaml_ALIAS_EVENT:\n\t\treturn yaml_emitter_emit_alias(emitter, event)\n\tcase yaml_SCALAR_EVENT:\n\t\treturn yaml_emitter_emit_scalar(emitter, event)\n\tcase yaml_SEQUENCE_START_EVENT:\n\t\treturn yaml_emitter_emit_sequence_start(emitter, event)\n\tcase yaml_MAPPING_START_EVENT:\n\t\treturn yaml_emitter_emit_mapping_start(emitter, event)\n\tdefault:\n\t\treturn yaml_emitter_set_emitter_error(emitter,\n\t\t\tfmt.Sprintf(\"expected SCALAR, SEQUENCE-START, MAPPING-START, or ALIAS, but got %v\", event.typ))\n\t}\n}\n\n// Expect ALIAS.\nfunc yaml_emitter_emit_alias(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\tif !yaml_emitter_process_anchor(emitter) {\n\t\treturn false\n\t}\n\temitter.state = emitter.states[len(emitter.states)-1]\n\temitter.states = emitter.states[:len(emitter.states)-1]\n\treturn true\n}\n\n// Expect SCALAR.\nfunc yaml_emitter_emit_scalar(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\tif !yaml_emitter_select_scalar_style(emitter, event) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_anchor(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_tag(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_increase_indent(emitter, true, false) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_scalar(emitter) {\n\t\treturn false\n\t}\n\temitter.indent = emitter.indents[len(emitter.indents)-1]\n\temitter.indents = emitter.indents[:len(emitter.indents)-1]\n\temitter.state = emitter.states[len(emitter.states)-1]\n\temitter.states = emitter.states[:len(emitter.states)-1]\n\treturn true\n}\n\n// Expect SEQUENCE-START.\nfunc yaml_emitter_emit_sequence_start(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\tif !yaml_emitter_process_anchor(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_tag(emitter) {\n\t\treturn false\n\t}\n\tif emitter.flow_level > 0 || emitter.canonical || event.sequence_style() == yaml_FLOW_SEQUENCE_STYLE ||\n\t\tyaml_emitter_check_empty_sequence(emitter) {\n\t\temitter.state = 
yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE\n\t} else {\n\t\temitter.state = yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE\n\t}\n\treturn true\n}\n\n// Expect MAPPING-START.\nfunc yaml_emitter_emit_mapping_start(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\tif !yaml_emitter_process_anchor(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_tag(emitter) {\n\t\treturn false\n\t}\n\tif emitter.flow_level > 0 || emitter.canonical || event.mapping_style() == yaml_FLOW_MAPPING_STYLE ||\n\t\tyaml_emitter_check_empty_mapping(emitter) {\n\t\temitter.state = yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE\n\t} else {\n\t\temitter.state = yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE\n\t}\n\treturn true\n}\n\n// Check if the document content is an empty scalar.\nfunc yaml_emitter_check_empty_document(emitter *yaml_emitter_t) bool {\n\treturn false // [Go] Huh?\n}\n\n// Check if the next events represent an empty sequence.\nfunc yaml_emitter_check_empty_sequence(emitter *yaml_emitter_t) bool {\n\tif len(emitter.events)-emitter.events_head < 2 {\n\t\treturn false\n\t}\n\treturn emitter.events[emitter.events_head].typ == yaml_SEQUENCE_START_EVENT &&\n\t\temitter.events[emitter.events_head+1].typ == yaml_SEQUENCE_END_EVENT\n}\n\n// Check if the next events represent an empty mapping.\nfunc yaml_emitter_check_empty_mapping(emitter *yaml_emitter_t) bool {\n\tif len(emitter.events)-emitter.events_head < 2 {\n\t\treturn false\n\t}\n\treturn emitter.events[emitter.events_head].typ == yaml_MAPPING_START_EVENT &&\n\t\temitter.events[emitter.events_head+1].typ == yaml_MAPPING_END_EVENT\n}\n\n// Check if the next node can be expressed as a simple key.\nfunc yaml_emitter_check_simple_key(emitter *yaml_emitter_t) bool {\n\tlength := 0\n\tswitch emitter.events[emitter.events_head].typ {\n\tcase yaml_ALIAS_EVENT:\n\t\tlength += len(emitter.anchor_data.anchor)\n\tcase yaml_SCALAR_EVENT:\n\t\tif emitter.scalar_data.multiline {\n\t\t\treturn false\n\t\t}\n\t\tlength += 
len(emitter.anchor_data.anchor) +\n\t\t\tlen(emitter.tag_data.handle) +\n\t\t\tlen(emitter.tag_data.suffix) +\n\t\t\tlen(emitter.scalar_data.value)\n\tcase yaml_SEQUENCE_START_EVENT:\n\t\tif !yaml_emitter_check_empty_sequence(emitter) {\n\t\t\treturn false\n\t\t}\n\t\tlength += len(emitter.anchor_data.anchor) +\n\t\t\tlen(emitter.tag_data.handle) +\n\t\t\tlen(emitter.tag_data.suffix)\n\tcase yaml_MAPPING_START_EVENT:\n\t\tif !yaml_emitter_check_empty_mapping(emitter) {\n\t\t\treturn false\n\t\t}\n\t\tlength += len(emitter.anchor_data.anchor) +\n\t\t\tlen(emitter.tag_data.handle) +\n\t\t\tlen(emitter.tag_data.suffix)\n\tdefault:\n\t\treturn false\n\t}\n\treturn length <= 128\n}\n\n// Determine an acceptable scalar style.\nfunc yaml_emitter_select_scalar_style(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\n\tno_tag := len(emitter.tag_data.handle) == 0 && len(emitter.tag_data.suffix) == 0\n\tif no_tag && !event.implicit && !event.quoted_implicit {\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"neither tag nor implicit flags are specified\")\n\t}\n\n\tstyle := event.scalar_style()\n\tif style == yaml_ANY_SCALAR_STYLE {\n\t\tstyle = yaml_PLAIN_SCALAR_STYLE\n\t}\n\tif emitter.canonical {\n\t\tstyle = yaml_DOUBLE_QUOTED_SCALAR_STYLE\n\t}\n\tif emitter.simple_key_context && emitter.scalar_data.multiline {\n\t\tstyle = yaml_DOUBLE_QUOTED_SCALAR_STYLE\n\t}\n\n\tif style == yaml_PLAIN_SCALAR_STYLE {\n\t\tif emitter.flow_level > 0 && !emitter.scalar_data.flow_plain_allowed ||\n\t\t\temitter.flow_level == 0 && !emitter.scalar_data.block_plain_allowed {\n\t\t\tstyle = yaml_SINGLE_QUOTED_SCALAR_STYLE\n\t\t}\n\t\tif len(emitter.scalar_data.value) == 0 && (emitter.flow_level > 0 || emitter.simple_key_context) {\n\t\t\tstyle = yaml_SINGLE_QUOTED_SCALAR_STYLE\n\t\t}\n\t\tif no_tag && !event.implicit {\n\t\t\tstyle = yaml_SINGLE_QUOTED_SCALAR_STYLE\n\t\t}\n\t}\n\tif style == yaml_SINGLE_QUOTED_SCALAR_STYLE {\n\t\tif !emitter.scalar_data.single_quoted_allowed 
{\n\t\t\tstyle = yaml_DOUBLE_QUOTED_SCALAR_STYLE\n\t\t}\n\t}\n\tif style == yaml_LITERAL_SCALAR_STYLE || style == yaml_FOLDED_SCALAR_STYLE {\n\t\tif !emitter.scalar_data.block_allowed || emitter.flow_level > 0 || emitter.simple_key_context {\n\t\t\tstyle = yaml_DOUBLE_QUOTED_SCALAR_STYLE\n\t\t}\n\t}\n\n\tif no_tag && !event.quoted_implicit && style != yaml_PLAIN_SCALAR_STYLE {\n\t\temitter.tag_data.handle = []byte{'!'}\n\t}\n\temitter.scalar_data.style = style\n\treturn true\n}\n\n// Write an anchor.\nfunc yaml_emitter_process_anchor(emitter *yaml_emitter_t) bool {\n\tif emitter.anchor_data.anchor == nil {\n\t\treturn true\n\t}\n\tc := []byte{'&'}\n\tif emitter.anchor_data.alias {\n\t\tc[0] = '*'\n\t}\n\tif !yaml_emitter_write_indicator(emitter, c, true, false, false) {\n\t\treturn false\n\t}\n\treturn yaml_emitter_write_anchor(emitter, emitter.anchor_data.anchor)\n}\n\n// Write a tag.\nfunc yaml_emitter_process_tag(emitter *yaml_emitter_t) bool {\n\tif len(emitter.tag_data.handle) == 0 && len(emitter.tag_data.suffix) == 0 {\n\t\treturn true\n\t}\n\tif len(emitter.tag_data.handle) > 0 {\n\t\tif !yaml_emitter_write_tag_handle(emitter, emitter.tag_data.handle) {\n\t\t\treturn false\n\t\t}\n\t\tif len(emitter.tag_data.suffix) > 0 {\n\t\t\tif !yaml_emitter_write_tag_content(emitter, emitter.tag_data.suffix, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t} else {\n\t\t// [Go] Allocate these slices elsewhere.\n\t\tif !yaml_emitter_write_indicator(emitter, []byte(\"!<\"), true, false, false) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_write_tag_content(emitter, emitter.tag_data.suffix, false) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_write_indicator(emitter, []byte{'>'}, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\n// Write a scalar.\nfunc yaml_emitter_process_scalar(emitter *yaml_emitter_t) bool {\n\tswitch emitter.scalar_data.style {\n\tcase yaml_PLAIN_SCALAR_STYLE:\n\t\treturn yaml_emitter_write_plain_scalar(emitter, 
emitter.scalar_data.value, !emitter.simple_key_context)\n\n\tcase yaml_SINGLE_QUOTED_SCALAR_STYLE:\n\t\treturn yaml_emitter_write_single_quoted_scalar(emitter, emitter.scalar_data.value, !emitter.simple_key_context)\n\n\tcase yaml_DOUBLE_QUOTED_SCALAR_STYLE:\n\t\treturn yaml_emitter_write_double_quoted_scalar(emitter, emitter.scalar_data.value, !emitter.simple_key_context)\n\n\tcase yaml_LITERAL_SCALAR_STYLE:\n\t\treturn yaml_emitter_write_literal_scalar(emitter, emitter.scalar_data.value)\n\n\tcase yaml_FOLDED_SCALAR_STYLE:\n\t\treturn yaml_emitter_write_folded_scalar(emitter, emitter.scalar_data.value)\n\t}\n\tpanic(\"unknown scalar style\")\n}\n\n// Write a head comment.\nfunc yaml_emitter_process_head_comment(emitter *yaml_emitter_t) bool {\n\tif len(emitter.tail_comment) > 0 {\n\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\treturn false\n\t\t}\n\t\tif !yaml_emitter_write_comment(emitter, emitter.tail_comment) {\n\t\t\treturn false\n\t\t}\n\t\temitter.tail_comment = emitter.tail_comment[:0]\n\t\temitter.foot_indent = emitter.indent\n\t\tif emitter.foot_indent < 0 {\n\t\t\temitter.foot_indent = 0\n\t\t}\n\t}\n\n\tif len(emitter.head_comment) == 0 {\n\t\treturn true\n\t}\n\tif !yaml_emitter_write_indent(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_write_comment(emitter, emitter.head_comment) {\n\t\treturn false\n\t}\n\temitter.head_comment = emitter.head_comment[:0]\n\treturn true\n}\n\n// Write a line comment.\nfunc yaml_emitter_process_line_comment(emitter *yaml_emitter_t) bool {\n\tif len(emitter.line_comment) == 0 {\n\t\treturn true\n\t}\n\tif !emitter.whitespace {\n\t\tif !put(emitter, ' ') {\n\t\t\treturn false\n\t\t}\n\t}\n\tif !yaml_emitter_write_comment(emitter, emitter.line_comment) {\n\t\treturn false\n\t}\n\temitter.line_comment = emitter.line_comment[:0]\n\treturn true\n}\n\n// Write a foot comment.\nfunc yaml_emitter_process_foot_comment(emitter *yaml_emitter_t) bool {\n\tif len(emitter.foot_comment) == 0 {\n\t\treturn true\n\t}\n\tif 
!yaml_emitter_write_indent(emitter) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_write_comment(emitter, emitter.foot_comment) {\n\t\treturn false\n\t}\n\temitter.foot_comment = emitter.foot_comment[:0]\n\temitter.foot_indent = emitter.indent\n\tif emitter.foot_indent < 0 {\n\t\temitter.foot_indent = 0\n\t}\n\treturn true\n}\n\n// Check if a %YAML directive is valid.\nfunc yaml_emitter_analyze_version_directive(emitter *yaml_emitter_t, version_directive *yaml_version_directive_t) bool {\n\tif version_directive.major != 1 || version_directive.minor != 1 {\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"incompatible %YAML directive\")\n\t}\n\treturn true\n}\n\n// Check if a %TAG directive is valid.\nfunc yaml_emitter_analyze_tag_directive(emitter *yaml_emitter_t, tag_directive *yaml_tag_directive_t) bool {\n\thandle := tag_directive.handle\n\tprefix := tag_directive.prefix\n\tif len(handle) == 0 {\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"tag handle must not be empty\")\n\t}\n\tif handle[0] != '!' {\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"tag handle must start with '!'\")\n\t}\n\tif handle[len(handle)-1] != '!' 
{\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"tag handle must end with '!'\")\n\t}\n\tfor i := 1; i < len(handle)-1; i += width(handle[i]) {\n\t\tif !is_alpha(handle, i) {\n\t\t\treturn yaml_emitter_set_emitter_error(emitter, \"tag handle must contain alphanumerical characters only\")\n\t\t}\n\t}\n\tif len(prefix) == 0 {\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"tag prefix must not be empty\")\n\t}\n\treturn true\n}\n\n// Check if an anchor is valid.\nfunc yaml_emitter_analyze_anchor(emitter *yaml_emitter_t, anchor []byte, alias bool) bool {\n\tif len(anchor) == 0 {\n\t\tproblem := \"anchor value must not be empty\"\n\t\tif alias {\n\t\t\tproblem = \"alias value must not be empty\"\n\t\t}\n\t\treturn yaml_emitter_set_emitter_error(emitter, problem)\n\t}\n\tfor i := 0; i < len(anchor); i += width(anchor[i]) {\n\t\tif !is_alpha(anchor, i) {\n\t\t\tproblem := \"anchor value must contain alphanumerical characters only\"\n\t\t\tif alias {\n\t\t\t\tproblem = \"alias value must contain alphanumerical characters only\"\n\t\t\t}\n\t\t\treturn yaml_emitter_set_emitter_error(emitter, problem)\n\t\t}\n\t}\n\temitter.anchor_data.anchor = anchor\n\temitter.anchor_data.alias = alias\n\treturn true\n}\n\n// Check if a tag is valid.\nfunc yaml_emitter_analyze_tag(emitter *yaml_emitter_t, tag []byte) bool {\n\tif len(tag) == 0 {\n\t\treturn yaml_emitter_set_emitter_error(emitter, \"tag value must not be empty\")\n\t}\n\tfor i := 0; i < len(emitter.tag_directives); i++ {\n\t\ttag_directive := &emitter.tag_directives[i]\n\t\tif bytes.HasPrefix(tag, tag_directive.prefix) {\n\t\t\temitter.tag_data.handle = tag_directive.handle\n\t\t\temitter.tag_data.suffix = tag[len(tag_directive.prefix):]\n\t\t\treturn true\n\t\t}\n\t}\n\temitter.tag_data.suffix = tag\n\treturn true\n}\n\n// Check if a scalar is valid.\nfunc yaml_emitter_analyze_scalar(emitter *yaml_emitter_t, value []byte) bool {\n\tvar (\n\t\tblock_indicators   = false\n\t\tflow_indicators    = 
false\n\t\tline_breaks        = false\n\t\tspecial_characters = false\n\t\ttab_characters     = false\n\n\t\tleading_space  = false\n\t\tleading_break  = false\n\t\ttrailing_space = false\n\t\ttrailing_break = false\n\t\tbreak_space    = false\n\t\tspace_break    = false\n\n\t\tpreceded_by_whitespace = false\n\t\tfollowed_by_whitespace = false\n\t\tprevious_space         = false\n\t\tprevious_break         = false\n\t)\n\n\temitter.scalar_data.value = value\n\n\tif len(value) == 0 {\n\t\temitter.scalar_data.multiline = false\n\t\temitter.scalar_data.flow_plain_allowed = false\n\t\temitter.scalar_data.block_plain_allowed = true\n\t\temitter.scalar_data.single_quoted_allowed = true\n\t\temitter.scalar_data.block_allowed = false\n\t\treturn true\n\t}\n\n\tif len(value) >= 3 && ((value[0] == '-' && value[1] == '-' && value[2] == '-') || (value[0] == '.' && value[1] == '.' && value[2] == '.')) {\n\t\tblock_indicators = true\n\t\tflow_indicators = true\n\t}\n\n\tpreceded_by_whitespace = true\n\tfor i, w := 0, 0; i < len(value); i += w {\n\t\tw = width(value[i])\n\t\tfollowed_by_whitespace = i+w >= len(value) || is_blank(value, i+w)\n\n\t\tif i == 0 {\n\t\t\tswitch value[i] {\n\t\t\tcase '#', ',', '[', ']', '{', '}', '&', '*', '!', '|', '>', '\\'', '\"', '%', '@', '`':\n\t\t\t\tflow_indicators = true\n\t\t\t\tblock_indicators = true\n\t\t\tcase '?', ':':\n\t\t\t\tflow_indicators = true\n\t\t\t\tif followed_by_whitespace {\n\t\t\t\t\tblock_indicators = true\n\t\t\t\t}\n\t\t\tcase '-':\n\t\t\t\tif followed_by_whitespace {\n\t\t\t\t\tflow_indicators = true\n\t\t\t\t\tblock_indicators = true\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tswitch value[i] {\n\t\t\tcase ',', '?', '[', ']', '{', '}':\n\t\t\t\tflow_indicators = true\n\t\t\tcase ':':\n\t\t\t\tflow_indicators = true\n\t\t\t\tif followed_by_whitespace {\n\t\t\t\t\tblock_indicators = true\n\t\t\t\t}\n\t\t\tcase '#':\n\t\t\t\tif preceded_by_whitespace {\n\t\t\t\t\tflow_indicators = true\n\t\t\t\t\tblock_indicators = 
true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif value[i] == '\\t' {\n\t\t\ttab_characters = true\n\t\t} else if !is_printable(value, i) || !is_ascii(value, i) && !emitter.unicode {\n\t\t\tspecial_characters = true\n\t\t}\n\t\tif is_space(value, i) {\n\t\t\tif i == 0 {\n\t\t\t\tleading_space = true\n\t\t\t}\n\t\t\tif i+width(value[i]) == len(value) {\n\t\t\t\ttrailing_space = true\n\t\t\t}\n\t\t\tif previous_break {\n\t\t\t\tbreak_space = true\n\t\t\t}\n\t\t\tprevious_space = true\n\t\t\tprevious_break = false\n\t\t} else if is_break(value, i) {\n\t\t\tline_breaks = true\n\t\t\tif i == 0 {\n\t\t\t\tleading_break = true\n\t\t\t}\n\t\t\tif i+width(value[i]) == len(value) {\n\t\t\t\ttrailing_break = true\n\t\t\t}\n\t\t\tif previous_space {\n\t\t\t\tspace_break = true\n\t\t\t}\n\t\t\tprevious_space = false\n\t\t\tprevious_break = true\n\t\t} else {\n\t\t\tprevious_space = false\n\t\t\tprevious_break = false\n\t\t}\n\n\t\t// [Go]: Why 'z'? Couldn't be the end of the string as that's the loop condition.\n\t\tpreceded_by_whitespace = is_blankz(value, i)\n\t}\n\n\temitter.scalar_data.multiline = line_breaks\n\temitter.scalar_data.flow_plain_allowed = true\n\temitter.scalar_data.block_plain_allowed = true\n\temitter.scalar_data.single_quoted_allowed = true\n\temitter.scalar_data.block_allowed = true\n\n\tif leading_space || leading_break || trailing_space || trailing_break {\n\t\temitter.scalar_data.flow_plain_allowed = false\n\t\temitter.scalar_data.block_plain_allowed = false\n\t}\n\tif trailing_space {\n\t\temitter.scalar_data.block_allowed = false\n\t}\n\tif break_space {\n\t\temitter.scalar_data.flow_plain_allowed = false\n\t\temitter.scalar_data.block_plain_allowed = false\n\t\temitter.scalar_data.single_quoted_allowed = false\n\t}\n\tif space_break || tab_characters || special_characters {\n\t\temitter.scalar_data.flow_plain_allowed = false\n\t\temitter.scalar_data.block_plain_allowed = false\n\t\temitter.scalar_data.single_quoted_allowed = false\n\t}\n\tif space_break || 
special_characters {\n\t\temitter.scalar_data.block_allowed = false\n\t}\n\tif line_breaks {\n\t\temitter.scalar_data.flow_plain_allowed = false\n\t\temitter.scalar_data.block_plain_allowed = false\n\t}\n\tif flow_indicators {\n\t\temitter.scalar_data.flow_plain_allowed = false\n\t}\n\tif block_indicators {\n\t\temitter.scalar_data.block_plain_allowed = false\n\t}\n\treturn true\n}\n\n// Check if the event data is valid.\nfunc yaml_emitter_analyze_event(emitter *yaml_emitter_t, event *yaml_event_t) bool {\n\n\temitter.anchor_data.anchor = nil\n\temitter.tag_data.handle = nil\n\temitter.tag_data.suffix = nil\n\temitter.scalar_data.value = nil\n\n\tif len(event.head_comment) > 0 {\n\t\temitter.head_comment = event.head_comment\n\t}\n\tif len(event.line_comment) > 0 {\n\t\temitter.line_comment = event.line_comment\n\t}\n\tif len(event.foot_comment) > 0 {\n\t\temitter.foot_comment = event.foot_comment\n\t}\n\tif len(event.tail_comment) > 0 {\n\t\temitter.tail_comment = event.tail_comment\n\t}\n\n\tswitch event.typ {\n\tcase yaml_ALIAS_EVENT:\n\t\tif !yaml_emitter_analyze_anchor(emitter, event.anchor, true) {\n\t\t\treturn false\n\t\t}\n\n\tcase yaml_SCALAR_EVENT:\n\t\tif len(event.anchor) > 0 {\n\t\t\tif !yaml_emitter_analyze_anchor(emitter, event.anchor, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif len(event.tag) > 0 && (emitter.canonical || (!event.implicit && !event.quoted_implicit)) {\n\t\t\tif !yaml_emitter_analyze_tag(emitter, event.tag) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif !yaml_emitter_analyze_scalar(emitter, event.value) {\n\t\t\treturn false\n\t\t}\n\n\tcase yaml_SEQUENCE_START_EVENT:\n\t\tif len(event.anchor) > 0 {\n\t\t\tif !yaml_emitter_analyze_anchor(emitter, event.anchor, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif len(event.tag) > 0 && (emitter.canonical || !event.implicit) {\n\t\t\tif !yaml_emitter_analyze_tag(emitter, event.tag) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\tcase yaml_MAPPING_START_EVENT:\n\t\tif 
len(event.anchor) > 0 {\n\t\t\tif !yaml_emitter_analyze_anchor(emitter, event.anchor, false) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif len(event.tag) > 0 && (emitter.canonical || !event.implicit) {\n\t\t\tif !yaml_emitter_analyze_tag(emitter, event.tag) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t}\n\treturn true\n}\n\n// Write the BOM character.\nfunc yaml_emitter_write_bom(emitter *yaml_emitter_t) bool {\n\tif !flush(emitter) {\n\t\treturn false\n\t}\n\tpos := emitter.buffer_pos\n\temitter.buffer[pos+0] = '\\xEF'\n\temitter.buffer[pos+1] = '\\xBB'\n\temitter.buffer[pos+2] = '\\xBF'\n\temitter.buffer_pos += 3\n\treturn true\n}\n\nfunc yaml_emitter_write_indent(emitter *yaml_emitter_t) bool {\n\tindent := emitter.indent\n\tif indent < 0 {\n\t\tindent = 0\n\t}\n\tif !emitter.indention || emitter.column > indent || (emitter.column == indent && !emitter.whitespace) {\n\t\tif !put_break(emitter) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif emitter.foot_indent == indent {\n\t\tif !put_break(emitter) {\n\t\t\treturn false\n\t\t}\n\t}\n\tfor emitter.column < indent {\n\t\tif !put(emitter, ' ') {\n\t\t\treturn false\n\t\t}\n\t}\n\temitter.whitespace = true\n\t//emitter.indention = true\n\temitter.space_above = false\n\temitter.foot_indent = -1\n\treturn true\n}\n\nfunc yaml_emitter_write_indicator(emitter *yaml_emitter_t, indicator []byte, need_whitespace, is_whitespace, is_indention bool) bool {\n\tif need_whitespace && !emitter.whitespace {\n\t\tif !put(emitter, ' ') {\n\t\t\treturn false\n\t\t}\n\t}\n\tif !write_all(emitter, indicator) {\n\t\treturn false\n\t}\n\temitter.whitespace = is_whitespace\n\temitter.indention = (emitter.indention && is_indention)\n\temitter.open_ended = false\n\treturn true\n}\n\nfunc yaml_emitter_write_anchor(emitter *yaml_emitter_t, value []byte) bool {\n\tif !write_all(emitter, value) {\n\t\treturn false\n\t}\n\temitter.whitespace = false\n\temitter.indention = false\n\treturn true\n}\n\nfunc yaml_emitter_write_tag_handle(emitter 
*yaml_emitter_t, value []byte) bool {\n\tif !emitter.whitespace {\n\t\tif !put(emitter, ' ') {\n\t\t\treturn false\n\t\t}\n\t}\n\tif !write_all(emitter, value) {\n\t\treturn false\n\t}\n\temitter.whitespace = false\n\temitter.indention = false\n\treturn true\n}\n\nfunc yaml_emitter_write_tag_content(emitter *yaml_emitter_t, value []byte, need_whitespace bool) bool {\n\tif need_whitespace && !emitter.whitespace {\n\t\tif !put(emitter, ' ') {\n\t\t\treturn false\n\t\t}\n\t}\n\tfor i := 0; i < len(value); {\n\t\tvar must_write bool\n\t\tswitch value[i] {\n\t\tcase ';', '/', '?', ':', '@', '&', '=', '+', '$', ',', '_', '.', '~', '*', '\\'', '(', ')', '[', ']':\n\t\t\tmust_write = true\n\t\tdefault:\n\t\t\tmust_write = is_alpha(value, i)\n\t\t}\n\t\tif must_write {\n\t\t\tif !write(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t} else {\n\t\t\tw := width(value[i])\n\t\t\tfor k := 0; k < w; k++ {\n\t\t\t\toctet := value[i]\n\t\t\t\ti++\n\t\t\t\tif !put(emitter, '%') {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\n\t\t\t\tc := octet >> 4\n\t\t\t\tif c < 10 {\n\t\t\t\t\tc += '0'\n\t\t\t\t} else {\n\t\t\t\t\tc += 'A' - 10\n\t\t\t\t}\n\t\t\t\tif !put(emitter, c) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\n\t\t\t\tc = octet & 0x0f\n\t\t\t\tif c < 10 {\n\t\t\t\t\tc += '0'\n\t\t\t\t} else {\n\t\t\t\t\tc += 'A' - 10\n\t\t\t\t}\n\t\t\t\tif !put(emitter, c) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\temitter.whitespace = false\n\temitter.indention = false\n\treturn true\n}\n\nfunc yaml_emitter_write_plain_scalar(emitter *yaml_emitter_t, value []byte, allow_breaks bool) bool {\n\tif len(value) > 0 && !emitter.whitespace {\n\t\tif !put(emitter, ' ') {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tspaces := false\n\tbreaks := false\n\tfor i := 0; i < len(value); {\n\t\tif is_space(value, i) {\n\t\t\tif allow_breaks && !spaces && emitter.column > emitter.best_width && !is_space(value, i+1) {\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn 
false\n\t\t\t\t}\n\t\t\t\ti += width(value[i])\n\t\t\t} else {\n\t\t\t\tif !write(emitter, value, &i) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t\tspaces = true\n\t\t} else if is_break(value, i) {\n\t\t\tif !breaks && value[i] == '\\n' {\n\t\t\t\tif !put_break(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !write_break(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\t//emitter.indention = true\n\t\t\tbreaks = true\n\t\t} else {\n\t\t\tif breaks {\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !write(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\temitter.indention = false\n\t\t\tspaces = false\n\t\t\tbreaks = false\n\t\t}\n\t}\n\n\tif len(value) > 0 {\n\t\temitter.whitespace = false\n\t}\n\temitter.indention = false\n\tif emitter.root_context {\n\t\temitter.open_ended = true\n\t}\n\n\treturn true\n}\n\nfunc yaml_emitter_write_single_quoted_scalar(emitter *yaml_emitter_t, value []byte, allow_breaks bool) bool {\n\n\tif !yaml_emitter_write_indicator(emitter, []byte{'\\''}, true, false, false) {\n\t\treturn false\n\t}\n\n\tspaces := false\n\tbreaks := false\n\tfor i := 0; i < len(value); {\n\t\tif is_space(value, i) {\n\t\t\tif allow_breaks && !spaces && emitter.column > emitter.best_width && i > 0 && i < len(value)-1 && !is_space(value, i+1) {\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\ti += width(value[i])\n\t\t\t} else {\n\t\t\t\tif !write(emitter, value, &i) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t\tspaces = true\n\t\t} else if is_break(value, i) {\n\t\t\tif !breaks && value[i] == '\\n' {\n\t\t\t\tif !put_break(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !write_break(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\t//emitter.indention = true\n\t\t\tbreaks = true\n\t\t} else {\n\t\t\tif breaks {\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn 
false\n\t\t\t\t}\n\t\t\t}\n\t\t\tif value[i] == '\\'' {\n\t\t\t\tif !put(emitter, '\\'') {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !write(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\temitter.indention = false\n\t\t\tspaces = false\n\t\t\tbreaks = false\n\t\t}\n\t}\n\tif !yaml_emitter_write_indicator(emitter, []byte{'\\''}, false, false, false) {\n\t\treturn false\n\t}\n\temitter.whitespace = false\n\temitter.indention = false\n\treturn true\n}\n\nfunc yaml_emitter_write_double_quoted_scalar(emitter *yaml_emitter_t, value []byte, allow_breaks bool) bool {\n\tspaces := false\n\tif !yaml_emitter_write_indicator(emitter, []byte{'\"'}, true, false, false) {\n\t\treturn false\n\t}\n\n\tfor i := 0; i < len(value); {\n\t\tif !is_printable(value, i) || (!emitter.unicode && !is_ascii(value, i)) ||\n\t\t\tis_bom(value, i) || is_break(value, i) ||\n\t\t\tvalue[i] == '\"' || value[i] == '\\\\' {\n\n\t\t\toctet := value[i]\n\n\t\t\tvar w int\n\t\t\tvar v rune\n\t\t\tswitch {\n\t\t\tcase octet&0x80 == 0x00:\n\t\t\t\tw, v = 1, rune(octet&0x7F)\n\t\t\tcase octet&0xE0 == 0xC0:\n\t\t\t\tw, v = 2, rune(octet&0x1F)\n\t\t\tcase octet&0xF0 == 0xE0:\n\t\t\t\tw, v = 3, rune(octet&0x0F)\n\t\t\tcase octet&0xF8 == 0xF0:\n\t\t\t\tw, v = 4, rune(octet&0x07)\n\t\t\t}\n\t\t\tfor k := 1; k < w; k++ {\n\t\t\t\toctet = value[i+k]\n\t\t\t\tv = (v << 6) + (rune(octet) & 0x3F)\n\t\t\t}\n\t\t\ti += w\n\n\t\t\tif !put(emitter, '\\\\') {\n\t\t\t\treturn false\n\t\t\t}\n\n\t\t\tvar ok bool\n\t\t\tswitch v {\n\t\t\tcase 0x00:\n\t\t\t\tok = put(emitter, '0')\n\t\t\tcase 0x07:\n\t\t\t\tok = put(emitter, 'a')\n\t\t\tcase 0x08:\n\t\t\t\tok = put(emitter, 'b')\n\t\t\tcase 0x09:\n\t\t\t\tok = put(emitter, 't')\n\t\t\tcase 0x0A:\n\t\t\t\tok = put(emitter, 'n')\n\t\t\tcase 0x0b:\n\t\t\t\tok = put(emitter, 'v')\n\t\t\tcase 0x0c:\n\t\t\t\tok = put(emitter, 'f')\n\t\t\tcase 0x0d:\n\t\t\t\tok = put(emitter, 'r')\n\t\t\tcase 0x1b:\n\t\t\t\tok = put(emitter, 'e')\n\t\t\tcase 0x22:\n\t\t\t\tok 
= put(emitter, '\"')\n\t\t\tcase 0x5c:\n\t\t\t\tok = put(emitter, '\\\\')\n\t\t\tcase 0x85:\n\t\t\t\tok = put(emitter, 'N')\n\t\t\tcase 0xA0:\n\t\t\t\tok = put(emitter, '_')\n\t\t\tcase 0x2028:\n\t\t\t\tok = put(emitter, 'L')\n\t\t\tcase 0x2029:\n\t\t\t\tok = put(emitter, 'P')\n\t\t\tdefault:\n\t\t\t\tif v <= 0xFF {\n\t\t\t\t\tok = put(emitter, 'x')\n\t\t\t\t\tw = 2\n\t\t\t\t} else if v <= 0xFFFF {\n\t\t\t\t\tok = put(emitter, 'u')\n\t\t\t\t\tw = 4\n\t\t\t\t} else {\n\t\t\t\t\tok = put(emitter, 'U')\n\t\t\t\t\tw = 8\n\t\t\t\t}\n\t\t\t\tfor k := (w - 1) * 4; ok && k >= 0; k -= 4 {\n\t\t\t\t\tdigit := byte((v >> uint(k)) & 0x0F)\n\t\t\t\t\tif digit < 10 {\n\t\t\t\t\t\tok = put(emitter, digit+'0')\n\t\t\t\t\t} else {\n\t\t\t\t\t\tok = put(emitter, digit+'A'-10)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !ok {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tspaces = false\n\t\t} else if is_space(value, i) {\n\t\t\tif allow_breaks && !spaces && emitter.column > emitter.best_width && i > 0 && i < len(value)-1 {\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\tif is_space(value, i+1) {\n\t\t\t\t\tif !put(emitter, '\\\\') {\n\t\t\t\t\t\treturn false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ti += width(value[i])\n\t\t\t} else if !write(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tspaces = true\n\t\t} else {\n\t\t\tif !write(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tspaces = false\n\t\t}\n\t}\n\tif !yaml_emitter_write_indicator(emitter, []byte{'\"'}, false, false, false) {\n\t\treturn false\n\t}\n\temitter.whitespace = false\n\temitter.indention = false\n\treturn true\n}\n\nfunc yaml_emitter_write_block_scalar_hints(emitter *yaml_emitter_t, value []byte) bool {\n\tif is_space(value, 0) || is_break(value, 0) {\n\t\tindent_hint := []byte{'0' + byte(emitter.best_indent)}\n\t\tif !yaml_emitter_write_indicator(emitter, indent_hint, false, false, false) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\temitter.open_ended = 
false\n\n\tvar chomp_hint [1]byte\n\tif len(value) == 0 {\n\t\tchomp_hint[0] = '-'\n\t} else {\n\t\ti := len(value) - 1\n\t\tfor value[i]&0xC0 == 0x80 {\n\t\t\ti--\n\t\t}\n\t\tif !is_break(value, i) {\n\t\t\tchomp_hint[0] = '-'\n\t\t} else if i == 0 {\n\t\t\tchomp_hint[0] = '+'\n\t\t\temitter.open_ended = true\n\t\t} else {\n\t\t\ti--\n\t\t\tfor value[i]&0xC0 == 0x80 {\n\t\t\t\ti--\n\t\t\t}\n\t\t\tif is_break(value, i) {\n\t\t\t\tchomp_hint[0] = '+'\n\t\t\t\temitter.open_ended = true\n\t\t\t}\n\t\t}\n\t}\n\tif chomp_hint[0] != 0 {\n\t\tif !yaml_emitter_write_indicator(emitter, chomp_hint[:], false, false, false) {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n\nfunc yaml_emitter_write_literal_scalar(emitter *yaml_emitter_t, value []byte) bool {\n\tif !yaml_emitter_write_indicator(emitter, []byte{'|'}, true, false, false) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_write_block_scalar_hints(emitter, value) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_line_comment(emitter) {\n\t\treturn false\n\t}\n\t//emitter.indention = true\n\temitter.whitespace = true\n\tbreaks := true\n\tfor i := 0; i < len(value); {\n\t\tif is_break(value, i) {\n\t\t\tif !write_break(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\t//emitter.indention = true\n\t\t\tbreaks = true\n\t\t} else {\n\t\t\tif breaks {\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !write(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\temitter.indention = false\n\t\t\tbreaks = false\n\t\t}\n\t}\n\n\treturn true\n}\n\nfunc yaml_emitter_write_folded_scalar(emitter *yaml_emitter_t, value []byte) bool {\n\tif !yaml_emitter_write_indicator(emitter, []byte{'>'}, true, false, false) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_write_block_scalar_hints(emitter, value) {\n\t\treturn false\n\t}\n\tif !yaml_emitter_process_line_comment(emitter) {\n\t\treturn false\n\t}\n\n\t//emitter.indention = true\n\temitter.whitespace = 
true\n\n\tbreaks := true\n\tleading_spaces := true\n\tfor i := 0; i < len(value); {\n\t\tif is_break(value, i) {\n\t\t\tif !breaks && !leading_spaces && value[i] == '\\n' {\n\t\t\t\tk := 0\n\t\t\t\tfor is_break(value, k) {\n\t\t\t\t\tk += width(value[k])\n\t\t\t\t}\n\t\t\t\tif !is_blankz(value, k) {\n\t\t\t\t\tif !put_break(emitter) {\n\t\t\t\t\t\treturn false\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !write_break(emitter, value, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\t//emitter.indention = true\n\t\t\tbreaks = true\n\t\t} else {\n\t\t\tif breaks {\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\tleading_spaces = is_blank(value, i)\n\t\t\t}\n\t\t\tif !breaks && is_space(value, i) && !is_space(value, i+1) && emitter.column > emitter.best_width {\n\t\t\t\tif !yaml_emitter_write_indent(emitter) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\ti += width(value[i])\n\t\t\t} else {\n\t\t\t\tif !write(emitter, value, &i) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t}\n\t\t\temitter.indention = false\n\t\t\tbreaks = false\n\t\t}\n\t}\n\treturn true\n}\n\nfunc yaml_emitter_write_comment(emitter *yaml_emitter_t, comment []byte) bool {\n\tbreaks := false\n\tpound := false\n\tfor i := 0; i < len(comment); {\n\t\tif is_break(comment, i) {\n\t\t\tif !write_break(emitter, comment, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\t//emitter.indention = true\n\t\t\tbreaks = true\n\t\t\tpound = false\n\t\t} else {\n\t\t\tif breaks && !yaml_emitter_write_indent(emitter) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif !pound {\n\t\t\t\tif comment[i] != '#' && (!put(emitter, '#') || !put(emitter, ' ')) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\tpound = true\n\t\t\t}\n\t\t\tif !write(emitter, comment, &i) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\temitter.indention = false\n\t\t\tbreaks = false\n\t\t}\n\t}\n\tif !breaks && !put_break(emitter) {\n\t\treturn false\n\t}\n\n\temitter.whitespace = true\n\t//emitter.indention = true\n\treturn true\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/encode.go",
    "content": "//\n// Copyright (c) 2011-2019 Canonical Ltd\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage yaml\n\nimport (\n\t\"encoding\"\n\t\"fmt\"\n\t\"io\"\n\t\"reflect\"\n\t\"regexp\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\t\"unicode/utf8\"\n)\n\ntype encoder struct {\n\temitter  yaml_emitter_t\n\tevent    yaml_event_t\n\tout      []byte\n\tflow     bool\n\tindent   int\n\tdoneInit bool\n}\n\nfunc newEncoder() *encoder {\n\te := &encoder{}\n\tyaml_emitter_initialize(&e.emitter)\n\tyaml_emitter_set_output_string(&e.emitter, &e.out)\n\tyaml_emitter_set_unicode(&e.emitter, true)\n\treturn e\n}\n\nfunc newEncoderWithWriter(w io.Writer) *encoder {\n\te := &encoder{}\n\tyaml_emitter_initialize(&e.emitter)\n\tyaml_emitter_set_output_writer(&e.emitter, w)\n\tyaml_emitter_set_unicode(&e.emitter, true)\n\treturn e\n}\n\nfunc (e *encoder) init() {\n\tif e.doneInit {\n\t\treturn\n\t}\n\tif e.indent == 0 {\n\t\te.indent = 4\n\t}\n\te.emitter.best_indent = e.indent\n\tyaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING)\n\te.emit()\n\te.doneInit = true\n}\n\nfunc (e *encoder) finish() {\n\te.emitter.open_ended = false\n\tyaml_stream_end_event_initialize(&e.event)\n\te.emit()\n}\n\nfunc (e *encoder) destroy() {\n\tyaml_emitter_delete(&e.emitter)\n}\n\nfunc (e *encoder) emit() {\n\t// This will internally delete the e.event value.\n\te.must(yaml_emitter_emit(&e.emitter, &e.event))\n}\n\nfunc (e *encoder) must(ok bool) 
{\n\tif !ok {\n\t\tmsg := e.emitter.problem\n\t\tif msg == \"\" {\n\t\t\tmsg = \"unknown problem generating YAML content\"\n\t\t}\n\t\tfailf(\"%s\", msg)\n\t}\n}\n\nfunc (e *encoder) marshalDoc(tag string, in reflect.Value) {\n\te.init()\n\tvar node *Node\n\tif in.IsValid() {\n\t\tnode, _ = in.Interface().(*Node)\n\t}\n\tif node != nil && node.Kind == DocumentNode {\n\t\te.nodev(in)\n\t} else {\n\t\tyaml_document_start_event_initialize(&e.event, nil, nil, true)\n\t\te.emit()\n\t\te.marshal(tag, in)\n\t\tyaml_document_end_event_initialize(&e.event, true)\n\t\te.emit()\n\t}\n}\n\nfunc (e *encoder) marshal(tag string, in reflect.Value) {\n\ttag = shortTag(tag)\n\tif !in.IsValid() || in.Kind() == reflect.Ptr && in.IsNil() {\n\t\te.nilv()\n\t\treturn\n\t}\n\tiface := in.Interface()\n\tswitch value := iface.(type) {\n\tcase *Node:\n\t\te.nodev(in)\n\t\treturn\n\tcase Node:\n\t\tif !in.CanAddr() {\n\t\t\tvar n = reflect.New(in.Type()).Elem()\n\t\t\tn.Set(in)\n\t\t\tin = n\n\t\t}\n\t\te.nodev(in.Addr())\n\t\treturn\n\tcase time.Time:\n\t\te.timev(tag, in)\n\t\treturn\n\tcase *time.Time:\n\t\te.timev(tag, in.Elem())\n\t\treturn\n\tcase time.Duration:\n\t\te.stringv(tag, reflect.ValueOf(value.String()))\n\t\treturn\n\tcase Marshaler:\n\t\tv, err := value.MarshalYAML()\n\t\tif err != nil {\n\t\t\tfail(err)\n\t\t}\n\t\tif v == nil {\n\t\t\te.nilv()\n\t\t\treturn\n\t\t}\n\t\te.marshal(tag, reflect.ValueOf(v))\n\t\treturn\n\tcase encoding.TextMarshaler:\n\t\ttext, err := value.MarshalText()\n\t\tif err != nil {\n\t\t\tfail(err)\n\t\t}\n\t\tin = reflect.ValueOf(string(text))\n\tcase nil:\n\t\te.nilv()\n\t\treturn\n\t}\n\tswitch in.Kind() {\n\tcase reflect.Interface:\n\t\te.marshal(tag, in.Elem())\n\tcase reflect.Map:\n\t\te.mapv(tag, in)\n\tcase reflect.Ptr:\n\t\te.marshal(tag, in.Elem())\n\tcase reflect.Struct:\n\t\te.structv(tag, in)\n\tcase reflect.Slice, reflect.Array:\n\t\te.slicev(tag, in)\n\tcase reflect.String:\n\t\te.stringv(tag, in)\n\tcase reflect.Int, reflect.Int8, 
reflect.Int16, reflect.Int32, reflect.Int64:\n\t\te.intv(tag, in)\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:\n\t\te.uintv(tag, in)\n\tcase reflect.Float32, reflect.Float64:\n\t\te.floatv(tag, in)\n\tcase reflect.Bool:\n\t\te.boolv(tag, in)\n\tdefault:\n\t\tpanic(\"cannot marshal type: \" + in.Type().String())\n\t}\n}\n\nfunc (e *encoder) mapv(tag string, in reflect.Value) {\n\te.mappingv(tag, func() {\n\t\tkeys := keyList(in.MapKeys())\n\t\tsort.Sort(keys)\n\t\tfor _, k := range keys {\n\t\t\te.marshal(\"\", k)\n\t\t\te.marshal(\"\", in.MapIndex(k))\n\t\t}\n\t})\n}\n\nfunc (e *encoder) fieldByIndex(v reflect.Value, index []int) (field reflect.Value) {\n\tfor _, num := range index {\n\t\tfor {\n\t\t\tif v.Kind() == reflect.Ptr {\n\t\t\t\tif v.IsNil() {\n\t\t\t\t\treturn reflect.Value{}\n\t\t\t\t}\n\t\t\t\tv = v.Elem()\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t\tv = v.Field(num)\n\t}\n\treturn v\n}\n\nfunc (e *encoder) structv(tag string, in reflect.Value) {\n\tsinfo, err := getStructInfo(in.Type())\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\te.mappingv(tag, func() {\n\t\tfor _, info := range sinfo.FieldsList {\n\t\t\tvar value reflect.Value\n\t\t\tif info.Inline == nil {\n\t\t\t\tvalue = in.Field(info.Num)\n\t\t\t} else {\n\t\t\t\tvalue = e.fieldByIndex(in, info.Inline)\n\t\t\t\tif !value.IsValid() {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t}\n\t\t\tif info.OmitEmpty && isZero(value) {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\te.marshal(\"\", reflect.ValueOf(info.Key))\n\t\t\te.flow = info.Flow\n\t\t\te.marshal(\"\", value)\n\t\t}\n\t\tif sinfo.InlineMap >= 0 {\n\t\t\tm := in.Field(sinfo.InlineMap)\n\t\t\tif m.Len() > 0 {\n\t\t\t\te.flow = false\n\t\t\t\tkeys := keyList(m.MapKeys())\n\t\t\t\tsort.Sort(keys)\n\t\t\t\tfor _, k := range keys {\n\t\t\t\t\tif _, found := sinfo.FieldsMap[k.String()]; found {\n\t\t\t\t\t\tpanic(fmt.Sprintf(\"cannot have key %q in inlined map: conflicts with struct field\", 
k.String()))\n\t\t\t\t\t}\n\t\t\t\t\te.marshal(\"\", k)\n\t\t\t\t\te.flow = false\n\t\t\t\t\te.marshal(\"\", m.MapIndex(k))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc (e *encoder) mappingv(tag string, f func()) {\n\timplicit := tag == \"\"\n\tstyle := yaml_BLOCK_MAPPING_STYLE\n\tif e.flow {\n\t\te.flow = false\n\t\tstyle = yaml_FLOW_MAPPING_STYLE\n\t}\n\tyaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style)\n\te.emit()\n\tf()\n\tyaml_mapping_end_event_initialize(&e.event)\n\te.emit()\n}\n\nfunc (e *encoder) slicev(tag string, in reflect.Value) {\n\timplicit := tag == \"\"\n\tstyle := yaml_BLOCK_SEQUENCE_STYLE\n\tif e.flow {\n\t\te.flow = false\n\t\tstyle = yaml_FLOW_SEQUENCE_STYLE\n\t}\n\te.must(yaml_sequence_start_event_initialize(&e.event, nil, []byte(tag), implicit, style))\n\te.emit()\n\tn := in.Len()\n\tfor i := 0; i < n; i++ {\n\t\te.marshal(\"\", in.Index(i))\n\t}\n\te.must(yaml_sequence_end_event_initialize(&e.event))\n\te.emit()\n}\n\n// isBase60 returns whether s is in base 60 notation as defined in YAML 1.1.\n//\n// The base 60 float notation in YAML 1.1 is a terrible idea and is unsupported\n// in YAML 1.2 and by this package, but these should be marshalled quoted for\n// the time being for compatibility with other parsers.\nfunc isBase60Float(s string) (result bool) {\n\t// Fast path.\n\tif s == \"\" {\n\t\treturn false\n\t}\n\tc := s[0]\n\tif !(c == '+' || c == '-' || c >= '0' && c <= '9') || strings.IndexByte(s, ':') < 0 {\n\t\treturn false\n\t}\n\t// Do the full match.\n\treturn base60float.MatchString(s)\n}\n\n// From http://yaml.org/type/float.html, except the regular expression there\n// is bogus. 
In practice parsers do not enforce the \"\\.[0-9_]*\" suffix.\nvar base60float = regexp.MustCompile(`^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+(?:\\.[0-9_]*)?$`)\n\n// isOldBool returns whether s is bool notation as defined in YAML 1.1.\n//\n// We continue to force strings that YAML 1.1 would interpret as booleans to be\n// rendered as quotes strings so that the marshalled output valid for YAML 1.1\n// parsing.\nfunc isOldBool(s string) (result bool) {\n\tswitch s {\n\tcase \"y\", \"Y\", \"yes\", \"Yes\", \"YES\", \"on\", \"On\", \"ON\",\n\t\t\"n\", \"N\", \"no\", \"No\", \"NO\", \"off\", \"Off\", \"OFF\":\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\nfunc (e *encoder) stringv(tag string, in reflect.Value) {\n\tvar style yaml_scalar_style_t\n\ts := in.String()\n\tcanUsePlain := true\n\tswitch {\n\tcase !utf8.ValidString(s):\n\t\tif tag == binaryTag {\n\t\t\tfailf(\"explicitly tagged !!binary data must be base64-encoded\")\n\t\t}\n\t\tif tag != \"\" {\n\t\t\tfailf(\"cannot marshal invalid UTF-8 data as %s\", shortTag(tag))\n\t\t}\n\t\t// It can't be encoded directly as YAML so use a binary tag\n\t\t// and encode it as base64.\n\t\ttag = binaryTag\n\t\ts = encodeBase64(s)\n\tcase tag == \"\":\n\t\t// Check to see if it would resolve to a specific\n\t\t// tag when encoded unquoted. 
If it doesn't,\n\t\t// there's no need to quote it.\n\t\trtag, _ := resolve(\"\", s)\n\t\tcanUsePlain = rtag == strTag && !(isBase60Float(s) || isOldBool(s))\n\t}\n\t// Note: it's possible for user code to emit invalid YAML\n\t// if they explicitly specify a tag and a string containing\n\t// text that's incompatible with that tag.\n\tswitch {\n\tcase strings.Contains(s, \"\\n\"):\n\t\tif e.flow {\n\t\t\tstyle = yaml_DOUBLE_QUOTED_SCALAR_STYLE\n\t\t} else {\n\t\t\tstyle = yaml_LITERAL_SCALAR_STYLE\n\t\t}\n\tcase canUsePlain:\n\t\tstyle = yaml_PLAIN_SCALAR_STYLE\n\tdefault:\n\t\tstyle = yaml_DOUBLE_QUOTED_SCALAR_STYLE\n\t}\n\te.emitScalar(s, \"\", tag, style, nil, nil, nil, nil)\n}\n\nfunc (e *encoder) boolv(tag string, in reflect.Value) {\n\tvar s string\n\tif in.Bool() {\n\t\ts = \"true\"\n\t} else {\n\t\ts = \"false\"\n\t}\n\te.emitScalar(s, \"\", tag, yaml_PLAIN_SCALAR_STYLE, nil, nil, nil, nil)\n}\n\nfunc (e *encoder) intv(tag string, in reflect.Value) {\n\ts := strconv.FormatInt(in.Int(), 10)\n\te.emitScalar(s, \"\", tag, yaml_PLAIN_SCALAR_STYLE, nil, nil, nil, nil)\n}\n\nfunc (e *encoder) uintv(tag string, in reflect.Value) {\n\ts := strconv.FormatUint(in.Uint(), 10)\n\te.emitScalar(s, \"\", tag, yaml_PLAIN_SCALAR_STYLE, nil, nil, nil, nil)\n}\n\nfunc (e *encoder) timev(tag string, in reflect.Value) {\n\tt := in.Interface().(time.Time)\n\ts := t.Format(time.RFC3339Nano)\n\te.emitScalar(s, \"\", tag, yaml_PLAIN_SCALAR_STYLE, nil, nil, nil, nil)\n}\n\nfunc (e *encoder) floatv(tag string, in reflect.Value) {\n\t// Issue #352: When formatting, use the precision of the underlying value\n\tprecision := 64\n\tif in.Kind() == reflect.Float32 {\n\t\tprecision = 32\n\t}\n\n\ts := strconv.FormatFloat(in.Float(), 'g', -1, precision)\n\tswitch s {\n\tcase \"+Inf\":\n\t\ts = \".inf\"\n\tcase \"-Inf\":\n\t\ts = \"-.inf\"\n\tcase \"NaN\":\n\t\ts = \".nan\"\n\t}\n\te.emitScalar(s, \"\", tag, yaml_PLAIN_SCALAR_STYLE, nil, nil, nil, nil)\n}\n\nfunc (e *encoder) nilv() 
{\n\te.emitScalar(\"null\", \"\", \"\", yaml_PLAIN_SCALAR_STYLE, nil, nil, nil, nil)\n}\n\nfunc (e *encoder) emitScalar(value, anchor, tag string, style yaml_scalar_style_t, head, line, foot, tail []byte) {\n\t// TODO Kill this function. Replace all initialize calls by their underlining Go literals.\n\timplicit := tag == \"\"\n\tif !implicit {\n\t\ttag = longTag(tag)\n\t}\n\te.must(yaml_scalar_event_initialize(&e.event, []byte(anchor), []byte(tag), []byte(value), implicit, implicit, style))\n\te.event.head_comment = head\n\te.event.line_comment = line\n\te.event.foot_comment = foot\n\te.event.tail_comment = tail\n\te.emit()\n}\n\nfunc (e *encoder) nodev(in reflect.Value) {\n\te.node(in.Interface().(*Node), \"\")\n}\n\nfunc (e *encoder) node(node *Node, tail string) {\n\t// Zero nodes behave as nil.\n\tif node.Kind == 0 && node.IsZero() {\n\t\te.nilv()\n\t\treturn\n\t}\n\n\t// If the tag was not explicitly requested, and dropping it won't change the\n\t// implicit tag of the value, don't include it in the presentation.\n\tvar tag = node.Tag\n\tvar stag = shortTag(tag)\n\tvar forceQuoting bool\n\tif tag != \"\" && node.Style&TaggedStyle == 0 {\n\t\tif node.Kind == ScalarNode {\n\t\t\tif stag == strTag && node.Style&(SingleQuotedStyle|DoubleQuotedStyle|LiteralStyle|FoldedStyle) != 0 {\n\t\t\t\ttag = \"\"\n\t\t\t} else {\n\t\t\t\trtag, _ := resolve(\"\", node.Value)\n\t\t\t\tif rtag == stag {\n\t\t\t\t\ttag = \"\"\n\t\t\t\t} else if stag == strTag {\n\t\t\t\t\ttag = \"\"\n\t\t\t\t\tforceQuoting = true\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tvar rtag string\n\t\t\tswitch node.Kind {\n\t\t\tcase MappingNode:\n\t\t\t\trtag = mapTag\n\t\t\tcase SequenceNode:\n\t\t\t\trtag = seqTag\n\t\t\t}\n\t\t\tif rtag == stag {\n\t\t\t\ttag = \"\"\n\t\t\t}\n\t\t}\n\t}\n\n\tswitch node.Kind {\n\tcase DocumentNode:\n\t\tyaml_document_start_event_initialize(&e.event, nil, nil, true)\n\t\te.event.head_comment = []byte(node.HeadComment)\n\t\te.emit()\n\t\tfor _, node := range node.Content 
{\n\t\t\te.node(node, \"\")\n\t\t}\n\t\tyaml_document_end_event_initialize(&e.event, true)\n\t\te.event.foot_comment = []byte(node.FootComment)\n\t\te.emit()\n\n\tcase SequenceNode:\n\t\tstyle := yaml_BLOCK_SEQUENCE_STYLE\n\t\tif node.Style&FlowStyle != 0 {\n\t\t\tstyle = yaml_FLOW_SEQUENCE_STYLE\n\t\t}\n\t\te.must(yaml_sequence_start_event_initialize(&e.event, []byte(node.Anchor), []byte(longTag(tag)), tag == \"\", style))\n\t\te.event.head_comment = []byte(node.HeadComment)\n\t\te.emit()\n\t\tfor _, node := range node.Content {\n\t\t\te.node(node, \"\")\n\t\t}\n\t\te.must(yaml_sequence_end_event_initialize(&e.event))\n\t\te.event.line_comment = []byte(node.LineComment)\n\t\te.event.foot_comment = []byte(node.FootComment)\n\t\te.emit()\n\n\tcase MappingNode:\n\t\tstyle := yaml_BLOCK_MAPPING_STYLE\n\t\tif node.Style&FlowStyle != 0 {\n\t\t\tstyle = yaml_FLOW_MAPPING_STYLE\n\t\t}\n\t\tyaml_mapping_start_event_initialize(&e.event, []byte(node.Anchor), []byte(longTag(tag)), tag == \"\", style)\n\t\te.event.tail_comment = []byte(tail)\n\t\te.event.head_comment = []byte(node.HeadComment)\n\t\te.emit()\n\n\t\t// The tail logic below moves the foot comment of prior keys to the following key,\n\t\t// since the value for each key may be a nested structure and the foot needs to be\n\t\t// processed only the entirety of the value is streamed. 
The last tail is processed\n\t\t// with the mapping end event.\n\t\tvar tail string\n\t\tfor i := 0; i+1 < len(node.Content); i += 2 {\n\t\t\tk := node.Content[i]\n\t\t\tfoot := k.FootComment\n\t\t\tif foot != \"\" {\n\t\t\t\tkopy := *k\n\t\t\t\tkopy.FootComment = \"\"\n\t\t\t\tk = &kopy\n\t\t\t}\n\t\t\te.node(k, tail)\n\t\t\ttail = foot\n\n\t\t\tv := node.Content[i+1]\n\t\t\te.node(v, \"\")\n\t\t}\n\n\t\tyaml_mapping_end_event_initialize(&e.event)\n\t\te.event.tail_comment = []byte(tail)\n\t\te.event.line_comment = []byte(node.LineComment)\n\t\te.event.foot_comment = []byte(node.FootComment)\n\t\te.emit()\n\n\tcase AliasNode:\n\t\tyaml_alias_event_initialize(&e.event, []byte(node.Value))\n\t\te.event.head_comment = []byte(node.HeadComment)\n\t\te.event.line_comment = []byte(node.LineComment)\n\t\te.event.foot_comment = []byte(node.FootComment)\n\t\te.emit()\n\n\tcase ScalarNode:\n\t\tvalue := node.Value\n\t\tif !utf8.ValidString(value) {\n\t\t\tif stag == binaryTag {\n\t\t\t\tfailf(\"explicitly tagged !!binary data must be base64-encoded\")\n\t\t\t}\n\t\t\tif stag != \"\" {\n\t\t\t\tfailf(\"cannot marshal invalid UTF-8 data as %s\", stag)\n\t\t\t}\n\t\t\t// It can't be encoded directly as YAML so use a binary tag\n\t\t\t// and encode it as base64.\n\t\t\ttag = binaryTag\n\t\t\tvalue = encodeBase64(value)\n\t\t}\n\n\t\tstyle := yaml_PLAIN_SCALAR_STYLE\n\t\tswitch {\n\t\tcase node.Style&DoubleQuotedStyle != 0:\n\t\t\tstyle = yaml_DOUBLE_QUOTED_SCALAR_STYLE\n\t\tcase node.Style&SingleQuotedStyle != 0:\n\t\t\tstyle = yaml_SINGLE_QUOTED_SCALAR_STYLE\n\t\tcase node.Style&LiteralStyle != 0:\n\t\t\tstyle = yaml_LITERAL_SCALAR_STYLE\n\t\tcase node.Style&FoldedStyle != 0:\n\t\t\tstyle = yaml_FOLDED_SCALAR_STYLE\n\t\tcase strings.Contains(value, \"\\n\"):\n\t\t\tstyle = yaml_LITERAL_SCALAR_STYLE\n\t\tcase forceQuoting:\n\t\t\tstyle = yaml_DOUBLE_QUOTED_SCALAR_STYLE\n\t\t}\n\n\t\te.emitScalar(value, node.Anchor, tag, style, []byte(node.HeadComment), []byte(node.LineComment), 
[]byte(node.FootComment), []byte(tail))\n\tdefault:\n\t\tfailf(\"cannot encode node with unknown kind %d\", node.Kind)\n\t}\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/parserc.go",
    "content": "//\n// Copyright (c) 2011-2019 Canonical Ltd\n// Copyright (c) 2006-2010 Kirill Simonov\n//\n// Permission is hereby granted, free of charge, to any person obtaining a copy of\n// this software and associated documentation files (the \"Software\"), to deal in\n// the Software without restriction, including without limitation the rights to\n// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\n// of the Software, and to permit persons to whom the Software is furnished to do\n// so, subject to the following conditions:\n//\n// The above copyright notice and this permission notice shall be included in all\n// copies or substantial portions of the Software.\n//\n// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n// SOFTWARE.\n\npackage yaml\n\nimport (\n\t\"bytes\"\n)\n\n// The parser implements the following grammar:\n//\n// stream               ::= STREAM-START implicit_document? explicit_document* STREAM-END\n// implicit_document    ::= block_node DOCUMENT-END*\n// explicit_document    ::= DIRECTIVE* DOCUMENT-START block_node? 
DOCUMENT-END*\n// block_node_or_indentless_sequence    ::=\n//                          ALIAS\n//                          | properties (block_content | indentless_block_sequence)?\n//                          | block_content\n//                          | indentless_block_sequence\n// block_node           ::= ALIAS\n//                          | properties block_content?\n//                          | block_content\n// flow_node            ::= ALIAS\n//                          | properties flow_content?\n//                          | flow_content\n// properties           ::= TAG ANCHOR? | ANCHOR TAG?\n// block_content        ::= block_collection | flow_collection | SCALAR\n// flow_content         ::= flow_collection | SCALAR\n// block_collection     ::= block_sequence | block_mapping\n// flow_collection      ::= flow_sequence | flow_mapping\n// block_sequence       ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END\n// indentless_sequence  ::= (BLOCK-ENTRY block_node?)+\n// block_mapping        ::= BLOCK-MAPPING_START\n//                          ((KEY block_node_or_indentless_sequence?)?\n//                          (VALUE block_node_or_indentless_sequence?)?)*\n//                          BLOCK-END\n// flow_sequence        ::= FLOW-SEQUENCE-START\n//                          (flow_sequence_entry FLOW-ENTRY)*\n//                          flow_sequence_entry?\n//                          FLOW-SEQUENCE-END\n// flow_sequence_entry  ::= flow_node | KEY flow_node? (VALUE flow_node?)?\n// flow_mapping         ::= FLOW-MAPPING-START\n//                          (flow_mapping_entry FLOW-ENTRY)*\n//                          flow_mapping_entry?\n//                          FLOW-MAPPING-END\n// flow_mapping_entry   ::= flow_node | KEY flow_node? 
(VALUE flow_node?)?\n\n// Peek the next token in the token queue.\nfunc peek_token(parser *yaml_parser_t) *yaml_token_t {\n\tif parser.token_available || yaml_parser_fetch_more_tokens(parser) {\n\t\ttoken := &parser.tokens[parser.tokens_head]\n\t\tyaml_parser_unfold_comments(parser, token)\n\t\treturn token\n\t}\n\treturn nil\n}\n\n// yaml_parser_unfold_comments walks through the comments queue and joins all\n// comments behind the position of the provided token into the respective\n// top-level comment slices in the parser.\nfunc yaml_parser_unfold_comments(parser *yaml_parser_t, token *yaml_token_t) {\n\tfor parser.comments_head < len(parser.comments) && token.start_mark.index >= parser.comments[parser.comments_head].token_mark.index {\n\t\tcomment := &parser.comments[parser.comments_head]\n\t\tif len(comment.head) > 0 {\n\t\t\tif token.typ == yaml_BLOCK_END_TOKEN {\n\t\t\t\t// No heads on ends, so keep comment.head for a follow up token.\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tif len(parser.head_comment) > 0 {\n\t\t\t\tparser.head_comment = append(parser.head_comment, '\\n')\n\t\t\t}\n\t\t\tparser.head_comment = append(parser.head_comment, comment.head...)\n\t\t}\n\t\tif len(comment.foot) > 0 {\n\t\t\tif len(parser.foot_comment) > 0 {\n\t\t\t\tparser.foot_comment = append(parser.foot_comment, '\\n')\n\t\t\t}\n\t\t\tparser.foot_comment = append(parser.foot_comment, comment.foot...)\n\t\t}\n\t\tif len(comment.line) > 0 {\n\t\t\tif len(parser.line_comment) > 0 {\n\t\t\t\tparser.line_comment = append(parser.line_comment, '\\n')\n\t\t\t}\n\t\t\tparser.line_comment = append(parser.line_comment, comment.line...)\n\t\t}\n\t\t*comment = yaml_comment_t{}\n\t\tparser.comments_head++\n\t}\n}\n\n// Remove the next token from the queue (must be called after peek_token).\nfunc skip_token(parser *yaml_parser_t) {\n\tparser.token_available = false\n\tparser.tokens_parsed++\n\tparser.stream_end_produced = parser.tokens[parser.tokens_head].typ == 
yaml_STREAM_END_TOKEN\n\tparser.tokens_head++\n}\n\n// Get the next event.\nfunc yaml_parser_parse(parser *yaml_parser_t, event *yaml_event_t) bool {\n\t// Erase the event object.\n\t*event = yaml_event_t{}\n\n\t// No events after the end of the stream or error.\n\tif parser.stream_end_produced || parser.error != yaml_NO_ERROR || parser.state == yaml_PARSE_END_STATE {\n\t\treturn true\n\t}\n\n\t// Generate the next event.\n\treturn yaml_parser_state_machine(parser, event)\n}\n\n// Set parser error.\nfunc yaml_parser_set_parser_error(parser *yaml_parser_t, problem string, problem_mark yaml_mark_t) bool {\n\tparser.error = yaml_PARSER_ERROR\n\tparser.problem = problem\n\tparser.problem_mark = problem_mark\n\treturn false\n}\n\nfunc yaml_parser_set_parser_error_context(parser *yaml_parser_t, context string, context_mark yaml_mark_t, problem string, problem_mark yaml_mark_t) bool {\n\tparser.error = yaml_PARSER_ERROR\n\tparser.context = context\n\tparser.context_mark = context_mark\n\tparser.problem = problem\n\tparser.problem_mark = problem_mark\n\treturn false\n}\n\n// State dispatcher.\nfunc yaml_parser_state_machine(parser *yaml_parser_t, event *yaml_event_t) bool {\n\t//trace(\"yaml_parser_state_machine\", \"state:\", parser.state.String())\n\n\tswitch parser.state {\n\tcase yaml_PARSE_STREAM_START_STATE:\n\t\treturn yaml_parser_parse_stream_start(parser, event)\n\n\tcase yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE:\n\t\treturn yaml_parser_parse_document_start(parser, event, true)\n\n\tcase yaml_PARSE_DOCUMENT_START_STATE:\n\t\treturn yaml_parser_parse_document_start(parser, event, false)\n\n\tcase yaml_PARSE_DOCUMENT_CONTENT_STATE:\n\t\treturn yaml_parser_parse_document_content(parser, event)\n\n\tcase yaml_PARSE_DOCUMENT_END_STATE:\n\t\treturn yaml_parser_parse_document_end(parser, event)\n\n\tcase yaml_PARSE_BLOCK_NODE_STATE:\n\t\treturn yaml_parser_parse_node(parser, event, true, false)\n\n\tcase yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE:\n\t\treturn 
yaml_parser_parse_node(parser, event, true, true)\n\n\tcase yaml_PARSE_FLOW_NODE_STATE:\n\t\treturn yaml_parser_parse_node(parser, event, false, false)\n\n\tcase yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE:\n\t\treturn yaml_parser_parse_block_sequence_entry(parser, event, true)\n\n\tcase yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE:\n\t\treturn yaml_parser_parse_block_sequence_entry(parser, event, false)\n\n\tcase yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE:\n\t\treturn yaml_parser_parse_indentless_sequence_entry(parser, event)\n\n\tcase yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE:\n\t\treturn yaml_parser_parse_block_mapping_key(parser, event, true)\n\n\tcase yaml_PARSE_BLOCK_MAPPING_KEY_STATE:\n\t\treturn yaml_parser_parse_block_mapping_key(parser, event, false)\n\n\tcase yaml_PARSE_BLOCK_MAPPING_VALUE_STATE:\n\t\treturn yaml_parser_parse_block_mapping_value(parser, event)\n\n\tcase yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE:\n\t\treturn yaml_parser_parse_flow_sequence_entry(parser, event, true)\n\n\tcase yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE:\n\t\treturn yaml_parser_parse_flow_sequence_entry(parser, event, false)\n\n\tcase yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE:\n\t\treturn yaml_parser_parse_flow_sequence_entry_mapping_key(parser, event)\n\n\tcase yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE:\n\t\treturn yaml_parser_parse_flow_sequence_entry_mapping_value(parser, event)\n\n\tcase yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE:\n\t\treturn yaml_parser_parse_flow_sequence_entry_mapping_end(parser, event)\n\n\tcase yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE:\n\t\treturn yaml_parser_parse_flow_mapping_key(parser, event, true)\n\n\tcase yaml_PARSE_FLOW_MAPPING_KEY_STATE:\n\t\treturn yaml_parser_parse_flow_mapping_key(parser, event, false)\n\n\tcase yaml_PARSE_FLOW_MAPPING_VALUE_STATE:\n\t\treturn yaml_parser_parse_flow_mapping_value(parser, event, false)\n\n\tcase yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE:\n\t\treturn yaml_parser_parse_flow_mapping_value(parser, 
event, true)\n\n\tdefault:\n\t\tpanic(\"invalid parser state\")\n\t}\n}\n\n// Parse the production:\n// stream   ::= STREAM-START implicit_document? explicit_document* STREAM-END\n//              ************\nfunc yaml_parser_parse_stream_start(parser *yaml_parser_t, event *yaml_event_t) bool {\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\tif token.typ != yaml_STREAM_START_TOKEN {\n\t\treturn yaml_parser_set_parser_error(parser, \"did not find expected <stream-start>\", token.start_mark)\n\t}\n\tparser.state = yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE\n\t*event = yaml_event_t{\n\t\ttyp:        yaml_STREAM_START_EVENT,\n\t\tstart_mark: token.start_mark,\n\t\tend_mark:   token.end_mark,\n\t\tencoding:   token.encoding,\n\t}\n\tskip_token(parser)\n\treturn true\n}\n\n// Parse the productions:\n// implicit_document    ::= block_node DOCUMENT-END*\n//                          *\n// explicit_document    ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*\n//                          *************************\nfunc yaml_parser_parse_document_start(parser *yaml_parser_t, event *yaml_event_t, implicit bool) bool {\n\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\n\t// Parse extra document end indicators.\n\tif !implicit {\n\t\tfor token.typ == yaml_DOCUMENT_END_TOKEN {\n\t\t\tskip_token(parser)\n\t\t\ttoken = peek_token(parser)\n\t\t\tif token == nil {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t}\n\n\tif implicit && token.typ != yaml_VERSION_DIRECTIVE_TOKEN &&\n\t\ttoken.typ != yaml_TAG_DIRECTIVE_TOKEN &&\n\t\ttoken.typ != yaml_DOCUMENT_START_TOKEN &&\n\t\ttoken.typ != yaml_STREAM_END_TOKEN {\n\t\t// Parse an implicit document.\n\t\tif !yaml_parser_process_directives(parser, nil, nil) {\n\t\t\treturn false\n\t\t}\n\t\tparser.states = append(parser.states, yaml_PARSE_DOCUMENT_END_STATE)\n\t\tparser.state = yaml_PARSE_BLOCK_NODE_STATE\n\n\t\tvar head_comment []byte\n\t\tif len(parser.head_comment) > 0 {\n\t\t\t// 
[Go] Scan the header comment backwards, and if an empty line is found, break\n\t\t\t//      the header so the part before the last empty line goes into the\n\t\t\t//      document header, while the bottom of it goes into a follow up event.\n\t\t\tfor i := len(parser.head_comment) - 1; i > 0; i-- {\n\t\t\t\tif parser.head_comment[i] == '\\n' {\n\t\t\t\t\tif i == len(parser.head_comment)-1 {\n\t\t\t\t\t\thead_comment = parser.head_comment[:i]\n\t\t\t\t\t\tparser.head_comment = parser.head_comment[i+1:]\n\t\t\t\t\t\tbreak\n\t\t\t\t\t} else if parser.head_comment[i-1] == '\\n' {\n\t\t\t\t\t\thead_comment = parser.head_comment[:i-1]\n\t\t\t\t\t\tparser.head_comment = parser.head_comment[i+1:]\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_DOCUMENT_START_EVENT,\n\t\t\tstart_mark: token.start_mark,\n\t\t\tend_mark:   token.end_mark,\n\n\t\t\thead_comment: head_comment,\n\t\t}\n\n\t} else if token.typ != yaml_STREAM_END_TOKEN {\n\t\t// Parse an explicit document.\n\t\tvar version_directive *yaml_version_directive_t\n\t\tvar tag_directives []yaml_tag_directive_t\n\t\tstart_mark := token.start_mark\n\t\tif !yaml_parser_process_directives(parser, &version_directive, &tag_directives) {\n\t\t\treturn false\n\t\t}\n\t\ttoken = peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tif token.typ != yaml_DOCUMENT_START_TOKEN {\n\t\t\tyaml_parser_set_parser_error(parser,\n\t\t\t\t\"did not find expected <document start>\", token.start_mark)\n\t\t\treturn false\n\t\t}\n\t\tparser.states = append(parser.states, yaml_PARSE_DOCUMENT_END_STATE)\n\t\tparser.state = yaml_PARSE_DOCUMENT_CONTENT_STATE\n\t\tend_mark := token.end_mark\n\n\t\t*event = yaml_event_t{\n\t\t\ttyp:               yaml_DOCUMENT_START_EVENT,\n\t\t\tstart_mark:        start_mark,\n\t\t\tend_mark:          end_mark,\n\t\t\tversion_directive: version_directive,\n\t\t\ttag_directives:    tag_directives,\n\t\t\timplicit:          
false,\n\t\t}\n\t\tskip_token(parser)\n\n\t} else {\n\t\t// Parse the stream end.\n\t\tparser.state = yaml_PARSE_END_STATE\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_STREAM_END_EVENT,\n\t\t\tstart_mark: token.start_mark,\n\t\t\tend_mark:   token.end_mark,\n\t\t}\n\t\tskip_token(parser)\n\t}\n\n\treturn true\n}\n\n// Parse the productions:\n// explicit_document    ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*\n//                                                    ***********\n//\nfunc yaml_parser_parse_document_content(parser *yaml_parser_t, event *yaml_event_t) bool {\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\n\tif token.typ == yaml_VERSION_DIRECTIVE_TOKEN ||\n\t\ttoken.typ == yaml_TAG_DIRECTIVE_TOKEN ||\n\t\ttoken.typ == yaml_DOCUMENT_START_TOKEN ||\n\t\ttoken.typ == yaml_DOCUMENT_END_TOKEN ||\n\t\ttoken.typ == yaml_STREAM_END_TOKEN {\n\t\tparser.state = parser.states[len(parser.states)-1]\n\t\tparser.states = parser.states[:len(parser.states)-1]\n\t\treturn yaml_parser_process_empty_scalar(parser, event,\n\t\t\ttoken.start_mark)\n\t}\n\treturn yaml_parser_parse_node(parser, event, true, false)\n}\n\n// Parse the productions:\n// implicit_document    ::= block_node DOCUMENT-END*\n//                                     *************\n// explicit_document    ::= DIRECTIVE* DOCUMENT-START block_node? 
DOCUMENT-END*\n//\nfunc yaml_parser_parse_document_end(parser *yaml_parser_t, event *yaml_event_t) bool {\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\n\tstart_mark := token.start_mark\n\tend_mark := token.start_mark\n\n\timplicit := true\n\tif token.typ == yaml_DOCUMENT_END_TOKEN {\n\t\tend_mark = token.end_mark\n\t\tskip_token(parser)\n\t\timplicit = false\n\t}\n\n\tparser.tag_directives = parser.tag_directives[:0]\n\n\tparser.state = yaml_PARSE_DOCUMENT_START_STATE\n\t*event = yaml_event_t{\n\t\ttyp:        yaml_DOCUMENT_END_EVENT,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t\timplicit:   implicit,\n\t}\n\tyaml_parser_set_event_comments(parser, event)\n\tif len(event.head_comment) > 0 && len(event.foot_comment) == 0 {\n\t\tevent.foot_comment = event.head_comment\n\t\tevent.head_comment = nil\n\t}\n\treturn true\n}\n\nfunc yaml_parser_set_event_comments(parser *yaml_parser_t, event *yaml_event_t) {\n\tevent.head_comment = parser.head_comment\n\tevent.line_comment = parser.line_comment\n\tevent.foot_comment = parser.foot_comment\n\tparser.head_comment = nil\n\tparser.line_comment = nil\n\tparser.foot_comment = nil\n\tparser.tail_comment = nil\n\tparser.stem_comment = nil\n}\n\n// Parse the productions:\n// block_node_or_indentless_sequence    ::=\n//                          ALIAS\n//                          *****\n//                          | properties (block_content | indentless_block_sequence)?\n//                            **********  *\n//                          | block_content | indentless_block_sequence\n//                            *\n// block_node           ::= ALIAS\n//                          *****\n//                          | properties block_content?\n//                            ********** *\n//                          | block_content\n//                            *\n// flow_node            ::= ALIAS\n//                          *****\n//                          | properties 
flow_content?\n//                            ********** *\n//                          | flow_content\n//                            *\n// properties           ::= TAG ANCHOR? | ANCHOR TAG?\n//                          *************************\n// block_content        ::= block_collection | flow_collection | SCALAR\n//                                                               ******\n// flow_content         ::= flow_collection | SCALAR\n//                                            ******\nfunc yaml_parser_parse_node(parser *yaml_parser_t, event *yaml_event_t, block, indentless_sequence bool) bool {\n\t//defer trace(\"yaml_parser_parse_node\", \"block:\", block, \"indentless_sequence:\", indentless_sequence)()\n\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\n\tif token.typ == yaml_ALIAS_TOKEN {\n\t\tparser.state = parser.states[len(parser.states)-1]\n\t\tparser.states = parser.states[:len(parser.states)-1]\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_ALIAS_EVENT,\n\t\t\tstart_mark: token.start_mark,\n\t\t\tend_mark:   token.end_mark,\n\t\t\tanchor:     token.value,\n\t\t}\n\t\tyaml_parser_set_event_comments(parser, event)\n\t\tskip_token(parser)\n\t\treturn true\n\t}\n\n\tstart_mark := token.start_mark\n\tend_mark := token.start_mark\n\n\tvar tag_token bool\n\tvar tag_handle, tag_suffix, anchor []byte\n\tvar tag_mark yaml_mark_t\n\tif token.typ == yaml_ANCHOR_TOKEN {\n\t\tanchor = token.value\n\t\tstart_mark = token.start_mark\n\t\tend_mark = token.end_mark\n\t\tskip_token(parser)\n\t\ttoken = peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tif token.typ == yaml_TAG_TOKEN {\n\t\t\ttag_token = true\n\t\t\ttag_handle = token.value\n\t\t\ttag_suffix = token.suffix\n\t\t\ttag_mark = token.start_mark\n\t\t\tend_mark = token.end_mark\n\t\t\tskip_token(parser)\n\t\t\ttoken = peek_token(parser)\n\t\t\tif token == nil {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t} else if token.typ == yaml_TAG_TOKEN 
{\n\t\ttag_token = true\n\t\ttag_handle = token.value\n\t\ttag_suffix = token.suffix\n\t\tstart_mark = token.start_mark\n\t\ttag_mark = token.start_mark\n\t\tend_mark = token.end_mark\n\t\tskip_token(parser)\n\t\ttoken = peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tif token.typ == yaml_ANCHOR_TOKEN {\n\t\t\tanchor = token.value\n\t\t\tend_mark = token.end_mark\n\t\t\tskip_token(parser)\n\t\t\ttoken = peek_token(parser)\n\t\t\tif token == nil {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t}\n\n\tvar tag []byte\n\tif tag_token {\n\t\tif len(tag_handle) == 0 {\n\t\t\ttag = tag_suffix\n\t\t\ttag_suffix = nil\n\t\t} else {\n\t\t\tfor i := range parser.tag_directives {\n\t\t\t\tif bytes.Equal(parser.tag_directives[i].handle, tag_handle) {\n\t\t\t\t\ttag = append([]byte(nil), parser.tag_directives[i].prefix...)\n\t\t\t\t\ttag = append(tag, tag_suffix...)\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif len(tag) == 0 {\n\t\t\t\tyaml_parser_set_parser_error_context(parser,\n\t\t\t\t\t\"while parsing a node\", start_mark,\n\t\t\t\t\t\"found undefined tag handle\", tag_mark)\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t}\n\n\timplicit := len(tag) == 0\n\tif indentless_sequence && token.typ == yaml_BLOCK_ENTRY_TOKEN {\n\t\tend_mark = token.end_mark\n\t\tparser.state = yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_SEQUENCE_START_EVENT,\n\t\t\tstart_mark: start_mark,\n\t\t\tend_mark:   end_mark,\n\t\t\tanchor:     anchor,\n\t\t\ttag:        tag,\n\t\t\timplicit:   implicit,\n\t\t\tstyle:      yaml_style_t(yaml_BLOCK_SEQUENCE_STYLE),\n\t\t}\n\t\treturn true\n\t}\n\tif token.typ == yaml_SCALAR_TOKEN {\n\t\tvar plain_implicit, quoted_implicit bool\n\t\tend_mark = token.end_mark\n\t\tif (len(tag) == 0 && token.style == yaml_PLAIN_SCALAR_STYLE) || (len(tag) == 1 && tag[0] == '!') {\n\t\t\tplain_implicit = true\n\t\t} else if len(tag) == 0 {\n\t\t\tquoted_implicit = true\n\t\t}\n\t\tparser.state = 
parser.states[len(parser.states)-1]\n\t\tparser.states = parser.states[:len(parser.states)-1]\n\n\t\t*event = yaml_event_t{\n\t\t\ttyp:             yaml_SCALAR_EVENT,\n\t\t\tstart_mark:      start_mark,\n\t\t\tend_mark:        end_mark,\n\t\t\tanchor:          anchor,\n\t\t\ttag:             tag,\n\t\t\tvalue:           token.value,\n\t\t\timplicit:        plain_implicit,\n\t\t\tquoted_implicit: quoted_implicit,\n\t\t\tstyle:           yaml_style_t(token.style),\n\t\t}\n\t\tyaml_parser_set_event_comments(parser, event)\n\t\tskip_token(parser)\n\t\treturn true\n\t}\n\tif token.typ == yaml_FLOW_SEQUENCE_START_TOKEN {\n\t\t// [Go] Some of the events below can be merged as they differ only on style.\n\t\tend_mark = token.end_mark\n\t\tparser.state = yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_SEQUENCE_START_EVENT,\n\t\t\tstart_mark: start_mark,\n\t\t\tend_mark:   end_mark,\n\t\t\tanchor:     anchor,\n\t\t\ttag:        tag,\n\t\t\timplicit:   implicit,\n\t\t\tstyle:      yaml_style_t(yaml_FLOW_SEQUENCE_STYLE),\n\t\t}\n\t\tyaml_parser_set_event_comments(parser, event)\n\t\treturn true\n\t}\n\tif token.typ == yaml_FLOW_MAPPING_START_TOKEN {\n\t\tend_mark = token.end_mark\n\t\tparser.state = yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_MAPPING_START_EVENT,\n\t\t\tstart_mark: start_mark,\n\t\t\tend_mark:   end_mark,\n\t\t\tanchor:     anchor,\n\t\t\ttag:        tag,\n\t\t\timplicit:   implicit,\n\t\t\tstyle:      yaml_style_t(yaml_FLOW_MAPPING_STYLE),\n\t\t}\n\t\tyaml_parser_set_event_comments(parser, event)\n\t\treturn true\n\t}\n\tif block && token.typ == yaml_BLOCK_SEQUENCE_START_TOKEN {\n\t\tend_mark = token.end_mark\n\t\tparser.state = yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_SEQUENCE_START_EVENT,\n\t\t\tstart_mark: start_mark,\n\t\t\tend_mark:   end_mark,\n\t\t\tanchor:     anchor,\n\t\t\ttag:        
tag,\n\t\t\timplicit:   implicit,\n\t\t\tstyle:      yaml_style_t(yaml_BLOCK_SEQUENCE_STYLE),\n\t\t}\n\t\tif parser.stem_comment != nil {\n\t\t\tevent.head_comment = parser.stem_comment\n\t\t\tparser.stem_comment = nil\n\t\t}\n\t\treturn true\n\t}\n\tif block && token.typ == yaml_BLOCK_MAPPING_START_TOKEN {\n\t\tend_mark = token.end_mark\n\t\tparser.state = yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_MAPPING_START_EVENT,\n\t\t\tstart_mark: start_mark,\n\t\t\tend_mark:   end_mark,\n\t\t\tanchor:     anchor,\n\t\t\ttag:        tag,\n\t\t\timplicit:   implicit,\n\t\t\tstyle:      yaml_style_t(yaml_BLOCK_MAPPING_STYLE),\n\t\t}\n\t\tif parser.stem_comment != nil {\n\t\t\tevent.head_comment = parser.stem_comment\n\t\t\tparser.stem_comment = nil\n\t\t}\n\t\treturn true\n\t}\n\tif len(anchor) > 0 || len(tag) > 0 {\n\t\tparser.state = parser.states[len(parser.states)-1]\n\t\tparser.states = parser.states[:len(parser.states)-1]\n\n\t\t*event = yaml_event_t{\n\t\t\ttyp:             yaml_SCALAR_EVENT,\n\t\t\tstart_mark:      start_mark,\n\t\t\tend_mark:        end_mark,\n\t\t\tanchor:          anchor,\n\t\t\ttag:             tag,\n\t\t\timplicit:        implicit,\n\t\t\tquoted_implicit: false,\n\t\t\tstyle:           yaml_style_t(yaml_PLAIN_SCALAR_STYLE),\n\t\t}\n\t\treturn true\n\t}\n\n\tcontext := \"while parsing a flow node\"\n\tif block {\n\t\tcontext = \"while parsing a block node\"\n\t}\n\tyaml_parser_set_parser_error_context(parser, context, start_mark,\n\t\t\"did not find expected node content\", token.start_mark)\n\treturn false\n}\n\n// Parse the productions:\n// block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END\n//                    ********************  *********** *             *********\n//\nfunc yaml_parser_parse_block_sequence_entry(parser *yaml_parser_t, event *yaml_event_t, first bool) bool {\n\tif first {\n\t\ttoken := peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn 
false\n\t\t}\n\t\tparser.marks = append(parser.marks, token.start_mark)\n\t\tskip_token(parser)\n\t}\n\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\n\tif token.typ == yaml_BLOCK_ENTRY_TOKEN {\n\t\tmark := token.end_mark\n\t\tprior_head_len := len(parser.head_comment)\n\t\tskip_token(parser)\n\t\tyaml_parser_split_stem_comment(parser, prior_head_len)\n\t\ttoken = peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tif token.typ != yaml_BLOCK_ENTRY_TOKEN && token.typ != yaml_BLOCK_END_TOKEN {\n\t\t\tparser.states = append(parser.states, yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE)\n\t\t\treturn yaml_parser_parse_node(parser, event, true, false)\n\t\t} else {\n\t\t\tparser.state = yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE\n\t\t\treturn yaml_parser_process_empty_scalar(parser, event, mark)\n\t\t}\n\t}\n\tif token.typ == yaml_BLOCK_END_TOKEN {\n\t\tparser.state = parser.states[len(parser.states)-1]\n\t\tparser.states = parser.states[:len(parser.states)-1]\n\t\tparser.marks = parser.marks[:len(parser.marks)-1]\n\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_SEQUENCE_END_EVENT,\n\t\t\tstart_mark: token.start_mark,\n\t\t\tend_mark:   token.end_mark,\n\t\t}\n\n\t\tskip_token(parser)\n\t\treturn true\n\t}\n\n\tcontext_mark := parser.marks[len(parser.marks)-1]\n\tparser.marks = parser.marks[:len(parser.marks)-1]\n\treturn yaml_parser_set_parser_error_context(parser,\n\t\t\"while parsing a block collection\", context_mark,\n\t\t\"did not find expected '-' indicator\", token.start_mark)\n}\n\n// Parse the productions:\n// indentless_sequence  ::= (BLOCK-ENTRY block_node?)+\n//                           *********** *\nfunc yaml_parser_parse_indentless_sequence_entry(parser *yaml_parser_t, event *yaml_event_t) bool {\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\n\tif token.typ == yaml_BLOCK_ENTRY_TOKEN {\n\t\tmark := token.end_mark\n\t\tprior_head_len := 
len(parser.head_comment)\n\t\tskip_token(parser)\n\t\tyaml_parser_split_stem_comment(parser, prior_head_len)\n\t\ttoken = peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tif token.typ != yaml_BLOCK_ENTRY_TOKEN &&\n\t\t\ttoken.typ != yaml_KEY_TOKEN &&\n\t\t\ttoken.typ != yaml_VALUE_TOKEN &&\n\t\t\ttoken.typ != yaml_BLOCK_END_TOKEN {\n\t\t\tparser.states = append(parser.states, yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE)\n\t\t\treturn yaml_parser_parse_node(parser, event, true, false)\n\t\t}\n\t\tparser.state = yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE\n\t\treturn yaml_parser_process_empty_scalar(parser, event, mark)\n\t}\n\tparser.state = parser.states[len(parser.states)-1]\n\tparser.states = parser.states[:len(parser.states)-1]\n\n\t*event = yaml_event_t{\n\t\ttyp:        yaml_SEQUENCE_END_EVENT,\n\t\tstart_mark: token.start_mark,\n\t\tend_mark:   token.start_mark, // [Go] Shouldn't this be token.end_mark?\n\t}\n\treturn true\n}\n\n// Split stem comment from head comment.\n//\n// When a sequence or map is found under a sequence entry, the former head comment\n// is assigned to the underlying sequence or map as a whole, not the individual\n// sequence or map entry as would be expected otherwise. 
To handle this case the\n// previous head comment is moved aside as the stem comment.\nfunc yaml_parser_split_stem_comment(parser *yaml_parser_t, stem_len int) {\n\tif stem_len == 0 {\n\t\treturn\n\t}\n\n\ttoken := peek_token(parser)\n\tif token == nil || token.typ != yaml_BLOCK_SEQUENCE_START_TOKEN && token.typ != yaml_BLOCK_MAPPING_START_TOKEN {\n\t\treturn\n\t}\n\n\tparser.stem_comment = parser.head_comment[:stem_len]\n\tif len(parser.head_comment) == stem_len {\n\t\tparser.head_comment = nil\n\t} else {\n\t\t// Copy suffix to prevent very strange bugs if someone ever appends\n\t\t// further bytes to the prefix in the stem_comment slice above.\n\t\tparser.head_comment = append([]byte(nil), parser.head_comment[stem_len+1:]...)\n\t}\n}\n\n// Parse the productions:\n// block_mapping        ::= BLOCK-MAPPING_START\n//                          *******************\n//                          ((KEY block_node_or_indentless_sequence?)?\n//                            *** *\n//                          (VALUE block_node_or_indentless_sequence?)?)*\n//\n//                          BLOCK-END\n//                          *********\n//\nfunc yaml_parser_parse_block_mapping_key(parser *yaml_parser_t, event *yaml_event_t, first bool) bool {\n\tif first {\n\t\ttoken := peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tparser.marks = append(parser.marks, token.start_mark)\n\t\tskip_token(parser)\n\t}\n\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\n\t// [Go] A tail comment was left from the prior mapping value processed. 
Emit an event\n\t//      as it needs to be processed with that value and not the following key.\n\tif len(parser.tail_comment) > 0 {\n\t\t*event = yaml_event_t{\n\t\t\ttyp:          yaml_TAIL_COMMENT_EVENT,\n\t\t\tstart_mark:   token.start_mark,\n\t\t\tend_mark:     token.end_mark,\n\t\t\tfoot_comment: parser.tail_comment,\n\t\t}\n\t\tparser.tail_comment = nil\n\t\treturn true\n\t}\n\n\tif token.typ == yaml_KEY_TOKEN {\n\t\tmark := token.end_mark\n\t\tskip_token(parser)\n\t\ttoken = peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tif token.typ != yaml_KEY_TOKEN &&\n\t\t\ttoken.typ != yaml_VALUE_TOKEN &&\n\t\t\ttoken.typ != yaml_BLOCK_END_TOKEN {\n\t\t\tparser.states = append(parser.states, yaml_PARSE_BLOCK_MAPPING_VALUE_STATE)\n\t\t\treturn yaml_parser_parse_node(parser, event, true, true)\n\t\t} else {\n\t\t\tparser.state = yaml_PARSE_BLOCK_MAPPING_VALUE_STATE\n\t\t\treturn yaml_parser_process_empty_scalar(parser, event, mark)\n\t\t}\n\t} else if token.typ == yaml_BLOCK_END_TOKEN {\n\t\tparser.state = parser.states[len(parser.states)-1]\n\t\tparser.states = parser.states[:len(parser.states)-1]\n\t\tparser.marks = parser.marks[:len(parser.marks)-1]\n\t\t*event = yaml_event_t{\n\t\t\ttyp:        yaml_MAPPING_END_EVENT,\n\t\t\tstart_mark: token.start_mark,\n\t\t\tend_mark:   token.end_mark,\n\t\t}\n\t\tyaml_parser_set_event_comments(parser, event)\n\t\tskip_token(parser)\n\t\treturn true\n\t}\n\n\tcontext_mark := parser.marks[len(parser.marks)-1]\n\tparser.marks = parser.marks[:len(parser.marks)-1]\n\treturn yaml_parser_set_parser_error_context(parser,\n\t\t\"while parsing a block mapping\", context_mark,\n\t\t\"did not find expected key\", token.start_mark)\n}\n\n// Parse the productions:\n// block_mapping        ::= BLOCK-MAPPING_START\n//\n//                          ((KEY block_node_or_indentless_sequence?)?\n//\n//                          (VALUE block_node_or_indentless_sequence?)?)*\n//                           ***** *\n//             
             BLOCK-END\n//\n//\nfunc yaml_parser_parse_block_mapping_value(parser *yaml_parser_t, event *yaml_event_t) bool {\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\tif token.typ == yaml_VALUE_TOKEN {\n\t\tmark := token.end_mark\n\t\tskip_token(parser)\n\t\ttoken = peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tif token.typ != yaml_KEY_TOKEN &&\n\t\t\ttoken.typ != yaml_VALUE_TOKEN &&\n\t\t\ttoken.typ != yaml_BLOCK_END_TOKEN {\n\t\t\tparser.states = append(parser.states, yaml_PARSE_BLOCK_MAPPING_KEY_STATE)\n\t\t\treturn yaml_parser_parse_node(parser, event, true, true)\n\t\t}\n\t\tparser.state = yaml_PARSE_BLOCK_MAPPING_KEY_STATE\n\t\treturn yaml_parser_process_empty_scalar(parser, event, mark)\n\t}\n\tparser.state = yaml_PARSE_BLOCK_MAPPING_KEY_STATE\n\treturn yaml_parser_process_empty_scalar(parser, event, token.start_mark)\n}\n\n// Parse the productions:\n// flow_sequence        ::= FLOW-SEQUENCE-START\n//                          *******************\n//                          (flow_sequence_entry FLOW-ENTRY)*\n//                           *                   **********\n//                          flow_sequence_entry?\n//                          *\n//                          FLOW-SEQUENCE-END\n//                          *****************\n// flow_sequence_entry  ::= flow_node | KEY flow_node? 
(VALUE flow_node?)?\n//                          *\n//\nfunc yaml_parser_parse_flow_sequence_entry(parser *yaml_parser_t, event *yaml_event_t, first bool) bool {\n\tif first {\n\t\ttoken := peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tparser.marks = append(parser.marks, token.start_mark)\n\t\tskip_token(parser)\n\t}\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\tif token.typ != yaml_FLOW_SEQUENCE_END_TOKEN {\n\t\tif !first {\n\t\t\tif token.typ == yaml_FLOW_ENTRY_TOKEN {\n\t\t\t\tskip_token(parser)\n\t\t\t\ttoken = peek_token(parser)\n\t\t\t\tif token == nil {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tcontext_mark := parser.marks[len(parser.marks)-1]\n\t\t\t\tparser.marks = parser.marks[:len(parser.marks)-1]\n\t\t\t\treturn yaml_parser_set_parser_error_context(parser,\n\t\t\t\t\t\"while parsing a flow sequence\", context_mark,\n\t\t\t\t\t\"did not find expected ',' or ']'\", token.start_mark)\n\t\t\t}\n\t\t}\n\n\t\tif token.typ == yaml_KEY_TOKEN {\n\t\t\tparser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE\n\t\t\t*event = yaml_event_t{\n\t\t\t\ttyp:        yaml_MAPPING_START_EVENT,\n\t\t\t\tstart_mark: token.start_mark,\n\t\t\t\tend_mark:   token.end_mark,\n\t\t\t\timplicit:   true,\n\t\t\t\tstyle:      yaml_style_t(yaml_FLOW_MAPPING_STYLE),\n\t\t\t}\n\t\t\tskip_token(parser)\n\t\t\treturn true\n\t\t} else if token.typ != yaml_FLOW_SEQUENCE_END_TOKEN {\n\t\t\tparser.states = append(parser.states, yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE)\n\t\t\treturn yaml_parser_parse_node(parser, event, false, false)\n\t\t}\n\t}\n\n\tparser.state = parser.states[len(parser.states)-1]\n\tparser.states = parser.states[:len(parser.states)-1]\n\tparser.marks = parser.marks[:len(parser.marks)-1]\n\n\t*event = yaml_event_t{\n\t\ttyp:        yaml_SEQUENCE_END_EVENT,\n\t\tstart_mark: token.start_mark,\n\t\tend_mark:   token.end_mark,\n\t}\n\tyaml_parser_set_event_comments(parser, 
event)\n\n\tskip_token(parser)\n\treturn true\n}\n\n//\n// Parse the productions:\n// flow_sequence_entry  ::= flow_node | KEY flow_node? (VALUE flow_node?)?\n//                                      *** *\n//\nfunc yaml_parser_parse_flow_sequence_entry_mapping_key(parser *yaml_parser_t, event *yaml_event_t) bool {\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\tif token.typ != yaml_VALUE_TOKEN &&\n\t\ttoken.typ != yaml_FLOW_ENTRY_TOKEN &&\n\t\ttoken.typ != yaml_FLOW_SEQUENCE_END_TOKEN {\n\t\tparser.states = append(parser.states, yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE)\n\t\treturn yaml_parser_parse_node(parser, event, false, false)\n\t}\n\tmark := token.end_mark\n\tskip_token(parser)\n\tparser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE\n\treturn yaml_parser_process_empty_scalar(parser, event, mark)\n}\n\n// Parse the productions:\n// flow_sequence_entry  ::= flow_node | KEY flow_node? (VALUE flow_node?)?\n//                                                      ***** *\n//\nfunc yaml_parser_parse_flow_sequence_entry_mapping_value(parser *yaml_parser_t, event *yaml_event_t) bool {\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\tif token.typ == yaml_VALUE_TOKEN {\n\t\tskip_token(parser)\n\t\ttoken := peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tif token.typ != yaml_FLOW_ENTRY_TOKEN && token.typ != yaml_FLOW_SEQUENCE_END_TOKEN {\n\t\t\tparser.states = append(parser.states, yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE)\n\t\t\treturn yaml_parser_parse_node(parser, event, false, false)\n\t\t}\n\t}\n\tparser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE\n\treturn yaml_parser_process_empty_scalar(parser, event, token.start_mark)\n}\n\n// Parse the productions:\n// flow_sequence_entry  ::= flow_node | KEY flow_node? 
(VALUE flow_node?)?\n//                                                                      *\n//\nfunc yaml_parser_parse_flow_sequence_entry_mapping_end(parser *yaml_parser_t, event *yaml_event_t) bool {\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\tparser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE\n\t*event = yaml_event_t{\n\t\ttyp:        yaml_MAPPING_END_EVENT,\n\t\tstart_mark: token.start_mark,\n\t\tend_mark:   token.start_mark, // [Go] Shouldn't this be end_mark?\n\t}\n\treturn true\n}\n\n// Parse the productions:\n// flow_mapping         ::= FLOW-MAPPING-START\n//                          ******************\n//                          (flow_mapping_entry FLOW-ENTRY)*\n//                           *                  **********\n//                          flow_mapping_entry?\n//                          ******************\n//                          FLOW-MAPPING-END\n//                          ****************\n// flow_mapping_entry   ::= flow_node | KEY flow_node? 
(VALUE flow_node?)?\n//                          *           *** *\n//\nfunc yaml_parser_parse_flow_mapping_key(parser *yaml_parser_t, event *yaml_event_t, first bool) bool {\n\tif first {\n\t\ttoken := peek_token(parser)\n\t\tparser.marks = append(parser.marks, token.start_mark)\n\t\tskip_token(parser)\n\t}\n\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\n\tif token.typ != yaml_FLOW_MAPPING_END_TOKEN {\n\t\tif !first {\n\t\t\tif token.typ == yaml_FLOW_ENTRY_TOKEN {\n\t\t\t\tskip_token(parser)\n\t\t\t\ttoken = peek_token(parser)\n\t\t\t\tif token == nil {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tcontext_mark := parser.marks[len(parser.marks)-1]\n\t\t\t\tparser.marks = parser.marks[:len(parser.marks)-1]\n\t\t\t\treturn yaml_parser_set_parser_error_context(parser,\n\t\t\t\t\t\"while parsing a flow mapping\", context_mark,\n\t\t\t\t\t\"did not find expected ',' or '}'\", token.start_mark)\n\t\t\t}\n\t\t}\n\n\t\tif token.typ == yaml_KEY_TOKEN {\n\t\t\tskip_token(parser)\n\t\t\ttoken = peek_token(parser)\n\t\t\tif token == nil {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif token.typ != yaml_VALUE_TOKEN &&\n\t\t\t\ttoken.typ != yaml_FLOW_ENTRY_TOKEN &&\n\t\t\t\ttoken.typ != yaml_FLOW_MAPPING_END_TOKEN {\n\t\t\t\tparser.states = append(parser.states, yaml_PARSE_FLOW_MAPPING_VALUE_STATE)\n\t\t\t\treturn yaml_parser_parse_node(parser, event, false, false)\n\t\t\t} else {\n\t\t\t\tparser.state = yaml_PARSE_FLOW_MAPPING_VALUE_STATE\n\t\t\t\treturn yaml_parser_process_empty_scalar(parser, event, token.start_mark)\n\t\t\t}\n\t\t} else if token.typ != yaml_FLOW_MAPPING_END_TOKEN {\n\t\t\tparser.states = append(parser.states, yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE)\n\t\t\treturn yaml_parser_parse_node(parser, event, false, false)\n\t\t}\n\t}\n\n\tparser.state = parser.states[len(parser.states)-1]\n\tparser.states = parser.states[:len(parser.states)-1]\n\tparser.marks = parser.marks[:len(parser.marks)-1]\n\t*event = 
yaml_event_t{\n\t\ttyp:        yaml_MAPPING_END_EVENT,\n\t\tstart_mark: token.start_mark,\n\t\tend_mark:   token.end_mark,\n\t}\n\tyaml_parser_set_event_comments(parser, event)\n\tskip_token(parser)\n\treturn true\n}\n\n// Parse the productions:\n// flow_mapping_entry   ::= flow_node | KEY flow_node? (VALUE flow_node?)?\n//                                   *                  ***** *\n//\nfunc yaml_parser_parse_flow_mapping_value(parser *yaml_parser_t, event *yaml_event_t, empty bool) bool {\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\tif empty {\n\t\tparser.state = yaml_PARSE_FLOW_MAPPING_KEY_STATE\n\t\treturn yaml_parser_process_empty_scalar(parser, event, token.start_mark)\n\t}\n\tif token.typ == yaml_VALUE_TOKEN {\n\t\tskip_token(parser)\n\t\ttoken = peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t\tif token.typ != yaml_FLOW_ENTRY_TOKEN && token.typ != yaml_FLOW_MAPPING_END_TOKEN {\n\t\t\tparser.states = append(parser.states, yaml_PARSE_FLOW_MAPPING_KEY_STATE)\n\t\t\treturn yaml_parser_parse_node(parser, event, false, false)\n\t\t}\n\t}\n\tparser.state = yaml_PARSE_FLOW_MAPPING_KEY_STATE\n\treturn yaml_parser_process_empty_scalar(parser, event, token.start_mark)\n}\n\n// Generate an empty scalar event.\nfunc yaml_parser_process_empty_scalar(parser *yaml_parser_t, event *yaml_event_t, mark yaml_mark_t) bool {\n\t*event = yaml_event_t{\n\t\ttyp:        yaml_SCALAR_EVENT,\n\t\tstart_mark: mark,\n\t\tend_mark:   mark,\n\t\tvalue:      nil, // Empty\n\t\timplicit:   true,\n\t\tstyle:      yaml_style_t(yaml_PLAIN_SCALAR_STYLE),\n\t}\n\treturn true\n}\n\nvar default_tag_directives = []yaml_tag_directive_t{\n\t{[]byte(\"!\"), []byte(\"!\")},\n\t{[]byte(\"!!\"), []byte(\"tag:yaml.org,2002:\")},\n}\n\n// Parse directives.\nfunc yaml_parser_process_directives(parser *yaml_parser_t,\n\tversion_directive_ref **yaml_version_directive_t,\n\ttag_directives_ref *[]yaml_tag_directive_t) bool {\n\n\tvar version_directive 
*yaml_version_directive_t\n\tvar tag_directives []yaml_tag_directive_t\n\n\ttoken := peek_token(parser)\n\tif token == nil {\n\t\treturn false\n\t}\n\n\tfor token.typ == yaml_VERSION_DIRECTIVE_TOKEN || token.typ == yaml_TAG_DIRECTIVE_TOKEN {\n\t\tif token.typ == yaml_VERSION_DIRECTIVE_TOKEN {\n\t\t\tif version_directive != nil {\n\t\t\t\tyaml_parser_set_parser_error(parser,\n\t\t\t\t\t\"found duplicate %YAML directive\", token.start_mark)\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif token.major != 1 || token.minor != 1 {\n\t\t\t\tyaml_parser_set_parser_error(parser,\n\t\t\t\t\t\"found incompatible YAML document\", token.start_mark)\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tversion_directive = &yaml_version_directive_t{\n\t\t\t\tmajor: token.major,\n\t\t\t\tminor: token.minor,\n\t\t\t}\n\t\t} else if token.typ == yaml_TAG_DIRECTIVE_TOKEN {\n\t\t\tvalue := yaml_tag_directive_t{\n\t\t\t\thandle: token.value,\n\t\t\t\tprefix: token.prefix,\n\t\t\t}\n\t\t\tif !yaml_parser_append_tag_directive(parser, value, false, token.start_mark) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\ttag_directives = append(tag_directives, value)\n\t\t}\n\n\t\tskip_token(parser)\n\t\ttoken = peek_token(parser)\n\t\tif token == nil {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tfor i := range default_tag_directives {\n\t\tif !yaml_parser_append_tag_directive(parser, default_tag_directives[i], true, token.start_mark) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tif version_directive_ref != nil {\n\t\t*version_directive_ref = version_directive\n\t}\n\tif tag_directives_ref != nil {\n\t\t*tag_directives_ref = tag_directives\n\t}\n\treturn true\n}\n\n// Append a tag directive to the directives stack.\nfunc yaml_parser_append_tag_directive(parser *yaml_parser_t, value yaml_tag_directive_t, allow_duplicates bool, mark yaml_mark_t) bool {\n\tfor i := range parser.tag_directives {\n\t\tif bytes.Equal(value.handle, parser.tag_directives[i].handle) {\n\t\t\tif allow_duplicates {\n\t\t\t\treturn true\n\t\t\t}\n\t\t\treturn 
yaml_parser_set_parser_error(parser, \"found duplicate %TAG directive\", mark)\n\t\t}\n\t}\n\n\t// [Go] I suspect the copy is unnecessary. This was likely done\n\t// because there was no way to track ownership of the data.\n\tvalue_copy := yaml_tag_directive_t{\n\t\thandle: make([]byte, len(value.handle)),\n\t\tprefix: make([]byte, len(value.prefix)),\n\t}\n\tcopy(value_copy.handle, value.handle)\n\tcopy(value_copy.prefix, value.prefix)\n\tparser.tag_directives = append(parser.tag_directives, value_copy)\n\treturn true\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/readerc.go",
    "content": "// \n// Copyright (c) 2011-2019 Canonical Ltd\n// Copyright (c) 2006-2010 Kirill Simonov\n// \n// Permission is hereby granted, free of charge, to any person obtaining a copy of\n// this software and associated documentation files (the \"Software\"), to deal in\n// the Software without restriction, including without limitation the rights to\n// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\n// of the Software, and to permit persons to whom the Software is furnished to do\n// so, subject to the following conditions:\n// \n// The above copyright notice and this permission notice shall be included in all\n// copies or substantial portions of the Software.\n// \n// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n// SOFTWARE.\n\npackage yaml\n\nimport (\n\t\"io\"\n)\n\n// Set the reader error and return 0.\nfunc yaml_parser_set_reader_error(parser *yaml_parser_t, problem string, offset int, value int) bool {\n\tparser.error = yaml_READER_ERROR\n\tparser.problem = problem\n\tparser.problem_offset = offset\n\tparser.problem_value = value\n\treturn false\n}\n\n// Byte order marks.\nconst (\n\tbom_UTF8    = \"\\xef\\xbb\\xbf\"\n\tbom_UTF16LE = \"\\xff\\xfe\"\n\tbom_UTF16BE = \"\\xfe\\xff\"\n)\n\n// Determine the input stream encoding by checking the BOM symbol. If no BOM is\n// found, the UTF-8 encoding is assumed. 
Return 1 on success, 0 on failure.\nfunc yaml_parser_determine_encoding(parser *yaml_parser_t) bool {\n\t// Ensure that we had enough bytes in the raw buffer.\n\tfor !parser.eof && len(parser.raw_buffer)-parser.raw_buffer_pos < 3 {\n\t\tif !yaml_parser_update_raw_buffer(parser) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Determine the encoding.\n\tbuf := parser.raw_buffer\n\tpos := parser.raw_buffer_pos\n\tavail := len(buf) - pos\n\tif avail >= 2 && buf[pos] == bom_UTF16LE[0] && buf[pos+1] == bom_UTF16LE[1] {\n\t\tparser.encoding = yaml_UTF16LE_ENCODING\n\t\tparser.raw_buffer_pos += 2\n\t\tparser.offset += 2\n\t} else if avail >= 2 && buf[pos] == bom_UTF16BE[0] && buf[pos+1] == bom_UTF16BE[1] {\n\t\tparser.encoding = yaml_UTF16BE_ENCODING\n\t\tparser.raw_buffer_pos += 2\n\t\tparser.offset += 2\n\t} else if avail >= 3 && buf[pos] == bom_UTF8[0] && buf[pos+1] == bom_UTF8[1] && buf[pos+2] == bom_UTF8[2] {\n\t\tparser.encoding = yaml_UTF8_ENCODING\n\t\tparser.raw_buffer_pos += 3\n\t\tparser.offset += 3\n\t} else {\n\t\tparser.encoding = yaml_UTF8_ENCODING\n\t}\n\treturn true\n}\n\n// Update the raw buffer.\nfunc yaml_parser_update_raw_buffer(parser *yaml_parser_t) bool {\n\tsize_read := 0\n\n\t// Return if the raw buffer is full.\n\tif parser.raw_buffer_pos == 0 && len(parser.raw_buffer) == cap(parser.raw_buffer) {\n\t\treturn true\n\t}\n\n\t// Return on EOF.\n\tif parser.eof {\n\t\treturn true\n\t}\n\n\t// Move the remaining bytes in the raw buffer to the beginning.\n\tif parser.raw_buffer_pos > 0 && parser.raw_buffer_pos < len(parser.raw_buffer) {\n\t\tcopy(parser.raw_buffer, parser.raw_buffer[parser.raw_buffer_pos:])\n\t}\n\tparser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)-parser.raw_buffer_pos]\n\tparser.raw_buffer_pos = 0\n\n\t// Call the read handler to fill the buffer.\n\tsize_read, err := parser.read_handler(parser, parser.raw_buffer[len(parser.raw_buffer):cap(parser.raw_buffer)])\n\tparser.raw_buffer = 
parser.raw_buffer[:len(parser.raw_buffer)+size_read]\n\tif err == io.EOF {\n\t\tparser.eof = true\n\t} else if err != nil {\n\t\treturn yaml_parser_set_reader_error(parser, \"input error: \"+err.Error(), parser.offset, -1)\n\t}\n\treturn true\n}\n\n// Ensure that the buffer contains at least `length` characters.\n// Return true on success, false on failure.\n//\n// The length is supposed to be significantly less that the buffer size.\nfunc yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool {\n\tif parser.read_handler == nil {\n\t\tpanic(\"read handler must be set\")\n\t}\n\n\t// [Go] This function was changed to guarantee the requested length size at EOF.\n\t// The fact we need to do this is pretty awful, but the description above implies\n\t// for that to be the case, and there are tests\n\n\t// If the EOF flag is set and the raw buffer is empty, do nothing.\n\tif parser.eof && parser.raw_buffer_pos == len(parser.raw_buffer) {\n\t\t// [Go] ACTUALLY! Read the documentation of this function above.\n\t\t// This is just broken. To return true, we need to have the\n\t\t// given length in the buffer. 
Not doing that means every single\n\t\t// check that calls this function to make sure the buffer has a\n\t\t// given length is Go) panicking; or C) accessing invalid memory.\n\t\t//return true\n\t}\n\n\t// Return if the buffer contains enough characters.\n\tif parser.unread >= length {\n\t\treturn true\n\t}\n\n\t// Determine the input encoding if it is not known yet.\n\tif parser.encoding == yaml_ANY_ENCODING {\n\t\tif !yaml_parser_determine_encoding(parser) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Move the unread characters to the beginning of the buffer.\n\tbuffer_len := len(parser.buffer)\n\tif parser.buffer_pos > 0 && parser.buffer_pos < buffer_len {\n\t\tcopy(parser.buffer, parser.buffer[parser.buffer_pos:])\n\t\tbuffer_len -= parser.buffer_pos\n\t\tparser.buffer_pos = 0\n\t} else if parser.buffer_pos == buffer_len {\n\t\tbuffer_len = 0\n\t\tparser.buffer_pos = 0\n\t}\n\n\t// Open the whole buffer for writing, and cut it before returning.\n\tparser.buffer = parser.buffer[:cap(parser.buffer)]\n\n\t// Fill the buffer until it has enough characters.\n\tfirst := true\n\tfor parser.unread < length {\n\n\t\t// Fill the raw buffer if necessary.\n\t\tif !first || parser.raw_buffer_pos == len(parser.raw_buffer) {\n\t\t\tif !yaml_parser_update_raw_buffer(parser) {\n\t\t\t\tparser.buffer = parser.buffer[:buffer_len]\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tfirst = false\n\n\t\t// Decode the raw buffer.\n\tinner:\n\t\tfor parser.raw_buffer_pos != len(parser.raw_buffer) {\n\t\t\tvar value rune\n\t\t\tvar width int\n\n\t\t\traw_unread := len(parser.raw_buffer) - parser.raw_buffer_pos\n\n\t\t\t// Decode the next character.\n\t\t\tswitch parser.encoding {\n\t\t\tcase yaml_UTF8_ENCODING:\n\t\t\t\t// Decode a UTF-8 character.  Check RFC 3629\n\t\t\t\t// (http://www.ietf.org/rfc/rfc3629.txt) for more details.\n\t\t\t\t//\n\t\t\t\t// The following table (taken from the RFC) is used for\n\t\t\t\t// decoding.\n\t\t\t\t//\n\t\t\t\t//    Char. 
number range |        UTF-8 octet sequence\n\t\t\t\t//      (hexadecimal)    |              (binary)\n\t\t\t\t//   --------------------+------------------------------------\n\t\t\t\t//   0000 0000-0000 007F | 0xxxxxxx\n\t\t\t\t//   0000 0080-0000 07FF | 110xxxxx 10xxxxxx\n\t\t\t\t//   0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx\n\t\t\t\t//   0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx\n\t\t\t\t//\n\t\t\t\t// Additionally, the characters in the range 0xD800-0xDFFF\n\t\t\t\t// are prohibited as they are reserved for use with UTF-16\n\t\t\t\t// surrogate pairs.\n\n\t\t\t\t// Determine the length of the UTF-8 sequence.\n\t\t\t\toctet := parser.raw_buffer[parser.raw_buffer_pos]\n\t\t\t\tswitch {\n\t\t\t\tcase octet&0x80 == 0x00:\n\t\t\t\t\twidth = 1\n\t\t\t\tcase octet&0xE0 == 0xC0:\n\t\t\t\t\twidth = 2\n\t\t\t\tcase octet&0xF0 == 0xE0:\n\t\t\t\t\twidth = 3\n\t\t\t\tcase octet&0xF8 == 0xF0:\n\t\t\t\t\twidth = 4\n\t\t\t\tdefault:\n\t\t\t\t\t// The leading octet is invalid.\n\t\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\t\"invalid leading UTF-8 octet\",\n\t\t\t\t\t\tparser.offset, int(octet))\n\t\t\t\t}\n\n\t\t\t\t// Check if the raw buffer contains an incomplete character.\n\t\t\t\tif width > raw_unread {\n\t\t\t\t\tif parser.eof {\n\t\t\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\t\t\"incomplete UTF-8 octet sequence\",\n\t\t\t\t\t\t\tparser.offset, -1)\n\t\t\t\t\t}\n\t\t\t\t\tbreak inner\n\t\t\t\t}\n\n\t\t\t\t// Decode the leading octet.\n\t\t\t\tswitch {\n\t\t\t\tcase octet&0x80 == 0x00:\n\t\t\t\t\tvalue = rune(octet & 0x7F)\n\t\t\t\tcase octet&0xE0 == 0xC0:\n\t\t\t\t\tvalue = rune(octet & 0x1F)\n\t\t\t\tcase octet&0xF0 == 0xE0:\n\t\t\t\t\tvalue = rune(octet & 0x0F)\n\t\t\t\tcase octet&0xF8 == 0xF0:\n\t\t\t\t\tvalue = rune(octet & 0x07)\n\t\t\t\tdefault:\n\t\t\t\t\tvalue = 0\n\t\t\t\t}\n\n\t\t\t\t// Check and decode the trailing octets.\n\t\t\t\tfor k := 1; k < width; k++ {\n\t\t\t\t\toctet = 
parser.raw_buffer[parser.raw_buffer_pos+k]\n\n\t\t\t\t\t// Check if the octet is valid.\n\t\t\t\t\tif (octet & 0xC0) != 0x80 {\n\t\t\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\t\t\"invalid trailing UTF-8 octet\",\n\t\t\t\t\t\t\tparser.offset+k, int(octet))\n\t\t\t\t\t}\n\n\t\t\t\t\t// Decode the octet.\n\t\t\t\t\tvalue = (value << 6) + rune(octet&0x3F)\n\t\t\t\t}\n\n\t\t\t\t// Check the length of the sequence against the value.\n\t\t\t\tswitch {\n\t\t\t\tcase width == 1:\n\t\t\t\tcase width == 2 && value >= 0x80:\n\t\t\t\tcase width == 3 && value >= 0x800:\n\t\t\t\tcase width == 4 && value >= 0x10000:\n\t\t\t\tdefault:\n\t\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\t\"invalid length of a UTF-8 sequence\",\n\t\t\t\t\t\tparser.offset, -1)\n\t\t\t\t}\n\n\t\t\t\t// Check the range of the value.\n\t\t\t\tif value >= 0xD800 && value <= 0xDFFF || value > 0x10FFFF {\n\t\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\t\"invalid Unicode character\",\n\t\t\t\t\t\tparser.offset, int(value))\n\t\t\t\t}\n\n\t\t\tcase yaml_UTF16LE_ENCODING, yaml_UTF16BE_ENCODING:\n\t\t\t\tvar low, high int\n\t\t\t\tif parser.encoding == yaml_UTF16LE_ENCODING {\n\t\t\t\t\tlow, high = 0, 1\n\t\t\t\t} else {\n\t\t\t\t\tlow, high = 1, 0\n\t\t\t\t}\n\n\t\t\t\t// The UTF-16 encoding is not as simple as one might\n\t\t\t\t// naively think.  Check RFC 2781\n\t\t\t\t// (http://www.ietf.org/rfc/rfc2781.txt).\n\t\t\t\t//\n\t\t\t\t// Normally, two subsequent bytes describe a Unicode\n\t\t\t\t// character.  
However a special technique (called a\n\t\t\t\t// surrogate pair) is used for specifying character\n\t\t\t\t// values larger than 0xFFFF.\n\t\t\t\t//\n\t\t\t\t// A surrogate pair consists of two pseudo-characters:\n\t\t\t\t//      high surrogate area (0xD800-0xDBFF)\n\t\t\t\t//      low surrogate area (0xDC00-0xDFFF)\n\t\t\t\t//\n\t\t\t\t// The following formulas are used for decoding\n\t\t\t\t// and encoding characters using surrogate pairs:\n\t\t\t\t//\n\t\t\t\t//  U  = U' + 0x10000   (0x01 00 00 <= U <= 0x10 FF FF)\n\t\t\t\t//  U' = yyyyyyyyyyxxxxxxxxxx   (0 <= U' <= 0x0F FF FF)\n\t\t\t\t//  W1 = 110110yyyyyyyyyy\n\t\t\t\t//  W2 = 110111xxxxxxxxxx\n\t\t\t\t//\n\t\t\t\t// where U is the character value, W1 is the high surrogate\n\t\t\t\t// area, W2 is the low surrogate area.\n\n\t\t\t\t// Check for incomplete UTF-16 character.\n\t\t\t\tif raw_unread < 2 {\n\t\t\t\t\tif parser.eof {\n\t\t\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\t\t\"incomplete UTF-16 character\",\n\t\t\t\t\t\t\tparser.offset, -1)\n\t\t\t\t\t}\n\t\t\t\t\tbreak inner\n\t\t\t\t}\n\n\t\t\t\t// Get the character.\n\t\t\t\tvalue = rune(parser.raw_buffer[parser.raw_buffer_pos+low]) +\n\t\t\t\t\t(rune(parser.raw_buffer[parser.raw_buffer_pos+high]) << 8)\n\n\t\t\t\t// Check for unexpected low surrogate area.\n\t\t\t\tif value&0xFC00 == 0xDC00 {\n\t\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\t\"unexpected low surrogate area\",\n\t\t\t\t\t\tparser.offset, int(value))\n\t\t\t\t}\n\n\t\t\t\t// Check for a high surrogate area.\n\t\t\t\tif value&0xFC00 == 0xD800 {\n\t\t\t\t\twidth = 4\n\n\t\t\t\t\t// Check for incomplete surrogate pair.\n\t\t\t\t\tif raw_unread < 4 {\n\t\t\t\t\t\tif parser.eof {\n\t\t\t\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\t\t\t\"incomplete UTF-16 surrogate pair\",\n\t\t\t\t\t\t\t\tparser.offset, -1)\n\t\t\t\t\t\t}\n\t\t\t\t\t\tbreak inner\n\t\t\t\t\t}\n\n\t\t\t\t\t// Get the next character.\n\t\t\t\t\tvalue2 := 
rune(parser.raw_buffer[parser.raw_buffer_pos+low+2]) +\n\t\t\t\t\t\t(rune(parser.raw_buffer[parser.raw_buffer_pos+high+2]) << 8)\n\n\t\t\t\t\t// Check for a low surrogate area.\n\t\t\t\t\tif value2&0xFC00 != 0xDC00 {\n\t\t\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\t\t\"expected low surrogate area\",\n\t\t\t\t\t\t\tparser.offset+2, int(value2))\n\t\t\t\t\t}\n\n\t\t\t\t\t// Generate the value of the surrogate pair.\n\t\t\t\t\tvalue = 0x10000 + ((value & 0x3FF) << 10) + (value2 & 0x3FF)\n\t\t\t\t} else {\n\t\t\t\t\twidth = 2\n\t\t\t\t}\n\n\t\t\tdefault:\n\t\t\t\tpanic(\"impossible\")\n\t\t\t}\n\n\t\t\t// Check if the character is in the allowed range:\n\t\t\t//      #x9 | #xA | #xD | [#x20-#x7E]               (8 bit)\n\t\t\t//      | #x85 | [#xA0-#xD7FF] | [#xE000-#xFFFD]    (16 bit)\n\t\t\t//      | [#x10000-#x10FFFF]                        (32 bit)\n\t\t\tswitch {\n\t\t\tcase value == 0x09:\n\t\t\tcase value == 0x0A:\n\t\t\tcase value == 0x0D:\n\t\t\tcase value >= 0x20 && value <= 0x7E:\n\t\t\tcase value == 0x85:\n\t\t\tcase value >= 0xA0 && value <= 0xD7FF:\n\t\t\tcase value >= 0xE000 && value <= 0xFFFD:\n\t\t\tcase value >= 0x10000 && value <= 0x10FFFF:\n\t\t\tdefault:\n\t\t\t\treturn yaml_parser_set_reader_error(parser,\n\t\t\t\t\t\"control characters are not allowed\",\n\t\t\t\t\tparser.offset, int(value))\n\t\t\t}\n\n\t\t\t// Move the raw pointers.\n\t\t\tparser.raw_buffer_pos += width\n\t\t\tparser.offset += width\n\n\t\t\t// Finally put the character into the buffer.\n\t\t\tif value <= 0x7F {\n\t\t\t\t// 0000 0000-0000 007F . 0xxxxxxx\n\t\t\t\tparser.buffer[buffer_len+0] = byte(value)\n\t\t\t\tbuffer_len += 1\n\t\t\t} else if value <= 0x7FF {\n\t\t\t\t// 0000 0080-0000 07FF . 110xxxxx 10xxxxxx\n\t\t\t\tparser.buffer[buffer_len+0] = byte(0xC0 + (value >> 6))\n\t\t\t\tparser.buffer[buffer_len+1] = byte(0x80 + (value & 0x3F))\n\t\t\t\tbuffer_len += 2\n\t\t\t} else if value <= 0xFFFF {\n\t\t\t\t// 0000 0800-0000 FFFF . 
1110xxxx 10xxxxxx 10xxxxxx\n\t\t\t\tparser.buffer[buffer_len+0] = byte(0xE0 + (value >> 12))\n\t\t\t\tparser.buffer[buffer_len+1] = byte(0x80 + ((value >> 6) & 0x3F))\n\t\t\t\tparser.buffer[buffer_len+2] = byte(0x80 + (value & 0x3F))\n\t\t\t\tbuffer_len += 3\n\t\t\t} else {\n\t\t\t\t// 0001 0000-0010 FFFF . 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx\n\t\t\t\tparser.buffer[buffer_len+0] = byte(0xF0 + (value >> 18))\n\t\t\t\tparser.buffer[buffer_len+1] = byte(0x80 + ((value >> 12) & 0x3F))\n\t\t\t\tparser.buffer[buffer_len+2] = byte(0x80 + ((value >> 6) & 0x3F))\n\t\t\t\tparser.buffer[buffer_len+3] = byte(0x80 + (value & 0x3F))\n\t\t\t\tbuffer_len += 4\n\t\t\t}\n\n\t\t\tparser.unread++\n\t\t}\n\n\t\t// On EOF, put NUL into the buffer and return.\n\t\tif parser.eof {\n\t\t\tparser.buffer[buffer_len] = 0\n\t\t\tbuffer_len++\n\t\t\tparser.unread++\n\t\t\tbreak\n\t\t}\n\t}\n\t// [Go] Read the documentation of this function above. To return true,\n\t// we need to have the given length in the buffer. Not doing that means\n\t// every single check that calls this function to make sure the buffer\n\t// has a given length is Go) panicking; or C) accessing invalid memory.\n\t// This happens here due to the EOF above breaking early.\n\tfor buffer_len < length {\n\t\tparser.buffer[buffer_len] = 0\n\t\tbuffer_len++\n\t}\n\tparser.buffer = parser.buffer[:buffer_len]\n\treturn true\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/resolve.go",
    "content": "//\n// Copyright (c) 2011-2019 Canonical Ltd\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage yaml\n\nimport (\n\t\"encoding/base64\"\n\t\"math\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n)\n\ntype resolveMapItem struct {\n\tvalue interface{}\n\ttag   string\n}\n\nvar resolveTable = make([]byte, 256)\nvar resolveMap = make(map[string]resolveMapItem)\n\nfunc init() {\n\tt := resolveTable\n\tt[int('+')] = 'S' // Sign\n\tt[int('-')] = 'S'\n\tfor _, c := range \"0123456789\" {\n\t\tt[int(c)] = 'D' // Digit\n\t}\n\tfor _, c := range \"yYnNtTfFoO~\" {\n\t\tt[int(c)] = 'M' // In map\n\t}\n\tt[int('.')] = '.' 
// Float (potentially in map)\n\n\tvar resolveMapList = []struct {\n\t\tv   interface{}\n\t\ttag string\n\t\tl   []string\n\t}{\n\t\t{true, boolTag, []string{\"true\", \"True\", \"TRUE\"}},\n\t\t{false, boolTag, []string{\"false\", \"False\", \"FALSE\"}},\n\t\t{nil, nullTag, []string{\"\", \"~\", \"null\", \"Null\", \"NULL\"}},\n\t\t{math.NaN(), floatTag, []string{\".nan\", \".NaN\", \".NAN\"}},\n\t\t{math.Inf(+1), floatTag, []string{\".inf\", \".Inf\", \".INF\"}},\n\t\t{math.Inf(+1), floatTag, []string{\"+.inf\", \"+.Inf\", \"+.INF\"}},\n\t\t{math.Inf(-1), floatTag, []string{\"-.inf\", \"-.Inf\", \"-.INF\"}},\n\t\t{\"<<\", mergeTag, []string{\"<<\"}},\n\t}\n\n\tm := resolveMap\n\tfor _, item := range resolveMapList {\n\t\tfor _, s := range item.l {\n\t\t\tm[s] = resolveMapItem{item.v, item.tag}\n\t\t}\n\t}\n}\n\nconst (\n\tnullTag      = \"!!null\"\n\tboolTag      = \"!!bool\"\n\tstrTag       = \"!!str\"\n\tintTag       = \"!!int\"\n\tfloatTag     = \"!!float\"\n\ttimestampTag = \"!!timestamp\"\n\tseqTag       = \"!!seq\"\n\tmapTag       = \"!!map\"\n\tbinaryTag    = \"!!binary\"\n\tmergeTag     = \"!!merge\"\n)\n\nvar longTags = make(map[string]string)\nvar shortTags = make(map[string]string)\n\nfunc init() {\n\tfor _, stag := range []string{nullTag, boolTag, strTag, intTag, floatTag, timestampTag, seqTag, mapTag, binaryTag, mergeTag} {\n\t\tltag := longTag(stag)\n\t\tlongTags[stag] = ltag\n\t\tshortTags[ltag] = stag\n\t}\n}\n\nconst longTagPrefix = \"tag:yaml.org,2002:\"\n\nfunc shortTag(tag string) string {\n\tif strings.HasPrefix(tag, longTagPrefix) {\n\t\tif stag, ok := shortTags[tag]; ok {\n\t\t\treturn stag\n\t\t}\n\t\treturn \"!!\" + tag[len(longTagPrefix):]\n\t}\n\treturn tag\n}\n\nfunc longTag(tag string) string {\n\tif strings.HasPrefix(tag, \"!!\") {\n\t\tif ltag, ok := longTags[tag]; ok {\n\t\t\treturn ltag\n\t\t}\n\t\treturn longTagPrefix + tag[2:]\n\t}\n\treturn tag\n}\n\nfunc resolvableTag(tag string) bool {\n\tswitch tag {\n\tcase \"\", strTag, 
boolTag, intTag, floatTag, nullTag, timestampTag:\n\t\treturn true\n\t}\n\treturn false\n}\n\nvar yamlStyleFloat = regexp.MustCompile(`^[-+]?(\\.[0-9]+|[0-9]+(\\.[0-9]*)?)([eE][-+]?[0-9]+)?$`)\n\nfunc resolve(tag string, in string) (rtag string, out interface{}) {\n\ttag = shortTag(tag)\n\tif !resolvableTag(tag) {\n\t\treturn tag, in\n\t}\n\n\tdefer func() {\n\t\tswitch tag {\n\t\tcase \"\", rtag, strTag, binaryTag:\n\t\t\treturn\n\t\tcase floatTag:\n\t\t\tif rtag == intTag {\n\t\t\t\tswitch v := out.(type) {\n\t\t\t\tcase int64:\n\t\t\t\t\trtag = floatTag\n\t\t\t\t\tout = float64(v)\n\t\t\t\t\treturn\n\t\t\t\tcase int:\n\t\t\t\t\trtag = floatTag\n\t\t\t\t\tout = float64(v)\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tfailf(\"cannot decode %s `%s` as a %s\", shortTag(rtag), in, shortTag(tag))\n\t}()\n\n\t// Any data is accepted as a !!str or !!binary.\n\t// Otherwise, the prefix is enough of a hint about what it might be.\n\thint := byte('N')\n\tif in != \"\" {\n\t\thint = resolveTable[in[0]]\n\t}\n\tif hint != 0 && tag != strTag && tag != binaryTag {\n\t\t// Handle things we can lookup in a map.\n\t\tif item, ok := resolveMap[in]; ok {\n\t\t\treturn item.tag, item.value\n\t\t}\n\n\t\t// Base 60 floats are a bad idea, were dropped in YAML 1.2, and\n\t\t// are purposefully unsupported here. 
They're still quoted on\n\t\t// the way out for compatibility with other parser, though.\n\n\t\tswitch hint {\n\t\tcase 'M':\n\t\t\t// We've already checked the map above.\n\n\t\tcase '.':\n\t\t\t// Not in the map, so maybe a normal float.\n\t\t\tfloatv, err := strconv.ParseFloat(in, 64)\n\t\t\tif err == nil {\n\t\t\t\treturn floatTag, floatv\n\t\t\t}\n\n\t\tcase 'D', 'S':\n\t\t\t// Int, float, or timestamp.\n\t\t\t// Only try values as a timestamp if the value is unquoted or there's an explicit\n\t\t\t// !!timestamp tag.\n\t\t\tif tag == \"\" || tag == timestampTag {\n\t\t\t\tt, ok := parseTimestamp(in)\n\t\t\t\tif ok {\n\t\t\t\t\treturn timestampTag, t\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tplain := strings.Replace(in, \"_\", \"\", -1)\n\t\t\tintv, err := strconv.ParseInt(plain, 0, 64)\n\t\t\tif err == nil {\n\t\t\t\tif intv == int64(int(intv)) {\n\t\t\t\t\treturn intTag, int(intv)\n\t\t\t\t} else {\n\t\t\t\t\treturn intTag, intv\n\t\t\t\t}\n\t\t\t}\n\t\t\tuintv, err := strconv.ParseUint(plain, 0, 64)\n\t\t\tif err == nil {\n\t\t\t\treturn intTag, uintv\n\t\t\t}\n\t\t\tif yamlStyleFloat.MatchString(plain) {\n\t\t\t\tfloatv, err := strconv.ParseFloat(plain, 64)\n\t\t\t\tif err == nil {\n\t\t\t\t\treturn floatTag, floatv\n\t\t\t\t}\n\t\t\t}\n\t\t\tif strings.HasPrefix(plain, \"0b\") {\n\t\t\t\tintv, err := strconv.ParseInt(plain[2:], 2, 64)\n\t\t\t\tif err == nil {\n\t\t\t\t\tif intv == int64(int(intv)) {\n\t\t\t\t\t\treturn intTag, int(intv)\n\t\t\t\t\t} else {\n\t\t\t\t\t\treturn intTag, intv\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tuintv, err := strconv.ParseUint(plain[2:], 2, 64)\n\t\t\t\tif err == nil {\n\t\t\t\t\treturn intTag, uintv\n\t\t\t\t}\n\t\t\t} else if strings.HasPrefix(plain, \"-0b\") {\n\t\t\t\tintv, err := strconv.ParseInt(\"-\"+plain[3:], 2, 64)\n\t\t\t\tif err == nil {\n\t\t\t\t\tif true || intv == int64(int(intv)) {\n\t\t\t\t\t\treturn intTag, int(intv)\n\t\t\t\t\t} else {\n\t\t\t\t\t\treturn intTag, intv\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t// Octals as 
introduced in version 1.2 of the spec.\n\t\t\t// Octals from the 1.1 spec, spelled as 0777, are still\n\t\t\t// decoded by default in v3 as well for compatibility.\n\t\t\t// May be dropped in v4 depending on how usage evolves.\n\t\t\tif strings.HasPrefix(plain, \"0o\") {\n\t\t\t\tintv, err := strconv.ParseInt(plain[2:], 8, 64)\n\t\t\t\tif err == nil {\n\t\t\t\t\tif intv == int64(int(intv)) {\n\t\t\t\t\t\treturn intTag, int(intv)\n\t\t\t\t\t} else {\n\t\t\t\t\t\treturn intTag, intv\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tuintv, err := strconv.ParseUint(plain[2:], 8, 64)\n\t\t\t\tif err == nil {\n\t\t\t\t\treturn intTag, uintv\n\t\t\t\t}\n\t\t\t} else if strings.HasPrefix(plain, \"-0o\") {\n\t\t\t\tintv, err := strconv.ParseInt(\"-\"+plain[3:], 8, 64)\n\t\t\t\tif err == nil {\n\t\t\t\t\tif true || intv == int64(int(intv)) {\n\t\t\t\t\t\treturn intTag, int(intv)\n\t\t\t\t\t} else {\n\t\t\t\t\t\treturn intTag, intv\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\tdefault:\n\t\t\tpanic(\"internal error: missing handler for resolver table: \" + string(rune(hint)) + \" (with \" + in + \")\")\n\t\t}\n\t}\n\treturn strTag, in\n}\n\n// encodeBase64 encodes s as base64 that is broken up into multiple lines\n// as appropriate for the resulting length.\nfunc encodeBase64(s string) string {\n\tconst lineLen = 70\n\tencLen := base64.StdEncoding.EncodedLen(len(s))\n\tlines := encLen/lineLen + 1\n\tbuf := make([]byte, encLen*2+lines)\n\tin := buf[0:encLen]\n\tout := buf[encLen:]\n\tbase64.StdEncoding.Encode(in, []byte(s))\n\tk := 0\n\tfor i := 0; i < len(in); i += lineLen {\n\t\tj := i + lineLen\n\t\tif j > len(in) {\n\t\t\tj = len(in)\n\t\t}\n\t\tk += copy(out[k:], in[i:j])\n\t\tif lines > 1 {\n\t\t\tout[k] = '\\n'\n\t\t\tk++\n\t\t}\n\t}\n\treturn string(out[:k])\n}\n\n// This is a subset of the formats allowed by the regular expression\n// defined at http://yaml.org/type/timestamp.html.\nvar allowedTimestampFormats = []string{\n\t\"2006-1-2T15:4:5.999999999Z07:00\", // RFC3339Nano with short date 
fields.\n\t\"2006-1-2t15:4:5.999999999Z07:00\", // RFC3339Nano with short date fields and lower-case \"t\".\n\t\"2006-1-2 15:4:5.999999999\",       // space separated with no time zone\n\t\"2006-1-2\",                        // date only\n\t// Notable exception: time.Parse cannot handle: \"2001-12-14 21:59:43.10 -5\"\n\t// from the set of examples.\n}\n\n// parseTimestamp parses s as a timestamp string and\n// returns the timestamp and reports whether it succeeded.\n// Timestamp formats are defined at http://yaml.org/type/timestamp.html\nfunc parseTimestamp(s string) (time.Time, bool) {\n\t// TODO write code to check all the formats supported by\n\t// http://yaml.org/type/timestamp.html instead of using time.Parse.\n\n\t// Quick check: all date formats start with YYYY-.\n\ti := 0\n\tfor ; i < len(s); i++ {\n\t\tif c := s[i]; c < '0' || c > '9' {\n\t\t\tbreak\n\t\t}\n\t}\n\tif i != 4 || i == len(s) || s[i] != '-' {\n\t\treturn time.Time{}, false\n\t}\n\tfor _, format := range allowedTimestampFormats {\n\t\tif t, err := time.Parse(format, s); err == nil {\n\t\t\treturn t, true\n\t\t}\n\t}\n\treturn time.Time{}, false\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/scannerc.go",
    "content": "//\n// Copyright (c) 2011-2019 Canonical Ltd\n// Copyright (c) 2006-2010 Kirill Simonov\n//\n// Permission is hereby granted, free of charge, to any person obtaining a copy of\n// this software and associated documentation files (the \"Software\"), to deal in\n// the Software without restriction, including without limitation the rights to\n// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\n// of the Software, and to permit persons to whom the Software is furnished to do\n// so, subject to the following conditions:\n//\n// The above copyright notice and this permission notice shall be included in all\n// copies or substantial portions of the Software.\n//\n// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n// SOFTWARE.\n\npackage yaml\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n)\n\n// Introduction\n// ************\n//\n// The following notes assume that you are familiar with the YAML specification\n// (http://yaml.org/spec/1.2/spec.html).  We mostly follow it, although in\n// some cases we are less restrictive than it requires.\n//\n// The process of transforming a YAML stream into a sequence of events is\n// divided into two steps: Scanning and Parsing.\n//\n// The Scanner transforms the input stream into a sequence of tokens, while the\n// Parser transforms the sequence of tokens produced by the Scanner into a\n// sequence of parsing events.\n//\n// The Scanner is rather clever and complicated. 
The Parser, on the contrary,\n// is a straightforward implementation of a recursive-descent parser (or,\n// LL(1) parser, as it is usually called).\n//\n// Actually there are two issues of Scanning that might be called \"clever\"; the\n// rest is quite straightforward.  The issues are \"block collection start\" and\n// \"simple keys\".  Both issues are explained below in detail.\n//\n// Here the Scanning step is explained and implemented.  We start with the list\n// of all the tokens produced by the Scanner together with short descriptions.\n//\n// Now, tokens:\n//\n//      STREAM-START(encoding)          # The stream start.\n//      STREAM-END                      # The stream end.\n//      VERSION-DIRECTIVE(major,minor)  # The '%YAML' directive.\n//      TAG-DIRECTIVE(handle,prefix)    # The '%TAG' directive.\n//      DOCUMENT-START                  # '---'\n//      DOCUMENT-END                    # '...'\n//      BLOCK-SEQUENCE-START            # Indentation increase denoting a block\n//      BLOCK-MAPPING-START             # sequence or a block mapping.\n//      BLOCK-END                       # Indentation decrease.\n//      FLOW-SEQUENCE-START             # '['\n//      FLOW-SEQUENCE-END               # ']'\n//      FLOW-MAPPING-START              # '{'\n//      FLOW-MAPPING-END                # '}'\n//      BLOCK-ENTRY                     # '-'\n//      FLOW-ENTRY                      # ','\n//      KEY                             # '?' 
or nothing (simple keys).\n//      VALUE                           # ':'\n//      ALIAS(anchor)                   # '*anchor'\n//      ANCHOR(anchor)                  # '&anchor'\n//      TAG(handle,suffix)              # '!handle!suffix'\n//      SCALAR(value,style)             # A scalar.\n//\n// The following two tokens are \"virtual\" tokens denoting the beginning and the\n// end of the stream:\n//\n//      STREAM-START(encoding)\n//      STREAM-END\n//\n// We pass the information about the input stream encoding with the\n// STREAM-START token.\n//\n// The next two tokens are responsible for tags:\n//\n//      VERSION-DIRECTIVE(major,minor)\n//      TAG-DIRECTIVE(handle,prefix)\n//\n// Example:\n//\n//      %YAML   1.1\n//      %TAG    !   !foo\n//      %TAG    !yaml!  tag:yaml.org,2002:\n//      ---\n//\n// The corresponding sequence of tokens:\n//\n//      STREAM-START(utf-8)\n//      VERSION-DIRECTIVE(1,1)\n//      TAG-DIRECTIVE(\"!\",\"!foo\")\n//      TAG-DIRECTIVE(\"!yaml\",\"tag:yaml.org,2002:\")\n//      DOCUMENT-START\n//      STREAM-END\n//\n// Note that the VERSION-DIRECTIVE and TAG-DIRECTIVE tokens occupy a whole\n// line.\n//\n// The document start and end indicators are represented by:\n//\n//      DOCUMENT-START\n//      DOCUMENT-END\n//\n// Note that if a YAML stream contains an implicit document (without '---'\n// and '...' indicators), no DOCUMENT-START and DOCUMENT-END tokens will be\n// produced.\n//\n// In the following examples, we present whole documents together with the\n// produced tokens.\n//\n//      1. An implicit document:\n//\n//          'a scalar'\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          SCALAR(\"a scalar\",single-quoted)\n//          STREAM-END\n//\n//      2. 
An explicit document:\n//\n//          ---\n//          'a scalar'\n//          ...\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          DOCUMENT-START\n//          SCALAR(\"a scalar\",single-quoted)\n//          DOCUMENT-END\n//          STREAM-END\n//\n//      3. Several documents in a stream:\n//\n//          'a scalar'\n//          ---\n//          'another scalar'\n//          ---\n//          'yet another scalar'\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          SCALAR(\"a scalar\",single-quoted)\n//          DOCUMENT-START\n//          SCALAR(\"another scalar\",single-quoted)\n//          DOCUMENT-START\n//          SCALAR(\"yet another scalar\",single-quoted)\n//          STREAM-END\n//\n// We have already introduced the SCALAR token above.  The following tokens are\n// used to describe aliases, anchors, tag, and scalars:\n//\n//      ALIAS(anchor)\n//      ANCHOR(anchor)\n//      TAG(handle,suffix)\n//      SCALAR(value,style)\n//\n// The following series of examples illustrate the usage of these tokens:\n//\n//      1. A recursive sequence:\n//\n//          &A [ *A ]\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          ANCHOR(\"A\")\n//          FLOW-SEQUENCE-START\n//          ALIAS(\"A\")\n//          FLOW-SEQUENCE-END\n//          STREAM-END\n//\n//      2. A tagged scalar:\n//\n//          !!float \"3.14\"  # A good approximation.\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          TAG(\"!!\",\"float\")\n//          SCALAR(\"3.14\",double-quoted)\n//          STREAM-END\n//\n//      3. 
Various scalar styles:\n//\n//          --- # Implicit empty plain scalars do not produce tokens.\n//          --- a plain scalar\n//          --- 'a single-quoted scalar'\n//          --- \"a double-quoted scalar\"\n//          --- |-\n//            a literal scalar\n//          --- >-\n//            a folded\n//            scalar\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          DOCUMENT-START\n//          DOCUMENT-START\n//          SCALAR(\"a plain scalar\",plain)\n//          DOCUMENT-START\n//          SCALAR(\"a single-quoted scalar\",single-quoted)\n//          DOCUMENT-START\n//          SCALAR(\"a double-quoted scalar\",double-quoted)\n//          DOCUMENT-START\n//          SCALAR(\"a literal scalar\",literal)\n//          DOCUMENT-START\n//          SCALAR(\"a folded scalar\",folded)\n//          STREAM-END\n//\n// Now it's time to review collection-related tokens. We will start with\n// flow collections:\n//\n//      FLOW-SEQUENCE-START\n//      FLOW-SEQUENCE-END\n//      FLOW-MAPPING-START\n//      FLOW-MAPPING-END\n//      FLOW-ENTRY\n//      KEY\n//      VALUE\n//\n// The tokens FLOW-SEQUENCE-START, FLOW-SEQUENCE-END, FLOW-MAPPING-START, and\n// FLOW-MAPPING-END represent the indicators '[', ']', '{', and '}'\n// correspondingly.  FLOW-ENTRY represents the ',' indicator.  Finally the\n// indicators '?' and ':', which are used for denoting mapping keys and values,\n// are represented by the KEY and VALUE tokens.\n//\n// The following examples show flow collections:\n//\n//      1. A flow sequence:\n//\n//          [item 1, item 2, item 3]\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          FLOW-SEQUENCE-START\n//          SCALAR(\"item 1\",plain)\n//          FLOW-ENTRY\n//          SCALAR(\"item 2\",plain)\n//          FLOW-ENTRY\n//          SCALAR(\"item 3\",plain)\n//          FLOW-SEQUENCE-END\n//          STREAM-END\n//\n//      2. 
A flow mapping:\n//\n//          {\n//              a simple key: a value,  # Note that the KEY token is produced.\n//              ? a complex key: another value,\n//          }\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          FLOW-MAPPING-START\n//          KEY\n//          SCALAR(\"a simple key\",plain)\n//          VALUE\n//          SCALAR(\"a value\",plain)\n//          FLOW-ENTRY\n//          KEY\n//          SCALAR(\"a complex key\",plain)\n//          VALUE\n//          SCALAR(\"another value\",plain)\n//          FLOW-ENTRY\n//          FLOW-MAPPING-END\n//          STREAM-END\n//\n// A simple key is a key which is not denoted by the '?' indicator.  Note that\n// the Scanner still produces the KEY token whenever it encounters a simple key.\n//\n// For scanning block collections, the following tokens are used (note that we\n// repeat KEY and VALUE here):\n//\n//      BLOCK-SEQUENCE-START\n//      BLOCK-MAPPING-START\n//      BLOCK-END\n//      BLOCK-ENTRY\n//      KEY\n//      VALUE\n//\n// The tokens BLOCK-SEQUENCE-START and BLOCK-MAPPING-START denote indentation\n// increase that precedes a block collection (cf. the INDENT token in Python).\n// The token BLOCK-END denotes indentation decrease that ends a block collection\n// (cf. the DEDENT token in Python).  However, YAML has some syntax peculiarities\n// that make detection of these tokens more complex.\n//\n// The tokens BLOCK-ENTRY, KEY, and VALUE are used to represent the indicators\n// '-', '?', and ':' correspondingly.\n//\n// The following examples show how the tokens BLOCK-SEQUENCE-START,\n// BLOCK-MAPPING-START, and BLOCK-END are emitted by the Scanner:\n//\n//      1. 
Block sequences:\n//\n//          - item 1\n//          - item 2\n//          -\n//            - item 3.1\n//            - item 3.2\n//          -\n//            key 1: value 1\n//            key 2: value 2\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          BLOCK-SEQUENCE-START\n//          BLOCK-ENTRY\n//          SCALAR(\"item 1\",plain)\n//          BLOCK-ENTRY\n//          SCALAR(\"item 2\",plain)\n//          BLOCK-ENTRY\n//          BLOCK-SEQUENCE-START\n//          BLOCK-ENTRY\n//          SCALAR(\"item 3.1\",plain)\n//          BLOCK-ENTRY\n//          SCALAR(\"item 3.2\",plain)\n//          BLOCK-END\n//          BLOCK-ENTRY\n//          BLOCK-MAPPING-START\n//          KEY\n//          SCALAR(\"key 1\",plain)\n//          VALUE\n//          SCALAR(\"value 1\",plain)\n//          KEY\n//          SCALAR(\"key 2\",plain)\n//          VALUE\n//          SCALAR(\"value 2\",plain)\n//          BLOCK-END\n//          BLOCK-END\n//          STREAM-END\n//\n//      2. Block mappings:\n//\n//          a simple key: a value   # The KEY token is produced here.\n//          ? 
a complex key\n//          : another value\n//          a mapping:\n//            key 1: value 1\n//            key 2: value 2\n//          a sequence:\n//            - item 1\n//            - item 2\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          BLOCK-MAPPING-START\n//          KEY\n//          SCALAR(\"a simple key\",plain)\n//          VALUE\n//          SCALAR(\"a value\",plain)\n//          KEY\n//          SCALAR(\"a complex key\",plain)\n//          VALUE\n//          SCALAR(\"another value\",plain)\n//          KEY\n//          SCALAR(\"a mapping\",plain)\n//          VALUE\n//          BLOCK-MAPPING-START\n//          KEY\n//          SCALAR(\"key 1\",plain)\n//          VALUE\n//          SCALAR(\"value 1\",plain)\n//          KEY\n//          SCALAR(\"key 2\",plain)\n//          VALUE\n//          SCALAR(\"value 2\",plain)\n//          BLOCK-END\n//          KEY\n//          SCALAR(\"a sequence\",plain)\n//          VALUE\n//          BLOCK-SEQUENCE-START\n//          BLOCK-ENTRY\n//          SCALAR(\"item 1\",plain)\n//          BLOCK-ENTRY\n//          SCALAR(\"item 2\",plain)\n//          BLOCK-END\n//          BLOCK-END\n//          STREAM-END\n//\n// YAML does not always require a new block collection to start on a new\n// line.  If the current line contains only '-', '?', and ':' indicators, a new\n// block collection may start at the current line.  The following examples\n// illustrate this case:\n//\n//      1. Collections in a sequence:\n//\n//          - - item 1\n//            - item 2\n//          - key 1: value 1\n//            key 2: value 2\n//          - ? 
complex key\n//            : complex value\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          BLOCK-SEQUENCE-START\n//          BLOCK-ENTRY\n//          BLOCK-SEQUENCE-START\n//          BLOCK-ENTRY\n//          SCALAR(\"item 1\",plain)\n//          BLOCK-ENTRY\n//          SCALAR(\"item 2\",plain)\n//          BLOCK-END\n//          BLOCK-ENTRY\n//          BLOCK-MAPPING-START\n//          KEY\n//          SCALAR(\"key 1\",plain)\n//          VALUE\n//          SCALAR(\"value 1\",plain)\n//          KEY\n//          SCALAR(\"key 2\",plain)\n//          VALUE\n//          SCALAR(\"value 2\",plain)\n//          BLOCK-END\n//          BLOCK-ENTRY\n//          BLOCK-MAPPING-START\n//          KEY\n//          SCALAR(\"complex key\")\n//          VALUE\n//          SCALAR(\"complex value\")\n//          BLOCK-END\n//          BLOCK-END\n//          STREAM-END\n//\n//      2. Collections in a mapping:\n//\n//          ? a sequence\n//          : - item 1\n//            - item 2\n//          ? a mapping\n//          : key 1: value 1\n//            key 2: value 2\n//\n//      Tokens:\n//\n//          STREAM-START(utf-8)\n//          BLOCK-MAPPING-START\n//          KEY\n//          SCALAR(\"a sequence\",plain)\n//          VALUE\n//          BLOCK-SEQUENCE-START\n//          BLOCK-ENTRY\n//          SCALAR(\"item 1\",plain)\n//          BLOCK-ENTRY\n//          SCALAR(\"item 2\",plain)\n//          BLOCK-END\n//          KEY\n//          SCALAR(\"a mapping\",plain)\n//          VALUE\n//          BLOCK-MAPPING-START\n//          KEY\n//          SCALAR(\"key 1\",plain)\n//          VALUE\n//          SCALAR(\"value 1\",plain)\n//          KEY\n//          SCALAR(\"key 2\",plain)\n//          VALUE\n//          SCALAR(\"value 2\",plain)\n//          BLOCK-END\n//          BLOCK-END\n//          STREAM-END\n//\n// YAML also permits non-indented sequences if they are included into a block\n// mapping.  
In this case, the token BLOCK-SEQUENCE-START is not produced:\n//\n//      key:\n//      - item 1    # BLOCK-SEQUENCE-START is NOT produced here.\n//      - item 2\n//\n// Tokens:\n//\n//      STREAM-START(utf-8)\n//      BLOCK-MAPPING-START\n//      KEY\n//      SCALAR(\"key\",plain)\n//      VALUE\n//      BLOCK-ENTRY\n//      SCALAR(\"item 1\",plain)\n//      BLOCK-ENTRY\n//      SCALAR(\"item 2\",plain)\n//      BLOCK-END\n//\n\n// Ensure that the buffer contains the required number of characters.\n// Return true on success, false on failure (reader error or memory error).\nfunc cache(parser *yaml_parser_t, length int) bool {\n\t// [Go] This was inlined: !cache(A, B) -> unread < B && !update(A, B)\n\treturn parser.unread >= length || yaml_parser_update_buffer(parser, length)\n}\n\n// Advance the buffer pointer.\nfunc skip(parser *yaml_parser_t) {\n\tif !is_blank(parser.buffer, parser.buffer_pos) {\n\t\tparser.newlines = 0\n\t}\n\tparser.mark.index++\n\tparser.mark.column++\n\tparser.unread--\n\tparser.buffer_pos += width(parser.buffer[parser.buffer_pos])\n}\n\nfunc skip_line(parser *yaml_parser_t) {\n\tif is_crlf(parser.buffer, parser.buffer_pos) {\n\t\tparser.mark.index += 2\n\t\tparser.mark.column = 0\n\t\tparser.mark.line++\n\t\tparser.unread -= 2\n\t\tparser.buffer_pos += 2\n\t\tparser.newlines++\n\t} else if is_break(parser.buffer, parser.buffer_pos) {\n\t\tparser.mark.index++\n\t\tparser.mark.column = 0\n\t\tparser.mark.line++\n\t\tparser.unread--\n\t\tparser.buffer_pos += width(parser.buffer[parser.buffer_pos])\n\t\tparser.newlines++\n\t}\n}\n\n// Copy a character to a string buffer and advance pointers.\nfunc read(parser *yaml_parser_t, s []byte) []byte {\n\tif !is_blank(parser.buffer, parser.buffer_pos) {\n\t\tparser.newlines = 0\n\t}\n\tw := width(parser.buffer[parser.buffer_pos])\n\tif w == 0 {\n\t\tpanic(\"invalid character sequence\")\n\t}\n\tif len(s) == 0 {\n\t\ts = make([]byte, 0, 32)\n\t}\n\tif w == 1 && len(s)+w <= cap(s) {\n\t\ts = 
s[:len(s)+1]\n\t\ts[len(s)-1] = parser.buffer[parser.buffer_pos]\n\t\tparser.buffer_pos++\n\t} else {\n\t\ts = append(s, parser.buffer[parser.buffer_pos:parser.buffer_pos+w]...)\n\t\tparser.buffer_pos += w\n\t}\n\tparser.mark.index++\n\tparser.mark.column++\n\tparser.unread--\n\treturn s\n}\n\n// Copy a line break character to a string buffer and advance pointers.\nfunc read_line(parser *yaml_parser_t, s []byte) []byte {\n\tbuf := parser.buffer\n\tpos := parser.buffer_pos\n\tswitch {\n\tcase buf[pos] == '\\r' && buf[pos+1] == '\\n':\n\t\t// CR LF . LF\n\t\ts = append(s, '\\n')\n\t\tparser.buffer_pos += 2\n\t\tparser.mark.index++\n\t\tparser.unread--\n\tcase buf[pos] == '\\r' || buf[pos] == '\\n':\n\t\t// CR|LF . LF\n\t\ts = append(s, '\\n')\n\t\tparser.buffer_pos += 1\n\tcase buf[pos] == '\\xC2' && buf[pos+1] == '\\x85':\n\t\t// NEL . LF\n\t\ts = append(s, '\\n')\n\t\tparser.buffer_pos += 2\n\tcase buf[pos] == '\\xE2' && buf[pos+1] == '\\x80' && (buf[pos+2] == '\\xA8' || buf[pos+2] == '\\xA9'):\n\t\t// LS|PS . 
LS|PS\n\t\ts = append(s, buf[parser.buffer_pos:pos+3]...)\n\t\tparser.buffer_pos += 3\n\tdefault:\n\t\treturn s\n\t}\n\tparser.mark.index++\n\tparser.mark.column = 0\n\tparser.mark.line++\n\tparser.unread--\n\tparser.newlines++\n\treturn s\n}\n\n// Get the next token.\nfunc yaml_parser_scan(parser *yaml_parser_t, token *yaml_token_t) bool {\n\t// Erase the token object.\n\t*token = yaml_token_t{} // [Go] Is this necessary?\n\n\t// No tokens after STREAM-END or error.\n\tif parser.stream_end_produced || parser.error != yaml_NO_ERROR {\n\t\treturn true\n\t}\n\n\t// Ensure that the tokens queue contains enough tokens.\n\tif !parser.token_available {\n\t\tif !yaml_parser_fetch_more_tokens(parser) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Fetch the next token from the queue.\n\t*token = parser.tokens[parser.tokens_head]\n\tparser.tokens_head++\n\tparser.tokens_parsed++\n\tparser.token_available = false\n\n\tif token.typ == yaml_STREAM_END_TOKEN {\n\t\tparser.stream_end_produced = true\n\t}\n\treturn true\n}\n\n// Set the scanner error and return false.\nfunc yaml_parser_set_scanner_error(parser *yaml_parser_t, context string, context_mark yaml_mark_t, problem string) bool {\n\tparser.error = yaml_SCANNER_ERROR\n\tparser.context = context\n\tparser.context_mark = context_mark\n\tparser.problem = problem\n\tparser.problem_mark = parser.mark\n\treturn false\n}\n\nfunc yaml_parser_set_scanner_tag_error(parser *yaml_parser_t, directive bool, context_mark yaml_mark_t, problem string) bool {\n\tcontext := \"while parsing a tag\"\n\tif directive {\n\t\tcontext = \"while parsing a %TAG directive\"\n\t}\n\treturn yaml_parser_set_scanner_error(parser, context, context_mark, problem)\n}\n\nfunc trace(args ...interface{}) func() {\n\tpargs := append([]interface{}{\"+++\"}, args...)\n\tfmt.Println(pargs...)\n\tpargs = append([]interface{}{\"---\"}, args...)\n\treturn func() { fmt.Println(pargs...) 
}\n}\n\n// Ensure that the tokens queue contains at least one token which can be\n// returned to the Parser.\nfunc yaml_parser_fetch_more_tokens(parser *yaml_parser_t) bool {\n\t// While we need more tokens to fetch, do it.\n\tfor {\n\t\t// [Go] The comment parsing logic requires a lookahead of two tokens\n\t\t// so that foot comments may be parsed in time of associating them\n\t\t// with the tokens that are parsed before them, and also for line\n\t\t// comments to be transformed into head comments in some edge cases.\n\t\tif parser.tokens_head < len(parser.tokens)-2 {\n\t\t\t// If a potential simple key is at the head position, we need to fetch\n\t\t\t// the next token to disambiguate it.\n\t\t\thead_tok_idx, ok := parser.simple_keys_by_tok[parser.tokens_parsed]\n\t\t\tif !ok {\n\t\t\t\tbreak\n\t\t\t} else if valid, ok := yaml_simple_key_is_valid(parser, &parser.simple_keys[head_tok_idx]); !ok {\n\t\t\t\treturn false\n\t\t\t} else if !valid {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\t// Fetch the next token.\n\t\tif !yaml_parser_fetch_next_token(parser) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tparser.token_available = true\n\treturn true\n}\n\n// The dispatcher for token fetchers.\nfunc yaml_parser_fetch_next_token(parser *yaml_parser_t) (ok bool) {\n\t// Ensure that the buffer is initialized.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\n\t// Check if we just started scanning.  
Fetch STREAM-START then.\n\tif !parser.stream_start_produced {\n\t\treturn yaml_parser_fetch_stream_start(parser)\n\t}\n\n\tscan_mark := parser.mark\n\n\t// Eat whitespaces and comments until we reach the next token.\n\tif !yaml_parser_scan_to_next_token(parser) {\n\t\treturn false\n\t}\n\n\t// [Go] While unrolling indents, transform the head comments of prior\n\t// indentation levels observed after scan_start into foot comments at\n\t// the respective indexes.\n\n\t// Check the indentation level against the current column.\n\tif !yaml_parser_unroll_indent(parser, parser.mark.column, scan_mark) {\n\t\treturn false\n\t}\n\n\t// Ensure that the buffer contains at least 4 characters.  4 is the length\n\t// of the longest indicators ('--- ' and '... ').\n\tif parser.unread < 4 && !yaml_parser_update_buffer(parser, 4) {\n\t\treturn false\n\t}\n\n\t// Is it the end of the stream?\n\tif is_z(parser.buffer, parser.buffer_pos) {\n\t\treturn yaml_parser_fetch_stream_end(parser)\n\t}\n\n\t// Is it a directive?\n\tif parser.mark.column == 0 && parser.buffer[parser.buffer_pos] == '%' {\n\t\treturn yaml_parser_fetch_directive(parser)\n\t}\n\n\tbuf := parser.buffer\n\tpos := parser.buffer_pos\n\n\t// Is it the document start indicator?\n\tif parser.mark.column == 0 && buf[pos] == '-' && buf[pos+1] == '-' && buf[pos+2] == '-' && is_blankz(buf, pos+3) {\n\t\treturn yaml_parser_fetch_document_indicator(parser, yaml_DOCUMENT_START_TOKEN)\n\t}\n\n\t// Is it the document end indicator?\n\tif parser.mark.column == 0 && buf[pos] == '.' && buf[pos+1] == '.' && buf[pos+2] == '.' 
&& is_blankz(buf, pos+3) {\n\t\treturn yaml_parser_fetch_document_indicator(parser, yaml_DOCUMENT_END_TOKEN)\n\t}\n\n\tcomment_mark := parser.mark\n\tif len(parser.tokens) > 0 && (parser.flow_level == 0 && buf[pos] == ':' || parser.flow_level > 0 && buf[pos] == ',') {\n\t\t// Associate any following comments with the prior token.\n\t\tcomment_mark = parser.tokens[len(parser.tokens)-1].start_mark\n\t}\n\tdefer func() {\n\t\tif !ok {\n\t\t\treturn\n\t\t}\n\t\tif len(parser.tokens) > 0 && parser.tokens[len(parser.tokens)-1].typ == yaml_BLOCK_ENTRY_TOKEN {\n\t\t\t// Sequence indicators alone have no line comments. It becomes\n\t\t\t// a head comment for whatever follows.\n\t\t\treturn\n\t\t}\n\t\tif !yaml_parser_scan_line_comment(parser, comment_mark) {\n\t\t\tok = false\n\t\t\treturn\n\t\t}\n\t}()\n\n\t// Is it the flow sequence start indicator?\n\tif buf[pos] == '[' {\n\t\treturn yaml_parser_fetch_flow_collection_start(parser, yaml_FLOW_SEQUENCE_START_TOKEN)\n\t}\n\n\t// Is it the flow mapping start indicator?\n\tif parser.buffer[parser.buffer_pos] == '{' {\n\t\treturn yaml_parser_fetch_flow_collection_start(parser, yaml_FLOW_MAPPING_START_TOKEN)\n\t}\n\n\t// Is it the flow sequence end indicator?\n\tif parser.buffer[parser.buffer_pos] == ']' {\n\t\treturn yaml_parser_fetch_flow_collection_end(parser,\n\t\t\tyaml_FLOW_SEQUENCE_END_TOKEN)\n\t}\n\n\t// Is it the flow mapping end indicator?\n\tif parser.buffer[parser.buffer_pos] == '}' {\n\t\treturn yaml_parser_fetch_flow_collection_end(parser,\n\t\t\tyaml_FLOW_MAPPING_END_TOKEN)\n\t}\n\n\t// Is it the flow entry indicator?\n\tif parser.buffer[parser.buffer_pos] == ',' {\n\t\treturn yaml_parser_fetch_flow_entry(parser)\n\t}\n\n\t// Is it the block entry indicator?\n\tif parser.buffer[parser.buffer_pos] == '-' && is_blankz(parser.buffer, parser.buffer_pos+1) {\n\t\treturn yaml_parser_fetch_block_entry(parser)\n\t}\n\n\t// Is it the key indicator?\n\tif parser.buffer[parser.buffer_pos] == '?' 
&& (parser.flow_level > 0 || is_blankz(parser.buffer, parser.buffer_pos+1)) {\n\t\treturn yaml_parser_fetch_key(parser)\n\t}\n\n\t// Is it the value indicator?\n\tif parser.buffer[parser.buffer_pos] == ':' && (parser.flow_level > 0 || is_blankz(parser.buffer, parser.buffer_pos+1)) {\n\t\treturn yaml_parser_fetch_value(parser)\n\t}\n\n\t// Is it an alias?\n\tif parser.buffer[parser.buffer_pos] == '*' {\n\t\treturn yaml_parser_fetch_anchor(parser, yaml_ALIAS_TOKEN)\n\t}\n\n\t// Is it an anchor?\n\tif parser.buffer[parser.buffer_pos] == '&' {\n\t\treturn yaml_parser_fetch_anchor(parser, yaml_ANCHOR_TOKEN)\n\t}\n\n\t// Is it a tag?\n\tif parser.buffer[parser.buffer_pos] == '!' {\n\t\treturn yaml_parser_fetch_tag(parser)\n\t}\n\n\t// Is it a literal scalar?\n\tif parser.buffer[parser.buffer_pos] == '|' && parser.flow_level == 0 {\n\t\treturn yaml_parser_fetch_block_scalar(parser, true)\n\t}\n\n\t// Is it a folded scalar?\n\tif parser.buffer[parser.buffer_pos] == '>' && parser.flow_level == 0 {\n\t\treturn yaml_parser_fetch_block_scalar(parser, false)\n\t}\n\n\t// Is it a single-quoted scalar?\n\tif parser.buffer[parser.buffer_pos] == '\\'' {\n\t\treturn yaml_parser_fetch_flow_scalar(parser, true)\n\t}\n\n\t// Is it a double-quoted scalar?\n\tif parser.buffer[parser.buffer_pos] == '\"' {\n\t\treturn yaml_parser_fetch_flow_scalar(parser, false)\n\t}\n\n\t// Is it a plain scalar?\n\t//\n\t// A plain scalar may start with any non-blank characters except\n\t//\n\t//      '-', '?', ':', ',', '[', ']', '{', '}',\n\t//      '#', '&', '*', '!', '|', '>', '\\'', '\\\"',\n\t//      '%', '@', '`'.\n\t//\n\t// In the block context (and, for the '-' indicator, in the flow context\n\t// too), it may also start with the characters\n\t//\n\t//      '-', '?', ':'\n\t//\n\t// if it is followed by a non-space character.\n\t//\n\t// The last rule is more restrictive than the specification requires.\n\t// [Go] TODO Make this logic more reasonable.\n\t//switch parser.buffer[parser.buffer_pos] 
{\n\t//case '-', '?', ':', ',', '?', '-', ',', ':', ']', '[', '}', '{', '&', '#', '!', '*', '>', '|', '\"', '\\'', '@', '%', '-', '`':\n\t//}\n\tif !(is_blankz(parser.buffer, parser.buffer_pos) || parser.buffer[parser.buffer_pos] == '-' ||\n\t\tparser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == ':' ||\n\t\tparser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == '[' ||\n\t\tparser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '{' ||\n\t\tparser.buffer[parser.buffer_pos] == '}' || parser.buffer[parser.buffer_pos] == '#' ||\n\t\tparser.buffer[parser.buffer_pos] == '&' || parser.buffer[parser.buffer_pos] == '*' ||\n\t\tparser.buffer[parser.buffer_pos] == '!' || parser.buffer[parser.buffer_pos] == '|' ||\n\t\tparser.buffer[parser.buffer_pos] == '>' || parser.buffer[parser.buffer_pos] == '\\'' ||\n\t\tparser.buffer[parser.buffer_pos] == '\"' || parser.buffer[parser.buffer_pos] == '%' ||\n\t\tparser.buffer[parser.buffer_pos] == '@' || parser.buffer[parser.buffer_pos] == '`') ||\n\t\t(parser.buffer[parser.buffer_pos] == '-' && !is_blank(parser.buffer, parser.buffer_pos+1)) ||\n\t\t(parser.flow_level == 0 &&\n\t\t\t(parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == ':') &&\n\t\t\t!is_blankz(parser.buffer, parser.buffer_pos+1)) {\n\t\treturn yaml_parser_fetch_plain_scalar(parser)\n\t}\n\n\t// If we don't determine the token type so far, it is an error.\n\treturn yaml_parser_set_scanner_error(parser,\n\t\t\"while scanning for the next token\", parser.mark,\n\t\t\"found character that cannot start any token\")\n}\n\nfunc yaml_simple_key_is_valid(parser *yaml_parser_t, simple_key *yaml_simple_key_t) (valid, ok bool) {\n\tif !simple_key.possible {\n\t\treturn false, true\n\t}\n\n\t// The 1.2 specification says:\n\t//\n\t//     \"If the ? indicator is omitted, parsing needs to see past the\n\t//     implicit key to recognize it as such. 
To limit the amount of\n\t//     lookahead required, the “:” indicator must appear at most 1024\n\t//     Unicode characters beyond the start of the key. In addition, the key\n\t//     is restricted to a single line.\"\n\t//\n\tif simple_key.mark.line < parser.mark.line || simple_key.mark.index+1024 < parser.mark.index {\n\t\t// Check if the potential simple key to be removed is required.\n\t\tif simple_key.required {\n\t\t\treturn false, yaml_parser_set_scanner_error(parser,\n\t\t\t\t\"while scanning a simple key\", simple_key.mark,\n\t\t\t\t\"could not find expected ':'\")\n\t\t}\n\t\tsimple_key.possible = false\n\t\treturn false, true\n\t}\n\treturn true, true\n}\n\n// Check if a simple key may start at the current position and add it if\n// needed.\nfunc yaml_parser_save_simple_key(parser *yaml_parser_t) bool {\n\t// A simple key is required at the current position if the scanner is in\n\t// the block context and the current column coincides with the indentation\n\t// level.\n\n\trequired := parser.flow_level == 0 && parser.indent == parser.mark.column\n\n\t//\n\t// If the current position may start a simple key, save it.\n\t//\n\tif parser.simple_key_allowed {\n\t\tsimple_key := yaml_simple_key_t{\n\t\t\tpossible:     true,\n\t\t\trequired:     required,\n\t\t\ttoken_number: parser.tokens_parsed + (len(parser.tokens) - parser.tokens_head),\n\t\t\tmark:         parser.mark,\n\t\t}\n\n\t\tif !yaml_parser_remove_simple_key(parser) {\n\t\t\treturn false\n\t\t}\n\t\tparser.simple_keys[len(parser.simple_keys)-1] = simple_key\n\t\tparser.simple_keys_by_tok[simple_key.token_number] = len(parser.simple_keys) - 1\n\t}\n\treturn true\n}\n\n// Remove a potential simple key at the current flow level.\nfunc yaml_parser_remove_simple_key(parser *yaml_parser_t) bool {\n\ti := len(parser.simple_keys) - 1\n\tif parser.simple_keys[i].possible {\n\t\t// If the key is required, it is an error.\n\t\tif parser.simple_keys[i].required {\n\t\t\treturn 
yaml_parser_set_scanner_error(parser,\n\t\t\t\t"while scanning a simple key", parser.simple_keys[i].mark,\n\t\t\t\t"could not find expected ':'")\n\t\t}\n\t\t// Remove the key from the stack.\n\t\tparser.simple_keys[i].possible = false\n\t\tdelete(parser.simple_keys_by_tok, parser.simple_keys[i].token_number)\n\t}\n\treturn true\n}\n\n// max_flow_level limits the flow_level\nconst max_flow_level = 10000\n\n// Increase the flow level and resize the simple key list if needed.\nfunc yaml_parser_increase_flow_level(parser *yaml_parser_t) bool {\n\t// Reset the simple key on the next level.\n\tparser.simple_keys = append(parser.simple_keys, yaml_simple_key_t{\n\t\tpossible:     false,\n\t\trequired:     false,\n\t\ttoken_number: parser.tokens_parsed + (len(parser.tokens) - parser.tokens_head),\n\t\tmark:         parser.mark,\n\t})\n\n\t// Increase the flow level.\n\tparser.flow_level++\n\tif parser.flow_level > max_flow_level {\n\t\treturn yaml_parser_set_scanner_error(parser,\n\t\t\t"while increasing flow level", parser.simple_keys[len(parser.simple_keys)-1].mark,\n\t\t\tfmt.Sprintf("exceeded max depth of %d", max_flow_level))\n\t}\n\treturn true\n}\n\n// Decrease the flow level.\nfunc yaml_parser_decrease_flow_level(parser *yaml_parser_t) bool {\n\tif parser.flow_level > 0 {\n\t\tparser.flow_level--\n\t\tlast := len(parser.simple_keys) - 1\n\t\tdelete(parser.simple_keys_by_tok, parser.simple_keys[last].token_number)\n\t\tparser.simple_keys = parser.simple_keys[:last]\n\t}\n\treturn true\n}\n\n// max_indents limits the indents stack size\nconst max_indents = 10000\n\n// Push the current indentation level to the stack and set the new level if\n// the current column is greater than the indentation level.  
In this case,\n// append or insert the specified token into the token queue.\nfunc yaml_parser_roll_indent(parser *yaml_parser_t, column, number int, typ yaml_token_type_t, mark yaml_mark_t) bool {\n\t// In the flow context, do nothing.\n\tif parser.flow_level > 0 {\n\t\treturn true\n\t}\n\n\tif parser.indent < column {\n\t\t// Push the current indentation level to the stack and set the new\n\t\t// indentation level.\n\t\tparser.indents = append(parser.indents, parser.indent)\n\t\tparser.indent = column\n\t\tif len(parser.indents) > max_indents {\n\t\t\treturn yaml_parser_set_scanner_error(parser,\n\t\t\t\t"while increasing indent level", parser.simple_keys[len(parser.simple_keys)-1].mark,\n\t\t\t\tfmt.Sprintf("exceeded max depth of %d", max_indents))\n\t\t}\n\n\t\t// Create a token and insert it into the queue.\n\t\ttoken := yaml_token_t{\n\t\t\ttyp:        typ,\n\t\t\tstart_mark: mark,\n\t\t\tend_mark:   mark,\n\t\t}\n\t\tif number > -1 {\n\t\t\tnumber -= parser.tokens_parsed\n\t\t}\n\t\tyaml_insert_token(parser, number, &token)\n\t}\n\treturn true\n}\n\n// Pop indentation levels from the indents stack until the current level\n// becomes less than or equal to the column.  For each indentation level, append\n// the BLOCK-END token.\nfunc yaml_parser_unroll_indent(parser *yaml_parser_t, column int, scan_mark yaml_mark_t) bool {\n\t// In the flow context, do nothing.\n\tif parser.flow_level > 0 {\n\t\treturn true\n\t}\n\n\tblock_mark := scan_mark\n\tblock_mark.index--\n\n\t// Loop through the indentation levels in the stack.\n\tfor parser.indent > column {\n\n\t\t// [Go] Reposition the end token before potential following\n\t\t//      foot comments of parent blocks. 
For that, search\n\t\t//      backwards for recent comments that were at the same\n\t\t//      indent as the block that is ending now.\n\t\tstop_index := block_mark.index\n\t\tfor i := len(parser.comments) - 1; i >= 0; i-- {\n\t\t\tcomment := &parser.comments[i]\n\n\t\t\tif comment.end_mark.index < stop_index {\n\t\t\t\t// Don't go back beyond the start of the comment/whitespace scan, unless column < 0.\n\t\t\t\t// If requested indent column is < 0, then the document is over and everything else\n\t\t\t\t// is a foot anyway.\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tif comment.start_mark.column == parser.indent+1 {\n\t\t\t\t// This is a good match. But maybe there's a former comment\n\t\t\t\t// at that same indent level, so keep searching.\n\t\t\t\tblock_mark = comment.start_mark\n\t\t\t}\n\n\t\t\t// While the end of the former comment matches with\n\t\t\t// the start of the following one, we know there's\n\t\t\t// nothing in between and scanning is still safe.\n\t\t\tstop_index = comment.scan_mark.index\n\t\t}\n\n\t\t// Create a token and append it to the queue.\n\t\ttoken := yaml_token_t{\n\t\t\ttyp:        yaml_BLOCK_END_TOKEN,\n\t\t\tstart_mark: block_mark,\n\t\t\tend_mark:   block_mark,\n\t\t}\n\t\tyaml_insert_token(parser, -1, &token)\n\n\t\t// Pop the indentation level.\n\t\tparser.indent = parser.indents[len(parser.indents)-1]\n\t\tparser.indents = parser.indents[:len(parser.indents)-1]\n\t}\n\treturn true\n}\n\n// Initialize the scanner and produce the STREAM-START token.\nfunc yaml_parser_fetch_stream_start(parser *yaml_parser_t) bool {\n\n\t// Set the initial indentation.\n\tparser.indent = -1\n\n\t// Initialize the simple key stack.\n\tparser.simple_keys = append(parser.simple_keys, yaml_simple_key_t{})\n\n\tparser.simple_keys_by_tok = make(map[int]int)\n\n\t// A simple key is allowed at the beginning of the stream.\n\tparser.simple_key_allowed = true\n\n\t// We have started.\n\tparser.stream_start_produced = true\n\n\t// Create the STREAM-START token and append it 
to the queue.\n\ttoken := yaml_token_t{\n\t\ttyp:        yaml_STREAM_START_TOKEN,\n\t\tstart_mark: parser.mark,\n\t\tend_mark:   parser.mark,\n\t\tencoding:   parser.encoding,\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the STREAM-END token and shut down the scanner.\nfunc yaml_parser_fetch_stream_end(parser *yaml_parser_t) bool {\n\n\t// Force new line.\n\tif parser.mark.column != 0 {\n\t\tparser.mark.column = 0\n\t\tparser.mark.line++\n\t}\n\n\t// Reset the indentation level.\n\tif !yaml_parser_unroll_indent(parser, -1, parser.mark) {\n\t\treturn false\n\t}\n\n\t// Reset simple keys.\n\tif !yaml_parser_remove_simple_key(parser) {\n\t\treturn false\n\t}\n\n\tparser.simple_key_allowed = false\n\n\t// Create the STREAM-END token and append it to the queue.\n\ttoken := yaml_token_t{\n\t\ttyp:        yaml_STREAM_END_TOKEN,\n\t\tstart_mark: parser.mark,\n\t\tend_mark:   parser.mark,\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce a VERSION-DIRECTIVE or TAG-DIRECTIVE token.\nfunc yaml_parser_fetch_directive(parser *yaml_parser_t) bool {\n\t// Reset the indentation level.\n\tif !yaml_parser_unroll_indent(parser, -1, parser.mark) {\n\t\treturn false\n\t}\n\n\t// Reset simple keys.\n\tif !yaml_parser_remove_simple_key(parser) {\n\t\treturn false\n\t}\n\n\tparser.simple_key_allowed = false\n\n\t// Create the YAML-DIRECTIVE or TAG-DIRECTIVE token.\n\ttoken := yaml_token_t{}\n\tif !yaml_parser_scan_directive(parser, &token) {\n\t\treturn false\n\t}\n\t// Append the token to the queue.\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the DOCUMENT-START or DOCUMENT-END token.\nfunc yaml_parser_fetch_document_indicator(parser *yaml_parser_t, typ yaml_token_type_t) bool {\n\t// Reset the indentation level.\n\tif !yaml_parser_unroll_indent(parser, -1, parser.mark) {\n\t\treturn false\n\t}\n\n\t// Reset simple keys.\n\tif !yaml_parser_remove_simple_key(parser) {\n\t\treturn 
false\n\t}\n\n\tparser.simple_key_allowed = false\n\n\t// Consume the token.\n\tstart_mark := parser.mark\n\n\tskip(parser)\n\tskip(parser)\n\tskip(parser)\n\n\tend_mark := parser.mark\n\n\t// Create the DOCUMENT-START or DOCUMENT-END token.\n\ttoken := yaml_token_t{\n\t\ttyp:        typ,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t}\n\t// Append the token to the queue.\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the FLOW-SEQUENCE-START or FLOW-MAPPING-START token.\nfunc yaml_parser_fetch_flow_collection_start(parser *yaml_parser_t, typ yaml_token_type_t) bool {\n\n\t// The indicators '[' and '{' may start a simple key.\n\tif !yaml_parser_save_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// Increase the flow level.\n\tif !yaml_parser_increase_flow_level(parser) {\n\t\treturn false\n\t}\n\n\t// A simple key may follow the indicators '[' and '{'.\n\tparser.simple_key_allowed = true\n\n\t// Consume the token.\n\tstart_mark := parser.mark\n\tskip(parser)\n\tend_mark := parser.mark\n\n\t// Create the FLOW-SEQUENCE-START or FLOW-MAPPING-START token.\n\ttoken := yaml_token_t{\n\t\ttyp:        typ,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t}\n\t// Append the token to the queue.\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the FLOW-SEQUENCE-END or FLOW-MAPPING-END token.\nfunc yaml_parser_fetch_flow_collection_end(parser *yaml_parser_t, typ yaml_token_type_t) bool {\n\t// Reset any potential simple key on the current flow level.\n\tif !yaml_parser_remove_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// Decrease the flow level.\n\tif !yaml_parser_decrease_flow_level(parser) {\n\t\treturn false\n\t}\n\n\t// No simple keys after the indicators ']' and '}'.\n\tparser.simple_key_allowed = false\n\n\t// Consume the token.\n\n\tstart_mark := parser.mark\n\tskip(parser)\n\tend_mark := parser.mark\n\n\t// Create the FLOW-SEQUENCE-END or FLOW-MAPPING-END token.\n\ttoken := 
yaml_token_t{\n\t\ttyp:        typ,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t}\n\t// Append the token to the queue.\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the FLOW-ENTRY token.\nfunc yaml_parser_fetch_flow_entry(parser *yaml_parser_t) bool {\n\t// Reset any potential simple keys on the current flow level.\n\tif !yaml_parser_remove_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// Simple keys are allowed after ','.\n\tparser.simple_key_allowed = true\n\n\t// Consume the token.\n\tstart_mark := parser.mark\n\tskip(parser)\n\tend_mark := parser.mark\n\n\t// Create the FLOW-ENTRY token and append it to the queue.\n\ttoken := yaml_token_t{\n\t\ttyp:        yaml_FLOW_ENTRY_TOKEN,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the BLOCK-ENTRY token.\nfunc yaml_parser_fetch_block_entry(parser *yaml_parser_t) bool {\n\t// Check if the scanner is in the block context.\n\tif parser.flow_level == 0 {\n\t\t// Check if we are allowed to start a new entry.\n\t\tif !parser.simple_key_allowed {\n\t\t\treturn yaml_parser_set_scanner_error(parser, \"\", parser.mark,\n\t\t\t\t\"block sequence entries are not allowed in this context\")\n\t\t}\n\t\t// Add the BLOCK-SEQUENCE-START token if needed.\n\t\tif !yaml_parser_roll_indent(parser, parser.mark.column, -1, yaml_BLOCK_SEQUENCE_START_TOKEN, parser.mark) {\n\t\t\treturn false\n\t\t}\n\t} else {\n\t\t// It is an error for the '-' indicator to occur in the flow context,\n\t\t// but we let the Parser detect and report about it because the Parser\n\t\t// is able to point to the context.\n\t}\n\n\t// Reset any potential simple keys on the current flow level.\n\tif !yaml_parser_remove_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// Simple keys are allowed after '-'.\n\tparser.simple_key_allowed = true\n\n\t// Consume the token.\n\tstart_mark := parser.mark\n\tskip(parser)\n\tend_mark := 
parser.mark\n\n\t// Create the BLOCK-ENTRY token and append it to the queue.\n\ttoken := yaml_token_t{\n\t\ttyp:        yaml_BLOCK_ENTRY_TOKEN,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the KEY token.\nfunc yaml_parser_fetch_key(parser *yaml_parser_t) bool {\n\n\t// In the block context, additional checks are required.\n\tif parser.flow_level == 0 {\n\t\t// Check if we are allowed to start a new key (not necessarily simple).\n\t\tif !parser.simple_key_allowed {\n\t\t\treturn yaml_parser_set_scanner_error(parser, "", parser.mark,\n\t\t\t\t"mapping keys are not allowed in this context")\n\t\t}\n\t\t// Add the BLOCK-MAPPING-START token if needed.\n\t\tif !yaml_parser_roll_indent(parser, parser.mark.column, -1, yaml_BLOCK_MAPPING_START_TOKEN, parser.mark) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Reset any potential simple keys on the current flow level.\n\tif !yaml_parser_remove_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// Simple keys are allowed after '?' 
in the block context.\n\tparser.simple_key_allowed = parser.flow_level == 0\n\n\t// Consume the token.\n\tstart_mark := parser.mark\n\tskip(parser)\n\tend_mark := parser.mark\n\n\t// Create the KEY token and append it to the queue.\n\ttoken := yaml_token_t{\n\t\ttyp:        yaml_KEY_TOKEN,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the VALUE token.\nfunc yaml_parser_fetch_value(parser *yaml_parser_t) bool {\n\n\tsimple_key := &parser.simple_keys[len(parser.simple_keys)-1]\n\n\t// Have we found a simple key?\n\tif valid, ok := yaml_simple_key_is_valid(parser, simple_key); !ok {\n\t\treturn false\n\n\t} else if valid {\n\n\t\t// Create the KEY token and insert it into the queue.\n\t\ttoken := yaml_token_t{\n\t\t\ttyp:        yaml_KEY_TOKEN,\n\t\t\tstart_mark: simple_key.mark,\n\t\t\tend_mark:   simple_key.mark,\n\t\t}\n\t\tyaml_insert_token(parser, simple_key.token_number-parser.tokens_parsed, &token)\n\n\t\t// In the block context, we may need to add the BLOCK-MAPPING-START token.\n\t\tif !yaml_parser_roll_indent(parser, simple_key.mark.column,\n\t\t\tsimple_key.token_number,\n\t\t\tyaml_BLOCK_MAPPING_START_TOKEN, simple_key.mark) {\n\t\t\treturn false\n\t\t}\n\n\t\t// Remove the simple key.\n\t\tsimple_key.possible = false\n\t\tdelete(parser.simple_keys_by_tok, simple_key.token_number)\n\n\t\t// A simple key cannot follow another simple key.\n\t\tparser.simple_key_allowed = false\n\n\t} else {\n\t\t// The ':' indicator follows a complex key.\n\n\t\t// In the block context, extra checks are required.\n\t\tif parser.flow_level == 0 {\n\n\t\t\t// Check if we are allowed to start a complex value.\n\t\t\tif !parser.simple_key_allowed {\n\t\t\t\treturn yaml_parser_set_scanner_error(parser, \"\", parser.mark,\n\t\t\t\t\t\"mapping values are not allowed in this context\")\n\t\t\t}\n\n\t\t\t// Add the BLOCK-MAPPING-START token if needed.\n\t\t\tif !yaml_parser_roll_indent(parser, 
parser.mark.column, -1, yaml_BLOCK_MAPPING_START_TOKEN, parser.mark) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\t// Simple keys after ':' are allowed in the block context.\n\t\tparser.simple_key_allowed = parser.flow_level == 0\n\t}\n\n\t// Consume the token.\n\tstart_mark := parser.mark\n\tskip(parser)\n\tend_mark := parser.mark\n\n\t// Create the VALUE token and append it to the queue.\n\ttoken := yaml_token_t{\n\t\ttyp:        yaml_VALUE_TOKEN,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the ALIAS or ANCHOR token.\nfunc yaml_parser_fetch_anchor(parser *yaml_parser_t, typ yaml_token_type_t) bool {\n\t// An anchor or an alias could be a simple key.\n\tif !yaml_parser_save_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// A simple key cannot follow an anchor or an alias.\n\tparser.simple_key_allowed = false\n\n\t// Create the ALIAS or ANCHOR token and append it to the queue.\n\tvar token yaml_token_t\n\tif !yaml_parser_scan_anchor(parser, &token, typ) {\n\t\treturn false\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the TAG token.\nfunc yaml_parser_fetch_tag(parser *yaml_parser_t) bool {\n\t// A tag could be a simple key.\n\tif !yaml_parser_save_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// A simple key cannot follow a tag.\n\tparser.simple_key_allowed = false\n\n\t// Create the TAG token and append it to the queue.\n\tvar token yaml_token_t\n\tif !yaml_parser_scan_tag(parser, &token) {\n\t\treturn false\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the SCALAR(...,literal) or SCALAR(...,folded) tokens.\nfunc yaml_parser_fetch_block_scalar(parser *yaml_parser_t, literal bool) bool {\n\t// Remove any potential simple keys.\n\tif !yaml_parser_remove_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// A simple key may follow a block scalar.\n\tparser.simple_key_allowed = true\n\n\t// Create the SCALAR token 
and append it to the queue.\n\tvar token yaml_token_t\n\tif !yaml_parser_scan_block_scalar(parser, &token, literal) {\n\t\treturn false\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the SCALAR(...,single-quoted) or SCALAR(...,double-quoted) tokens.\nfunc yaml_parser_fetch_flow_scalar(parser *yaml_parser_t, single bool) bool {\n\t// A flow scalar could be a simple key.\n\tif !yaml_parser_save_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// A simple key cannot follow a flow scalar.\n\tparser.simple_key_allowed = false\n\n\t// Create the SCALAR token and append it to the queue.\n\tvar token yaml_token_t\n\tif !yaml_parser_scan_flow_scalar(parser, &token, single) {\n\t\treturn false\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Produce the SCALAR(...,plain) token.\nfunc yaml_parser_fetch_plain_scalar(parser *yaml_parser_t) bool {\n\t// A plain scalar could be a simple key.\n\tif !yaml_parser_save_simple_key(parser) {\n\t\treturn false\n\t}\n\n\t// A simple key cannot follow a plain scalar.\n\tparser.simple_key_allowed = false\n\n\t// Create the SCALAR token and append it to the queue.\n\tvar token yaml_token_t\n\tif !yaml_parser_scan_plain_scalar(parser, &token) {\n\t\treturn false\n\t}\n\tyaml_insert_token(parser, -1, &token)\n\treturn true\n}\n\n// Eat whitespaces and comments until the next token is found.\nfunc yaml_parser_scan_to_next_token(parser *yaml_parser_t) bool {\n\n\tscan_mark := parser.mark\n\n\t// Loop until the next token is found.\n\tfor {\n\t\t// Allow the BOM mark to start a line.\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t\tif parser.mark.column == 0 && is_bom(parser.buffer, parser.buffer_pos) {\n\t\t\tskip(parser)\n\t\t}\n\n\t\t// Eat whitespaces.\n\t\t// Tabs are allowed:\n\t\t//  - in the flow context\n\t\t//  - in the block context, but not at the beginning of the line or\n\t\t//  after '-', '?', or ':' (complex value).\n\t\tif 
parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\n\t\tfor parser.buffer[parser.buffer_pos] == ' ' || ((parser.flow_level > 0 || !parser.simple_key_allowed) && parser.buffer[parser.buffer_pos] == '\\t') {\n\t\t\tskip(parser)\n\t\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\t// Check if we just had a line comment under a sequence entry that\n\t\t// looks more like a header to the following content. Similar to this:\n\t\t//\n\t\t// - # The comment\n\t\t//   - Some data\n\t\t//\n\t\t// If so, transform the line comment to a head comment and reposition.\n\t\tif len(parser.comments) > 0 && len(parser.tokens) > 1 {\n\t\t\ttokenA := parser.tokens[len(parser.tokens)-2]\n\t\t\ttokenB := parser.tokens[len(parser.tokens)-1]\n\t\t\tcomment := &parser.comments[len(parser.comments)-1]\n\t\t\tif tokenA.typ == yaml_BLOCK_SEQUENCE_START_TOKEN && tokenB.typ == yaml_BLOCK_ENTRY_TOKEN && len(comment.line) > 0 && !is_break(parser.buffer, parser.buffer_pos) {\n\t\t\t\t// If it was in the prior line, reposition so it becomes a\n\t\t\t\t// header of the follow up token. 
Otherwise, keep it in place\n\t\t\t\t// so it becomes a header of the former.\n\t\t\t\tcomment.head = comment.line\n\t\t\t\tcomment.line = nil\n\t\t\t\tif comment.start_mark.line == parser.mark.line-1 {\n\t\t\t\t\tcomment.token_mark = parser.mark\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// Eat a comment until a line break.\n\t\tif parser.buffer[parser.buffer_pos] == '#' {\n\t\t\tif !yaml_parser_scan_comments(parser, scan_mark) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\t// If it is a line break, eat it.\n\t\tif is_break(parser.buffer, parser.buffer_pos) {\n\t\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tskip_line(parser)\n\n\t\t\t// In the block context, a new line may start a simple key.\n\t\t\tif parser.flow_level == 0 {\n\t\t\t\tparser.simple_key_allowed = true\n\t\t\t}\n\t\t} else {\n\t\t\tbreak // We have found a token.\n\t\t}\n\t}\n\n\treturn true\n}\n\n// Scan a YAML-DIRECTIVE or TAG-DIRECTIVE token.\n//\n// Scope:\n//      %YAML    1.1    # a comment \\n\n//      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n//      %TAG    !yaml!  
tag:yaml.org,2002:  \\n\n//      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n//\nfunc yaml_parser_scan_directive(parser *yaml_parser_t, token *yaml_token_t) bool {\n\t// Eat '%'.\n\tstart_mark := parser.mark\n\tskip(parser)\n\n\t// Scan the directive name.\n\tvar name []byte\n\tif !yaml_parser_scan_directive_name(parser, start_mark, &name) {\n\t\treturn false\n\t}\n\n\t// Is it a YAML directive?\n\tif bytes.Equal(name, []byte(\"YAML\")) {\n\t\t// Scan the VERSION directive value.\n\t\tvar major, minor int8\n\t\tif !yaml_parser_scan_version_directive_value(parser, start_mark, &major, &minor) {\n\t\t\treturn false\n\t\t}\n\t\tend_mark := parser.mark\n\n\t\t// Create a VERSION-DIRECTIVE token.\n\t\t*token = yaml_token_t{\n\t\t\ttyp:        yaml_VERSION_DIRECTIVE_TOKEN,\n\t\t\tstart_mark: start_mark,\n\t\t\tend_mark:   end_mark,\n\t\t\tmajor:      major,\n\t\t\tminor:      minor,\n\t\t}\n\n\t\t// Is it a TAG directive?\n\t} else if bytes.Equal(name, []byte(\"TAG\")) {\n\t\t// Scan the TAG directive value.\n\t\tvar handle, prefix []byte\n\t\tif !yaml_parser_scan_tag_directive_value(parser, start_mark, &handle, &prefix) {\n\t\t\treturn false\n\t\t}\n\t\tend_mark := parser.mark\n\n\t\t// Create a TAG-DIRECTIVE token.\n\t\t*token = yaml_token_t{\n\t\t\ttyp:        yaml_TAG_DIRECTIVE_TOKEN,\n\t\t\tstart_mark: start_mark,\n\t\t\tend_mark:   end_mark,\n\t\t\tvalue:      handle,\n\t\t\tprefix:     prefix,\n\t\t}\n\n\t\t// Unknown directive.\n\t} else {\n\t\tyaml_parser_set_scanner_error(parser, \"while scanning a directive\",\n\t\t\tstart_mark, \"found unknown directive name\")\n\t\treturn false\n\t}\n\n\t// Eat the rest of the line including any comments.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\n\tfor is_blank(parser.buffer, parser.buffer_pos) {\n\t\tskip(parser)\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tif parser.buffer[parser.buffer_pos] == '#' {\n\t\t// [Go] 
Discard this inline comment for the time being.\n\t\t//if !yaml_parser_scan_line_comment(parser, start_mark) {\n\t\t//\treturn false\n\t\t//}\n\t\tfor !is_breakz(parser.buffer, parser.buffer_pos) {\n\t\t\tskip(parser)\n\t\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t}\n\n\t// Check if we are at the end of the line.\n\tif !is_breakz(parser.buffer, parser.buffer_pos) {\n\t\tyaml_parser_set_scanner_error(parser, "while scanning a directive",\n\t\t\tstart_mark, "did not find expected comment or line break")\n\t\treturn false\n\t}\n\n\t// Eat a line break.\n\tif is_break(parser.buffer, parser.buffer_pos) {\n\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\treturn false\n\t\t}\n\t\tskip_line(parser)\n\t}\n\n\treturn true\n}\n\n// Scan the directive name.\n//\n// Scope:\n//      %YAML   1.1     # a comment \\n\n//       ^^^^\n//      %TAG    !yaml!  tag:yaml.org,2002:  \\n\n//       ^^^\n//\nfunc yaml_parser_scan_directive_name(parser *yaml_parser_t, start_mark yaml_mark_t, name *[]byte) bool {\n\t// Consume the directive name.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\n\tvar s []byte\n\tfor is_alpha(parser.buffer, parser.buffer_pos) {\n\t\ts = read(parser, s)\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Check if the name is empty.\n\tif len(s) == 0 {\n\t\tyaml_parser_set_scanner_error(parser, "while scanning a directive",\n\t\t\tstart_mark, "could not find expected directive name")\n\t\treturn false\n\t}\n\n\t// Check for a blank character after the name.\n\tif !is_blankz(parser.buffer, parser.buffer_pos) {\n\t\tyaml_parser_set_scanner_error(parser, "while scanning a directive",\n\t\t\tstart_mark, "found unexpected non-alphabetical character")\n\t\treturn false\n\t}\n\t*name = s\n\treturn true\n}\n\n// Scan the value of VERSION-DIRECTIVE.\n//\n// Scope:\n//    
  %YAML   1.1     # a comment \\n\n//           ^^^^^^\nfunc yaml_parser_scan_version_directive_value(parser *yaml_parser_t, start_mark yaml_mark_t, major, minor *int8) bool {\n\t// Eat whitespaces.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\tfor is_blank(parser.buffer, parser.buffer_pos) {\n\t\tskip(parser)\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Consume the major version number.\n\tif !yaml_parser_scan_version_directive_number(parser, start_mark, major) {\n\t\treturn false\n\t}\n\n\t// Eat '.'.\n\tif parser.buffer[parser.buffer_pos] != '.' {\n\t\treturn yaml_parser_set_scanner_error(parser, \"while scanning a %YAML directive\",\n\t\t\tstart_mark, \"did not find expected digit or '.' character\")\n\t}\n\n\tskip(parser)\n\n\t// Consume the minor version number.\n\tif !yaml_parser_scan_version_directive_number(parser, start_mark, minor) {\n\t\treturn false\n\t}\n\treturn true\n}\n\nconst max_number_length = 2\n\n// Scan the version number of VERSION-DIRECTIVE.\n//\n// Scope:\n//      %YAML   1.1     # a comment \\n\n//              ^\n//      %YAML   1.1     # a comment \\n\n//                ^\nfunc yaml_parser_scan_version_directive_number(parser *yaml_parser_t, start_mark yaml_mark_t, number *int8) bool {\n\n\t// Repeat while the next character is digit.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\tvar value, length int8\n\tfor is_digit(parser.buffer, parser.buffer_pos) {\n\t\t// Check if the number is too long.\n\t\tlength++\n\t\tif length > max_number_length {\n\t\t\treturn yaml_parser_set_scanner_error(parser, \"while scanning a %YAML directive\",\n\t\t\t\tstart_mark, \"found extremely long version number\")\n\t\t}\n\t\tvalue = value*10 + int8(as_digit(parser.buffer, parser.buffer_pos))\n\t\tskip(parser)\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn 
false\n\t\t}\n\t}\n\n\t// Check if the number was present.\n\tif length == 0 {\n\t\treturn yaml_parser_set_scanner_error(parser, \"while scanning a %YAML directive\",\n\t\t\tstart_mark, \"did not find expected version number\")\n\t}\n\t*number = value\n\treturn true\n}\n\n// Scan the value of a TAG-DIRECTIVE token.\n//\n// Scope:\n//      %TAG    !yaml!  tag:yaml.org,2002:  \\n\n//          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n//\nfunc yaml_parser_scan_tag_directive_value(parser *yaml_parser_t, start_mark yaml_mark_t, handle, prefix *[]byte) bool {\n\tvar handle_value, prefix_value []byte\n\n\t// Eat whitespaces.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\n\tfor is_blank(parser.buffer, parser.buffer_pos) {\n\t\tskip(parser)\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Scan a handle.\n\tif !yaml_parser_scan_tag_handle(parser, true, start_mark, &handle_value) {\n\t\treturn false\n\t}\n\n\t// Expect a whitespace.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\tif !is_blank(parser.buffer, parser.buffer_pos) {\n\t\tyaml_parser_set_scanner_error(parser, \"while scanning a %TAG directive\",\n\t\t\tstart_mark, \"did not find expected whitespace\")\n\t\treturn false\n\t}\n\n\t// Eat whitespaces.\n\tfor is_blank(parser.buffer, parser.buffer_pos) {\n\t\tskip(parser)\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Scan a prefix.\n\tif !yaml_parser_scan_tag_uri(parser, true, nil, start_mark, &prefix_value) {\n\t\treturn false\n\t}\n\n\t// Expect a whitespace or line break.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\tif !is_blankz(parser.buffer, parser.buffer_pos) {\n\t\tyaml_parser_set_scanner_error(parser, \"while scanning a %TAG directive\",\n\t\t\tstart_mark, \"did not find expected whitespace or line 
break\")\n\t\treturn false\n\t}\n\n\t*handle = handle_value\n\t*prefix = prefix_value\n\treturn true\n}\n\nfunc yaml_parser_scan_anchor(parser *yaml_parser_t, token *yaml_token_t, typ yaml_token_type_t) bool {\n\tvar s []byte\n\n\t// Eat the indicator character.\n\tstart_mark := parser.mark\n\tskip(parser)\n\n\t// Consume the value.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\n\tfor is_alpha(parser.buffer, parser.buffer_pos) {\n\t\ts = read(parser, s)\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tend_mark := parser.mark\n\n\t/*\n\t * Check if length of the anchor is greater than 0 and it is followed by\n\t * a whitespace character or one of the indicators:\n\t *\n\t *      '?', ':', ',', ']', '}', '%', '@', '`'.\n\t */\n\n\tif len(s) == 0 ||\n\t\t!(is_blankz(parser.buffer, parser.buffer_pos) || parser.buffer[parser.buffer_pos] == '?' ||\n\t\t\tparser.buffer[parser.buffer_pos] == ':' || parser.buffer[parser.buffer_pos] == ',' ||\n\t\t\tparser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '}' ||\n\t\t\tparser.buffer[parser.buffer_pos] == '%' || parser.buffer[parser.buffer_pos] == '@' ||\n\t\t\tparser.buffer[parser.buffer_pos] == '`') {\n\t\tcontext := \"while scanning an alias\"\n\t\tif typ == yaml_ANCHOR_TOKEN {\n\t\t\tcontext = \"while scanning an anchor\"\n\t\t}\n\t\tyaml_parser_set_scanner_error(parser, context, start_mark,\n\t\t\t\"did not find expected alphabetic or numeric character\")\n\t\treturn false\n\t}\n\n\t// Create a token.\n\t*token = yaml_token_t{\n\t\ttyp:        typ,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t\tvalue:      s,\n\t}\n\n\treturn true\n}\n\n/*\n * Scan a TAG token.\n */\n\nfunc yaml_parser_scan_tag(parser *yaml_parser_t, token *yaml_token_t) bool {\n\tvar handle, suffix []byte\n\n\tstart_mark := parser.mark\n\n\t// Check if the tag is in the canonical form.\n\tif parser.unread < 2 && 
!yaml_parser_update_buffer(parser, 2) {\n\t\treturn false\n\t}\n\n\tif parser.buffer[parser.buffer_pos+1] == '<' {\n\t\t// Keep the handle as ''\n\n\t\t// Eat '!<'\n\t\tskip(parser)\n\t\tskip(parser)\n\n\t\t// Consume the tag value.\n\t\tif !yaml_parser_scan_tag_uri(parser, false, nil, start_mark, &suffix) {\n\t\t\treturn false\n\t\t}\n\n\t\t// Check for '>' and eat it.\n\t\tif parser.buffer[parser.buffer_pos] != '>' {\n\t\t\tyaml_parser_set_scanner_error(parser, \"while scanning a tag\",\n\t\t\t\tstart_mark, \"did not find the expected '>'\")\n\t\t\treturn false\n\t\t}\n\n\t\tskip(parser)\n\t} else {\n\t\t// The tag has either the '!suffix' or the '!handle!suffix' form.\n\n\t\t// First, try to scan a handle.\n\t\tif !yaml_parser_scan_tag_handle(parser, false, start_mark, &handle) {\n\t\t\treturn false\n\t\t}\n\n\t\t// Check if it is, indeed, handle.\n\t\tif handle[0] == '!' && len(handle) > 1 && handle[len(handle)-1] == '!' {\n\t\t\t// Scan the suffix now.\n\t\t\tif !yaml_parser_scan_tag_uri(parser, false, nil, start_mark, &suffix) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t} else {\n\t\t\t// It wasn't a handle after all.  Scan the rest of the tag.\n\t\t\tif !yaml_parser_scan_tag_uri(parser, false, handle, start_mark, &suffix) {\n\t\t\t\treturn false\n\t\t\t}\n\n\t\t\t// Set the handle to '!'.\n\t\t\thandle = []byte{'!'}\n\n\t\t\t// A special case: the '!' tag.  
Set the handle to '' and the\n\t\t\t// suffix to '!'.\n\t\t\tif len(suffix) == 0 {\n\t\t\t\thandle, suffix = suffix, handle\n\t\t\t}\n\t\t}\n\t}\n\n\t// Check the character which ends the tag.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\tif !is_blankz(parser.buffer, parser.buffer_pos) {\n\t\tyaml_parser_set_scanner_error(parser, \"while scanning a tag\",\n\t\t\tstart_mark, \"did not find expected whitespace or line break\")\n\t\treturn false\n\t}\n\n\tend_mark := parser.mark\n\n\t// Create a token.\n\t*token = yaml_token_t{\n\t\ttyp:        yaml_TAG_TOKEN,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t\tvalue:      handle,\n\t\tsuffix:     suffix,\n\t}\n\treturn true\n}\n\n// Scan a tag handle.\nfunc yaml_parser_scan_tag_handle(parser *yaml_parser_t, directive bool, start_mark yaml_mark_t, handle *[]byte) bool {\n\t// Check the initial '!' character.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\tif parser.buffer[parser.buffer_pos] != '!' {\n\t\tyaml_parser_set_scanner_tag_error(parser, directive,\n\t\t\tstart_mark, \"did not find expected '!'\")\n\t\treturn false\n\t}\n\n\tvar s []byte\n\n\t// Copy the '!' character.\n\ts = read(parser, s)\n\n\t// Copy all subsequent alphabetical and numerical characters.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\tfor is_alpha(parser.buffer, parser.buffer_pos) {\n\t\ts = read(parser, s)\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Check if the trailing character is '!' and copy it.\n\tif parser.buffer[parser.buffer_pos] == '!' {\n\t\ts = read(parser, s)\n\t} else {\n\t\t// It's either the '!' tag or not really a tag handle.  If it's a %TAG\n\t\t// directive, it's an error.  
If it's a tag token, it must be a part of URI.\n\t\tif directive && string(s) != \"!\" {\n\t\t\tyaml_parser_set_scanner_tag_error(parser, directive,\n\t\t\t\tstart_mark, \"did not find expected '!'\")\n\t\t\treturn false\n\t\t}\n\t}\n\n\t*handle = s\n\treturn true\n}\n\n// Scan a tag.\nfunc yaml_parser_scan_tag_uri(parser *yaml_parser_t, directive bool, head []byte, start_mark yaml_mark_t, uri *[]byte) bool {\n\t//size_t length = head ? strlen((char *)head) : 0\n\tvar s []byte\n\thasTag := len(head) > 0\n\n\t// Copy the head if needed.\n\t//\n\t// Note that we don't copy the leading '!' character.\n\tif len(head) > 1 {\n\t\ts = append(s, head[1:]...)\n\t}\n\n\t// Scan the tag.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\n\t// The set of characters that may appear in URI is as follows:\n\t//\n\t//      '0'-'9', 'A'-'Z', 'a'-'z', '_', '-', ';', '/', '?', ':', '@', '&',\n\t//      '=', '+', '$', ',', '.', '!', '~', '*', '\\'', '(', ')', '[', ']',\n\t//      '%'.\n\t// [Go] TODO Convert this into more reasonable logic.\n\tfor is_alpha(parser.buffer, parser.buffer_pos) || parser.buffer[parser.buffer_pos] == ';' ||\n\t\tparser.buffer[parser.buffer_pos] == '/' || parser.buffer[parser.buffer_pos] == '?' ||\n\t\tparser.buffer[parser.buffer_pos] == ':' || parser.buffer[parser.buffer_pos] == '@' ||\n\t\tparser.buffer[parser.buffer_pos] == '&' || parser.buffer[parser.buffer_pos] == '=' ||\n\t\tparser.buffer[parser.buffer_pos] == '+' || parser.buffer[parser.buffer_pos] == '$' ||\n\t\tparser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == '.' ||\n\t\tparser.buffer[parser.buffer_pos] == '!' 
|| parser.buffer[parser.buffer_pos] == '~' ||\n\t\tparser.buffer[parser.buffer_pos] == '*' || parser.buffer[parser.buffer_pos] == '\\'' ||\n\t\tparser.buffer[parser.buffer_pos] == '(' || parser.buffer[parser.buffer_pos] == ')' ||\n\t\tparser.buffer[parser.buffer_pos] == '[' || parser.buffer[parser.buffer_pos] == ']' ||\n\t\tparser.buffer[parser.buffer_pos] == '%' {\n\t\t// Check if it is a URI-escape sequence.\n\t\tif parser.buffer[parser.buffer_pos] == '%' {\n\t\t\tif !yaml_parser_scan_uri_escapes(parser, directive, start_mark, &s) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t} else {\n\t\t\ts = read(parser, s)\n\t\t}\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t\thasTag = true\n\t}\n\n\tif !hasTag {\n\t\tyaml_parser_set_scanner_tag_error(parser, directive,\n\t\t\tstart_mark, \"did not find expected tag URI\")\n\t\treturn false\n\t}\n\t*uri = s\n\treturn true\n}\n\n// Decode an URI-escape sequence corresponding to a single UTF-8 character.\nfunc yaml_parser_scan_uri_escapes(parser *yaml_parser_t, directive bool, start_mark yaml_mark_t, s *[]byte) bool {\n\n\t// Decode the required number of characters.\n\tw := 1024\n\tfor w > 0 {\n\t\t// Check for a URI-escaped octet.\n\t\tif parser.unread < 3 && !yaml_parser_update_buffer(parser, 3) {\n\t\t\treturn false\n\t\t}\n\n\t\tif !(parser.buffer[parser.buffer_pos] == '%' &&\n\t\t\tis_hex(parser.buffer, parser.buffer_pos+1) &&\n\t\t\tis_hex(parser.buffer, parser.buffer_pos+2)) {\n\t\t\treturn yaml_parser_set_scanner_tag_error(parser, directive,\n\t\t\t\tstart_mark, \"did not find URI escaped octet\")\n\t\t}\n\n\t\t// Get the octet.\n\t\toctet := byte((as_hex(parser.buffer, parser.buffer_pos+1) << 4) + as_hex(parser.buffer, parser.buffer_pos+2))\n\n\t\t// If it is the leading octet, determine the length of the UTF-8 sequence.\n\t\tif w == 1024 {\n\t\t\tw = width(octet)\n\t\t\tif w == 0 {\n\t\t\t\treturn yaml_parser_set_scanner_tag_error(parser, directive,\n\t\t\t\t\tstart_mark, 
\"found an incorrect leading UTF-8 octet\")\n\t\t\t}\n\t\t} else {\n\t\t\t// Check if the trailing octet is correct.\n\t\t\tif octet&0xC0 != 0x80 {\n\t\t\t\treturn yaml_parser_set_scanner_tag_error(parser, directive,\n\t\t\t\t\tstart_mark, \"found an incorrect trailing UTF-8 octet\")\n\t\t\t}\n\t\t}\n\n\t\t// Copy the octet and move the pointers.\n\t\t*s = append(*s, octet)\n\t\tskip(parser)\n\t\tskip(parser)\n\t\tskip(parser)\n\t\tw--\n\t}\n\treturn true\n}\n\n// Scan a block scalar.\nfunc yaml_parser_scan_block_scalar(parser *yaml_parser_t, token *yaml_token_t, literal bool) bool {\n\t// Eat the indicator '|' or '>'.\n\tstart_mark := parser.mark\n\tskip(parser)\n\n\t// Scan the additional block scalar indicators.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\n\t// Check for a chomping indicator.\n\tvar chomping, increment int\n\tif parser.buffer[parser.buffer_pos] == '+' || parser.buffer[parser.buffer_pos] == '-' {\n\t\t// Set the chomping method and eat the indicator.\n\t\tif parser.buffer[parser.buffer_pos] == '+' {\n\t\t\tchomping = +1\n\t\t} else {\n\t\t\tchomping = -1\n\t\t}\n\t\tskip(parser)\n\n\t\t// Check for an indentation indicator.\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t\tif is_digit(parser.buffer, parser.buffer_pos) {\n\t\t\t// Check that the indentation is greater than 0.\n\t\t\tif parser.buffer[parser.buffer_pos] == '0' {\n\t\t\t\tyaml_parser_set_scanner_error(parser, \"while scanning a block scalar\",\n\t\t\t\t\tstart_mark, \"found an indentation indicator equal to 0\")\n\t\t\t\treturn false\n\t\t\t}\n\n\t\t\t// Get the indentation level and eat the indicator.\n\t\t\tincrement = as_digit(parser.buffer, parser.buffer_pos)\n\t\t\tskip(parser)\n\t\t}\n\n\t} else if is_digit(parser.buffer, parser.buffer_pos) {\n\t\t// Do the same as above, but in the opposite order.\n\n\t\tif parser.buffer[parser.buffer_pos] == '0' 
{\n\t\t\tyaml_parser_set_scanner_error(parser, \"while scanning a block scalar\",\n\t\t\t\tstart_mark, \"found an indentation indicator equal to 0\")\n\t\t\treturn false\n\t\t}\n\t\tincrement = as_digit(parser.buffer, parser.buffer_pos)\n\t\tskip(parser)\n\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t\tif parser.buffer[parser.buffer_pos] == '+' || parser.buffer[parser.buffer_pos] == '-' {\n\t\t\tif parser.buffer[parser.buffer_pos] == '+' {\n\t\t\t\tchomping = +1\n\t\t\t} else {\n\t\t\t\tchomping = -1\n\t\t\t}\n\t\t\tskip(parser)\n\t\t}\n\t}\n\n\t// Eat whitespaces and comments to the end of the line.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\tfor is_blank(parser.buffer, parser.buffer_pos) {\n\t\tskip(parser)\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif parser.buffer[parser.buffer_pos] == '#' {\n\t\tif !yaml_parser_scan_line_comment(parser, start_mark) {\n\t\t\treturn false\n\t\t}\n\t\tfor !is_breakz(parser.buffer, parser.buffer_pos) {\n\t\t\tskip(parser)\n\t\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t}\n\n\t// Check if we are at the end of the line.\n\tif !is_breakz(parser.buffer, parser.buffer_pos) {\n\t\tyaml_parser_set_scanner_error(parser, \"while scanning a block scalar\",\n\t\t\tstart_mark, \"did not find expected comment or line break\")\n\t\treturn false\n\t}\n\n\t// Eat a line break.\n\tif is_break(parser.buffer, parser.buffer_pos) {\n\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\treturn false\n\t\t}\n\t\tskip_line(parser)\n\t}\n\n\tend_mark := parser.mark\n\n\t// Set the indentation level if it was specified.\n\tvar indent int\n\tif increment > 0 {\n\t\tif parser.indent >= 0 {\n\t\t\tindent = parser.indent + increment\n\t\t} else {\n\t\t\tindent = increment\n\t\t}\n\t}\n\n\t// Scan the leading line 
breaks and determine the indentation level if needed.\n\tvar s, leading_break, trailing_breaks []byte\n\tif !yaml_parser_scan_block_scalar_breaks(parser, &indent, &trailing_breaks, start_mark, &end_mark) {\n\t\treturn false\n\t}\n\n\t// Scan the block scalar content.\n\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\treturn false\n\t}\n\tvar leading_blank, trailing_blank bool\n\tfor parser.mark.column == indent && !is_z(parser.buffer, parser.buffer_pos) {\n\t\t// We are at the beginning of a non-empty line.\n\n\t\t// Is it a trailing whitespace?\n\t\ttrailing_blank = is_blank(parser.buffer, parser.buffer_pos)\n\n\t\t// Check if we need to fold the leading line break.\n\t\tif !literal && !leading_blank && !trailing_blank && len(leading_break) > 0 && leading_break[0] == '\\n' {\n\t\t\t// Do we need to join the lines by space?\n\t\t\tif len(trailing_breaks) == 0 {\n\t\t\t\ts = append(s, ' ')\n\t\t\t}\n\t\t} else {\n\t\t\ts = append(s, leading_break...)\n\t\t}\n\t\tleading_break = leading_break[:0]\n\n\t\t// Append the remaining line breaks.\n\t\ts = append(s, trailing_breaks...)\n\t\ttrailing_breaks = trailing_breaks[:0]\n\n\t\t// Is it a leading whitespace?\n\t\tleading_blank = is_blank(parser.buffer, parser.buffer_pos)\n\n\t\t// Consume the current line.\n\t\tfor !is_breakz(parser.buffer, parser.buffer_pos) {\n\t\t\ts = read(parser, s)\n\t\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\t// Consume the line break.\n\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\treturn false\n\t\t}\n\n\t\tleading_break = read_line(parser, leading_break)\n\n\t\t// Eat the following indentation spaces and line breaks.\n\t\tif !yaml_parser_scan_block_scalar_breaks(parser, &indent, &trailing_breaks, start_mark, &end_mark) {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Chomp the tail.\n\tif chomping != -1 {\n\t\ts = append(s, leading_break...)\n\t}\n\tif chomping == 1 {\n\t\ts = 
append(s, trailing_breaks...)\n\t}\n\n\t// Create a token.\n\t*token = yaml_token_t{\n\t\ttyp:        yaml_SCALAR_TOKEN,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t\tvalue:      s,\n\t\tstyle:      yaml_LITERAL_SCALAR_STYLE,\n\t}\n\tif !literal {\n\t\ttoken.style = yaml_FOLDED_SCALAR_STYLE\n\t}\n\treturn true\n}\n\n// Scan indentation spaces and line breaks for a block scalar.  Determine the\n// indentation level if needed.\nfunc yaml_parser_scan_block_scalar_breaks(parser *yaml_parser_t, indent *int, breaks *[]byte, start_mark yaml_mark_t, end_mark *yaml_mark_t) bool {\n\t*end_mark = parser.mark\n\n\t// Eat the indentation spaces and line breaks.\n\tmax_indent := 0\n\tfor {\n\t\t// Eat the indentation spaces.\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\t\tfor (*indent == 0 || parser.mark.column < *indent) && is_space(parser.buffer, parser.buffer_pos) {\n\t\t\tskip(parser)\n\t\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\tif parser.mark.column > max_indent {\n\t\t\tmax_indent = parser.mark.column\n\t\t}\n\n\t\t// Check for a tab character messing the indentation.\n\t\tif (*indent == 0 || parser.mark.column < *indent) && is_tab(parser.buffer, parser.buffer_pos) {\n\t\t\treturn yaml_parser_set_scanner_error(parser, \"while scanning a block scalar\",\n\t\t\t\tstart_mark, \"found a tab character where an indentation space is expected\")\n\t\t}\n\n\t\t// Have we found a non-empty line?\n\t\tif !is_break(parser.buffer, parser.buffer_pos) {\n\t\t\tbreak\n\t\t}\n\n\t\t// Consume the line break.\n\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\treturn false\n\t\t}\n\t\t// [Go] Should really be returning breaks instead.\n\t\t*breaks = read_line(parser, *breaks)\n\t\t*end_mark = parser.mark\n\t}\n\n\t// Determine the indentation level if needed.\n\tif *indent == 0 {\n\t\t*indent = max_indent\n\t\tif *indent < 
parser.indent+1 {\n\t\t\t*indent = parser.indent + 1\n\t\t}\n\t\tif *indent < 1 {\n\t\t\t*indent = 1\n\t\t}\n\t}\n\treturn true\n}\n\n// Scan a quoted scalar.\nfunc yaml_parser_scan_flow_scalar(parser *yaml_parser_t, token *yaml_token_t, single bool) bool {\n\t// Eat the left quote.\n\tstart_mark := parser.mark\n\tskip(parser)\n\n\t// Consume the content of the quoted scalar.\n\tvar s, leading_break, trailing_breaks, whitespaces []byte\n\tfor {\n\t\t// Check that there are no document indicators at the beginning of the line.\n\t\tif parser.unread < 4 && !yaml_parser_update_buffer(parser, 4) {\n\t\t\treturn false\n\t\t}\n\n\t\tif parser.mark.column == 0 &&\n\t\t\t((parser.buffer[parser.buffer_pos+0] == '-' &&\n\t\t\t\tparser.buffer[parser.buffer_pos+1] == '-' &&\n\t\t\t\tparser.buffer[parser.buffer_pos+2] == '-') ||\n\t\t\t\t(parser.buffer[parser.buffer_pos+0] == '.' &&\n\t\t\t\t\tparser.buffer[parser.buffer_pos+1] == '.' &&\n\t\t\t\t\tparser.buffer[parser.buffer_pos+2] == '.')) &&\n\t\t\tis_blankz(parser.buffer, parser.buffer_pos+3) {\n\t\t\tyaml_parser_set_scanner_error(parser, \"while scanning a quoted scalar\",\n\t\t\t\tstart_mark, \"found unexpected document indicator\")\n\t\t\treturn false\n\t\t}\n\n\t\t// Check for EOF.\n\t\tif is_z(parser.buffer, parser.buffer_pos) {\n\t\t\tyaml_parser_set_scanner_error(parser, \"while scanning a quoted scalar\",\n\t\t\t\tstart_mark, \"found unexpected end of stream\")\n\t\t\treturn false\n\t\t}\n\n\t\t// Consume non-blank characters.\n\t\tleading_blanks := false\n\t\tfor !is_blankz(parser.buffer, parser.buffer_pos) {\n\t\t\tif single && parser.buffer[parser.buffer_pos] == '\\'' && parser.buffer[parser.buffer_pos+1] == '\\'' {\n\t\t\t\t// Is is an escaped single quote.\n\t\t\t\ts = append(s, '\\'')\n\t\t\t\tskip(parser)\n\t\t\t\tskip(parser)\n\n\t\t\t} else if single && parser.buffer[parser.buffer_pos] == '\\'' {\n\t\t\t\t// It is a right single quote.\n\t\t\t\tbreak\n\t\t\t} else if !single && 
parser.buffer[parser.buffer_pos] == '\"' {\n\t\t\t\t// It is a right double quote.\n\t\t\t\tbreak\n\n\t\t\t} else if !single && parser.buffer[parser.buffer_pos] == '\\\\' && is_break(parser.buffer, parser.buffer_pos+1) {\n\t\t\t\t// It is an escaped line break.\n\t\t\t\tif parser.unread < 3 && !yaml_parser_update_buffer(parser, 3) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\tskip(parser)\n\t\t\t\tskip_line(parser)\n\t\t\t\tleading_blanks = true\n\t\t\t\tbreak\n\n\t\t\t} else if !single && parser.buffer[parser.buffer_pos] == '\\\\' {\n\t\t\t\t// It is an escape sequence.\n\t\t\t\tcode_length := 0\n\n\t\t\t\t// Check the escape character.\n\t\t\t\tswitch parser.buffer[parser.buffer_pos+1] {\n\t\t\t\tcase '0':\n\t\t\t\t\ts = append(s, 0)\n\t\t\t\tcase 'a':\n\t\t\t\t\ts = append(s, '\\x07')\n\t\t\t\tcase 'b':\n\t\t\t\t\ts = append(s, '\\x08')\n\t\t\t\tcase 't', '\\t':\n\t\t\t\t\ts = append(s, '\\x09')\n\t\t\t\tcase 'n':\n\t\t\t\t\ts = append(s, '\\x0A')\n\t\t\t\tcase 'v':\n\t\t\t\t\ts = append(s, '\\x0B')\n\t\t\t\tcase 'f':\n\t\t\t\t\ts = append(s, '\\x0C')\n\t\t\t\tcase 'r':\n\t\t\t\t\ts = append(s, '\\x0D')\n\t\t\t\tcase 'e':\n\t\t\t\t\ts = append(s, '\\x1B')\n\t\t\t\tcase ' ':\n\t\t\t\t\ts = append(s, '\\x20')\n\t\t\t\tcase '\"':\n\t\t\t\t\ts = append(s, '\"')\n\t\t\t\tcase '\\'':\n\t\t\t\t\ts = append(s, '\\'')\n\t\t\t\tcase '\\\\':\n\t\t\t\t\ts = append(s, '\\\\')\n\t\t\t\tcase 'N': // NEL (#x85)\n\t\t\t\t\ts = append(s, '\\xC2')\n\t\t\t\t\ts = append(s, '\\x85')\n\t\t\t\tcase '_': // #xA0\n\t\t\t\t\ts = append(s, '\\xC2')\n\t\t\t\t\ts = append(s, '\\xA0')\n\t\t\t\tcase 'L': // LS (#x2028)\n\t\t\t\t\ts = append(s, '\\xE2')\n\t\t\t\t\ts = append(s, '\\x80')\n\t\t\t\t\ts = append(s, '\\xA8')\n\t\t\t\tcase 'P': // PS (#x2029)\n\t\t\t\t\ts = append(s, '\\xE2')\n\t\t\t\t\ts = append(s, '\\x80')\n\t\t\t\t\ts = append(s, '\\xA9')\n\t\t\t\tcase 'x':\n\t\t\t\t\tcode_length = 2\n\t\t\t\tcase 'u':\n\t\t\t\t\tcode_length = 4\n\t\t\t\tcase 'U':\n\t\t\t\t\tcode_length = 
8\n\t\t\t\tdefault:\n\t\t\t\t\tyaml_parser_set_scanner_error(parser, \"while parsing a quoted scalar\",\n\t\t\t\t\t\tstart_mark, \"found unknown escape character\")\n\t\t\t\t\treturn false\n\t\t\t\t}\n\n\t\t\t\tskip(parser)\n\t\t\t\tskip(parser)\n\n\t\t\t\t// Consume an arbitrary escape code.\n\t\t\t\tif code_length > 0 {\n\t\t\t\t\tvar value int\n\n\t\t\t\t\t// Scan the character value.\n\t\t\t\t\tif parser.unread < code_length && !yaml_parser_update_buffer(parser, code_length) {\n\t\t\t\t\t\treturn false\n\t\t\t\t\t}\n\t\t\t\t\tfor k := 0; k < code_length; k++ {\n\t\t\t\t\t\tif !is_hex(parser.buffer, parser.buffer_pos+k) {\n\t\t\t\t\t\t\tyaml_parser_set_scanner_error(parser, \"while parsing a quoted scalar\",\n\t\t\t\t\t\t\t\tstart_mark, \"did not find expected hexdecimal number\")\n\t\t\t\t\t\t\treturn false\n\t\t\t\t\t\t}\n\t\t\t\t\t\tvalue = (value << 4) + as_hex(parser.buffer, parser.buffer_pos+k)\n\t\t\t\t\t}\n\n\t\t\t\t\t// Check the value and write the character.\n\t\t\t\t\tif (value >= 0xD800 && value <= 0xDFFF) || value > 0x10FFFF {\n\t\t\t\t\t\tyaml_parser_set_scanner_error(parser, \"while parsing a quoted scalar\",\n\t\t\t\t\t\t\tstart_mark, \"found invalid Unicode character escape code\")\n\t\t\t\t\t\treturn false\n\t\t\t\t\t}\n\t\t\t\t\tif value <= 0x7F {\n\t\t\t\t\t\ts = append(s, byte(value))\n\t\t\t\t\t} else if value <= 0x7FF {\n\t\t\t\t\t\ts = append(s, byte(0xC0+(value>>6)))\n\t\t\t\t\t\ts = append(s, byte(0x80+(value&0x3F)))\n\t\t\t\t\t} else if value <= 0xFFFF {\n\t\t\t\t\t\ts = append(s, byte(0xE0+(value>>12)))\n\t\t\t\t\t\ts = append(s, byte(0x80+((value>>6)&0x3F)))\n\t\t\t\t\t\ts = append(s, byte(0x80+(value&0x3F)))\n\t\t\t\t\t} else {\n\t\t\t\t\t\ts = append(s, byte(0xF0+(value>>18)))\n\t\t\t\t\t\ts = append(s, byte(0x80+((value>>12)&0x3F)))\n\t\t\t\t\t\ts = append(s, byte(0x80+((value>>6)&0x3F)))\n\t\t\t\t\t\ts = append(s, byte(0x80+(value&0x3F)))\n\t\t\t\t\t}\n\n\t\t\t\t\t// Advance the pointer.\n\t\t\t\t\tfor k := 0; k < code_length; 
k++ {\n\t\t\t\t\t\tskip(parser)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t// It is a non-escaped non-blank character.\n\t\t\t\ts = read(parser, s)\n\t\t\t}\n\t\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\n\t\t// Check if we are at the end of the scalar.\n\t\tif single {\n\t\t\tif parser.buffer[parser.buffer_pos] == '\\'' {\n\t\t\t\tbreak\n\t\t\t}\n\t\t} else {\n\t\t\tif parser.buffer[parser.buffer_pos] == '\"' {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\t// Consume blank characters.\n\t\tfor is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) {\n\t\t\tif is_blank(parser.buffer, parser.buffer_pos) {\n\t\t\t\t// Consume a space or a tab character.\n\t\t\t\tif !leading_blanks {\n\t\t\t\t\twhitespaces = read(parser, whitespaces)\n\t\t\t\t} else {\n\t\t\t\t\tskip(parser)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\n\t\t\t\t// Check if it is a first line break.\n\t\t\t\tif !leading_blanks {\n\t\t\t\t\twhitespaces = whitespaces[:0]\n\t\t\t\t\tleading_break = read_line(parser, leading_break)\n\t\t\t\t\tleading_blanks = true\n\t\t\t\t} else {\n\t\t\t\t\ttrailing_breaks = read_line(parser, trailing_breaks)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\t// Join the whitespaces or fold line breaks.\n\t\tif leading_blanks {\n\t\t\t// Do we need to fold line breaks?\n\t\t\tif len(leading_break) > 0 && leading_break[0] == '\\n' {\n\t\t\t\tif len(trailing_breaks) == 0 {\n\t\t\t\t\ts = append(s, ' ')\n\t\t\t\t} else {\n\t\t\t\t\ts = append(s, trailing_breaks...)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\ts = append(s, leading_break...)\n\t\t\t\ts = append(s, trailing_breaks...)\n\t\t\t}\n\t\t\ttrailing_breaks = 
trailing_breaks[:0]\n\t\t\tleading_break = leading_break[:0]\n\t\t} else {\n\t\t\ts = append(s, whitespaces...)\n\t\t\twhitespaces = whitespaces[:0]\n\t\t}\n\t}\n\n\t// Eat the right quote.\n\tskip(parser)\n\tend_mark := parser.mark\n\n\t// Create a token.\n\t*token = yaml_token_t{\n\t\ttyp:        yaml_SCALAR_TOKEN,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t\tvalue:      s,\n\t\tstyle:      yaml_SINGLE_QUOTED_SCALAR_STYLE,\n\t}\n\tif !single {\n\t\ttoken.style = yaml_DOUBLE_QUOTED_SCALAR_STYLE\n\t}\n\treturn true\n}\n\n// Scan a plain scalar.\nfunc yaml_parser_scan_plain_scalar(parser *yaml_parser_t, token *yaml_token_t) bool {\n\n\tvar s, leading_break, trailing_breaks, whitespaces []byte\n\tvar leading_blanks bool\n\tvar indent = parser.indent + 1\n\n\tstart_mark := parser.mark\n\tend_mark := parser.mark\n\n\t// Consume the content of the plain scalar.\n\tfor {\n\t\t// Check for a document indicator.\n\t\tif parser.unread < 4 && !yaml_parser_update_buffer(parser, 4) {\n\t\t\treturn false\n\t\t}\n\t\tif parser.mark.column == 0 &&\n\t\t\t((parser.buffer[parser.buffer_pos+0] == '-' &&\n\t\t\t\tparser.buffer[parser.buffer_pos+1] == '-' &&\n\t\t\t\tparser.buffer[parser.buffer_pos+2] == '-') ||\n\t\t\t\t(parser.buffer[parser.buffer_pos+0] == '.' &&\n\t\t\t\t\tparser.buffer[parser.buffer_pos+1] == '.' &&\n\t\t\t\t\tparser.buffer[parser.buffer_pos+2] == '.')) &&\n\t\t\tis_blankz(parser.buffer, parser.buffer_pos+3) {\n\t\t\tbreak\n\t\t}\n\n\t\t// Check for a comment.\n\t\tif parser.buffer[parser.buffer_pos] == '#' {\n\t\t\tbreak\n\t\t}\n\n\t\t// Consume non-blank characters.\n\t\tfor !is_blankz(parser.buffer, parser.buffer_pos) {\n\n\t\t\t// Check for indicators that may end a plain scalar.\n\t\t\tif (parser.buffer[parser.buffer_pos] == ':' && is_blankz(parser.buffer, parser.buffer_pos+1)) ||\n\t\t\t\t(parser.flow_level > 0 &&\n\t\t\t\t\t(parser.buffer[parser.buffer_pos] == ',' ||\n\t\t\t\t\t\tparser.buffer[parser.buffer_pos] == '?' 
|| parser.buffer[parser.buffer_pos] == '[' ||\n\t\t\t\t\t\tparser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '{' ||\n\t\t\t\t\t\tparser.buffer[parser.buffer_pos] == '}')) {\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\t// Check if we need to join whitespaces and breaks.\n\t\t\tif leading_blanks || len(whitespaces) > 0 {\n\t\t\t\tif leading_blanks {\n\t\t\t\t\t// Do we need to fold line breaks?\n\t\t\t\t\tif leading_break[0] == '\\n' {\n\t\t\t\t\t\tif len(trailing_breaks) == 0 {\n\t\t\t\t\t\t\ts = append(s, ' ')\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\ts = append(s, trailing_breaks...)\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\ts = append(s, leading_break...)\n\t\t\t\t\t\ts = append(s, trailing_breaks...)\n\t\t\t\t\t}\n\t\t\t\t\ttrailing_breaks = trailing_breaks[:0]\n\t\t\t\t\tleading_break = leading_break[:0]\n\t\t\t\t\tleading_blanks = false\n\t\t\t\t} else {\n\t\t\t\t\ts = append(s, whitespaces...)\n\t\t\t\t\twhitespaces = whitespaces[:0]\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// Copy the character.\n\t\t\ts = read(parser, s)\n\n\t\t\tend_mark = parser.mark\n\t\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\t// Is it the end?\n\t\tif !(is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos)) {\n\t\t\tbreak\n\t\t}\n\n\t\t// Consume blank characters.\n\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\treturn false\n\t\t}\n\n\t\tfor is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) {\n\t\t\tif is_blank(parser.buffer, parser.buffer_pos) {\n\n\t\t\t\t// Check for tab characters that abuse indentation.\n\t\t\t\tif leading_blanks && parser.mark.column < indent && is_tab(parser.buffer, parser.buffer_pos) {\n\t\t\t\t\tyaml_parser_set_scanner_error(parser, \"while scanning a plain scalar\",\n\t\t\t\t\t\tstart_mark, \"found a tab character that violates indentation\")\n\t\t\t\t\treturn 
false\n\t\t\t\t}\n\n\t\t\t\t// Consume a space or a tab character.\n\t\t\t\tif !leading_blanks {\n\t\t\t\t\twhitespaces = read(parser, whitespaces)\n\t\t\t\t} else {\n\t\t\t\t\tskip(parser)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\n\t\t\t\t// Check if it is a first line break.\n\t\t\t\tif !leading_blanks {\n\t\t\t\t\twhitespaces = whitespaces[:0]\n\t\t\t\t\tleading_break = read_line(parser, leading_break)\n\t\t\t\t\tleading_blanks = true\n\t\t\t\t} else {\n\t\t\t\t\ttrailing_breaks = read_line(parser, trailing_breaks)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\n\t\t// Check indentation level.\n\t\tif parser.flow_level == 0 && parser.mark.column < indent {\n\t\t\tbreak\n\t\t}\n\t}\n\n\t// Create a token.\n\t*token = yaml_token_t{\n\t\ttyp:        yaml_SCALAR_TOKEN,\n\t\tstart_mark: start_mark,\n\t\tend_mark:   end_mark,\n\t\tvalue:      s,\n\t\tstyle:      yaml_PLAIN_SCALAR_STYLE,\n\t}\n\n\t// Note that we change the 'simple_key_allowed' flag.\n\tif leading_blanks {\n\t\tparser.simple_key_allowed = true\n\t}\n\treturn true\n}\n\nfunc yaml_parser_scan_line_comment(parser *yaml_parser_t, token_mark yaml_mark_t) bool {\n\tif parser.newlines > 0 {\n\t\treturn true\n\t}\n\n\tvar start_mark yaml_mark_t\n\tvar text []byte\n\n\tfor peek := 0; peek < 512; peek++ {\n\t\tif parser.unread < peek+1 && !yaml_parser_update_buffer(parser, peek+1) {\n\t\t\tbreak\n\t\t}\n\t\tif is_blank(parser.buffer, parser.buffer_pos+peek) {\n\t\t\tcontinue\n\t\t}\n\t\tif parser.buffer[parser.buffer_pos+peek] == '#' {\n\t\t\tseen := parser.mark.index+peek\n\t\t\tfor {\n\t\t\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\tif is_breakz(parser.buffer, parser.buffer_pos) {\n\t\t\t\t\tif parser.mark.index >= seen {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t\tif 
parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\t\t\t\treturn false\n\t\t\t\t\t}\n\t\t\t\t\tskip_line(parser)\n\t\t\t\t} else if parser.mark.index >= seen {\n\t\t\t\t\tif len(text) == 0 {\n\t\t\t\t\t\tstart_mark = parser.mark\n\t\t\t\t\t}\n\t\t\t\t\ttext = read(parser, text)\n\t\t\t\t} else {\n\t\t\t\t\tskip(parser)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tbreak\n\t}\n\tif len(text) > 0 {\n\t\tparser.comments = append(parser.comments, yaml_comment_t{\n\t\t\ttoken_mark: token_mark,\n\t\t\tstart_mark: start_mark,\n\t\t\tline: text,\n\t\t})\n\t}\n\treturn true\n}\n\nfunc yaml_parser_scan_comments(parser *yaml_parser_t, scan_mark yaml_mark_t) bool {\n\ttoken := parser.tokens[len(parser.tokens)-1]\n\n\tif token.typ == yaml_FLOW_ENTRY_TOKEN && len(parser.tokens) > 1 {\n\t\ttoken = parser.tokens[len(parser.tokens)-2]\n\t}\n\n\tvar token_mark = token.start_mark\n\tvar start_mark yaml_mark_t\n\tvar next_indent = parser.indent\n\tif next_indent < 0 {\n\t\tnext_indent = 0\n\t}\n\n\tvar recent_empty = false\n\tvar first_empty = parser.newlines <= 1\n\n\tvar line = parser.mark.line\n\tvar column = parser.mark.column\n\n\tvar text []byte\n\n\t// The foot line is the place where a comment must start to\n\t// still be considered as a foot of the prior content.\n\t// If there's some content in the currently parsed line, then\n\t// the foot is the line below it.\n\tvar foot_line = -1\n\tif scan_mark.line > 0 {\n\t\tfoot_line = parser.mark.line-parser.newlines+1\n\t\tif parser.newlines == 0 && parser.mark.column > 1 {\n\t\t\tfoot_line++\n\t\t}\n\t}\n\n\tvar peek = 0\n\tfor ; peek < 512; peek++ {\n\t\tif parser.unread < peek+1 && !yaml_parser_update_buffer(parser, peek+1) {\n\t\t\tbreak\n\t\t}\n\t\tcolumn++\n\t\tif is_blank(parser.buffer, parser.buffer_pos+peek) {\n\t\t\tcontinue\n\t\t}\n\t\tc := parser.buffer[parser.buffer_pos+peek]\n\t\tvar close_flow = parser.flow_level > 0 && (c == ']' || c == '}')\n\t\tif close_flow || is_breakz(parser.buffer, parser.buffer_pos+peek) 
{\n\t\t\t// Got line break or terminator.\n\t\t\tif close_flow || !recent_empty {\n\t\t\t\tif close_flow || first_empty && (start_mark.line == foot_line && token.typ != yaml_VALUE_TOKEN || start_mark.column-1 < next_indent) {\n\t\t\t\t\t// This is the first empty line and there were no empty lines before,\n\t\t\t\t\t// so this initial part of the comment is a foot of the prior token\n\t\t\t\t\t// instead of being a head for the following one. Split it up.\n\t\t\t\t\t// Alternatively, this might also be the last comment inside a flow\n\t\t\t\t\t// scope, so it must be a footer.\n\t\t\t\t\tif len(text) > 0 {\n\t\t\t\t\t\tif start_mark.column-1 < next_indent {\n\t\t\t\t\t\t\t// If dedented it's unrelated to the prior token.\n\t\t\t\t\t\t\ttoken_mark = start_mark\n\t\t\t\t\t\t}\n\t\t\t\t\t\tparser.comments = append(parser.comments, yaml_comment_t{\n\t\t\t\t\t\t\tscan_mark:  scan_mark,\n\t\t\t\t\t\t\ttoken_mark: token_mark,\n\t\t\t\t\t\t\tstart_mark: start_mark,\n\t\t\t\t\t\t\tend_mark:   yaml_mark_t{parser.mark.index + peek, line, column},\n\t\t\t\t\t\t\tfoot:       text,\n\t\t\t\t\t\t})\n\t\t\t\t\t\tscan_mark = yaml_mark_t{parser.mark.index + peek, line, column}\n\t\t\t\t\t\ttoken_mark = scan_mark\n\t\t\t\t\t\ttext = nil\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif len(text) > 0 && parser.buffer[parser.buffer_pos+peek] != 0 {\n\t\t\t\t\t\ttext = append(text, '\\n')\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !is_break(parser.buffer, parser.buffer_pos+peek) {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tfirst_empty = false\n\t\t\trecent_empty = true\n\t\t\tcolumn = 0\n\t\t\tline++\n\t\t\tcontinue\n\t\t}\n\n\t\tif len(text) > 0 && (close_flow || column-1 < next_indent && column != start_mark.column) {\n\t\t\t// The comment at the different indentation is a foot of the\n\t\t\t// preceding data rather than a head of the upcoming one.\n\t\t\tparser.comments = append(parser.comments, yaml_comment_t{\n\t\t\t\tscan_mark:  scan_mark,\n\t\t\t\ttoken_mark: token_mark,\n\t\t\t\tstart_mark: 
start_mark,\n\t\t\t\tend_mark:   yaml_mark_t{parser.mark.index + peek, line, column},\n\t\t\t\tfoot:       text,\n\t\t\t})\n\t\t\tscan_mark = yaml_mark_t{parser.mark.index + peek, line, column}\n\t\t\ttoken_mark = scan_mark\n\t\t\ttext = nil\n\t\t}\n\n\t\tif parser.buffer[parser.buffer_pos+peek] != '#' {\n\t\t\tbreak\n\t\t}\n\n\t\tif len(text) == 0 {\n\t\t\tstart_mark = yaml_mark_t{parser.mark.index + peek, line, column}\n\t\t} else {\n\t\t\ttext = append(text, '\\n')\n\t\t}\n\n\t\trecent_empty = false\n\n\t\t// Consume until after the consumed comment line.\n\t\tseen := parser.mark.index+peek\n\t\tfor {\n\t\t\tif parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tif is_breakz(parser.buffer, parser.buffer_pos) {\n\t\t\t\tif parser.mark.index >= seen {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tif parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) {\n\t\t\t\t\treturn false\n\t\t\t\t}\n\t\t\t\tskip_line(parser)\n\t\t\t} else if parser.mark.index >= seen {\n\t\t\t\ttext = read(parser, text)\n\t\t\t} else {\n\t\t\t\tskip(parser)\n\t\t\t}\n\t\t}\n\n\t\tpeek = 0\n\t\tcolumn = 0\n\t\tline = parser.mark.line\n\t\tnext_indent = parser.indent\n\t\tif next_indent < 0 {\n\t\t\tnext_indent = 0\n\t\t}\n\t}\n\n\tif len(text) > 0 {\n\t\tparser.comments = append(parser.comments, yaml_comment_t{\n\t\t\tscan_mark:  scan_mark,\n\t\t\ttoken_mark: start_mark,\n\t\t\tstart_mark: start_mark,\n\t\t\tend_mark:   yaml_mark_t{parser.mark.index + peek - 1, line, column},\n\t\t\thead:       text,\n\t\t})\n\t}\n\treturn true\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/sorter.go",
    "content": "//\n// Copyright (c) 2011-2019 Canonical Ltd\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage yaml\n\nimport (\n\t\"reflect\"\n\t\"unicode\"\n)\n\ntype keyList []reflect.Value\n\nfunc (l keyList) Len() int      { return len(l) }\nfunc (l keyList) Swap(i, j int) { l[i], l[j] = l[j], l[i] }\nfunc (l keyList) Less(i, j int) bool {\n\ta := l[i]\n\tb := l[j]\n\tak := a.Kind()\n\tbk := b.Kind()\n\tfor (ak == reflect.Interface || ak == reflect.Ptr) && !a.IsNil() {\n\t\ta = a.Elem()\n\t\tak = a.Kind()\n\t}\n\tfor (bk == reflect.Interface || bk == reflect.Ptr) && !b.IsNil() {\n\t\tb = b.Elem()\n\t\tbk = b.Kind()\n\t}\n\taf, aok := keyFloat(a)\n\tbf, bok := keyFloat(b)\n\tif aok && bok {\n\t\tif af != bf {\n\t\t\treturn af < bf\n\t\t}\n\t\tif ak != bk {\n\t\t\treturn ak < bk\n\t\t}\n\t\treturn numLess(a, b)\n\t}\n\tif ak != reflect.String || bk != reflect.String {\n\t\treturn ak < bk\n\t}\n\tar, br := []rune(a.String()), []rune(b.String())\n\tdigits := false\n\tfor i := 0; i < len(ar) && i < len(br); i++ {\n\t\tif ar[i] == br[i] {\n\t\t\tdigits = unicode.IsDigit(ar[i])\n\t\t\tcontinue\n\t\t}\n\t\tal := unicode.IsLetter(ar[i])\n\t\tbl := unicode.IsLetter(br[i])\n\t\tif al && bl {\n\t\t\treturn ar[i] < br[i]\n\t\t}\n\t\tif al || bl {\n\t\t\tif digits {\n\t\t\t\treturn al\n\t\t\t} else {\n\t\t\t\treturn bl\n\t\t\t}\n\t\t}\n\t\tvar ai, bi int\n\t\tvar an, bn int64\n\t\tif ar[i] == '0' || br[i] == '0' {\n\t\t\tfor j := i - 1; j >= 0 && 
unicode.IsDigit(ar[j]); j-- {\n\t\t\t\tif ar[j] != '0' {\n\t\t\t\t\tan = 1\n\t\t\t\t\tbn = 1\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tfor ai = i; ai < len(ar) && unicode.IsDigit(ar[ai]); ai++ {\n\t\t\tan = an*10 + int64(ar[ai]-'0')\n\t\t}\n\t\tfor bi = i; bi < len(br) && unicode.IsDigit(br[bi]); bi++ {\n\t\t\tbn = bn*10 + int64(br[bi]-'0')\n\t\t}\n\t\tif an != bn {\n\t\t\treturn an < bn\n\t\t}\n\t\tif ai != bi {\n\t\t\treturn ai < bi\n\t\t}\n\t\treturn ar[i] < br[i]\n\t}\n\treturn len(ar) < len(br)\n}\n\n// keyFloat returns a float value for v if it is a number/bool\n// and whether it is a number/bool or not.\nfunc keyFloat(v reflect.Value) (f float64, ok bool) {\n\tswitch v.Kind() {\n\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\treturn float64(v.Int()), true\n\tcase reflect.Float32, reflect.Float64:\n\t\treturn v.Float(), true\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:\n\t\treturn float64(v.Uint()), true\n\tcase reflect.Bool:\n\t\tif v.Bool() {\n\t\t\treturn 1, true\n\t\t}\n\t\treturn 0, true\n\t}\n\treturn 0, false\n}\n\n// numLess returns whether a < b.\n// a and b must necessarily have the same kind.\nfunc numLess(a, b reflect.Value) bool {\n\tswitch a.Kind() {\n\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\treturn a.Int() < b.Int()\n\tcase reflect.Float32, reflect.Float64:\n\t\treturn a.Float() < b.Float()\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:\n\t\treturn a.Uint() < b.Uint()\n\tcase reflect.Bool:\n\t\treturn !a.Bool() && b.Bool()\n\t}\n\tpanic(\"not a number\")\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/writerc.go",
    "content": "// \n// Copyright (c) 2011-2019 Canonical Ltd\n// Copyright (c) 2006-2010 Kirill Simonov\n// \n// Permission is hereby granted, free of charge, to any person obtaining a copy of\n// this software and associated documentation files (the \"Software\"), to deal in\n// the Software without restriction, including without limitation the rights to\n// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\n// of the Software, and to permit persons to whom the Software is furnished to do\n// so, subject to the following conditions:\n// \n// The above copyright notice and this permission notice shall be included in all\n// copies or substantial portions of the Software.\n// \n// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n// SOFTWARE.\n\npackage yaml\n\n// Set the writer error and return false.\nfunc yaml_emitter_set_writer_error(emitter *yaml_emitter_t, problem string) bool {\n\temitter.error = yaml_WRITER_ERROR\n\temitter.problem = problem\n\treturn false\n}\n\n// Flush the output buffer.\nfunc yaml_emitter_flush(emitter *yaml_emitter_t) bool {\n\tif emitter.write_handler == nil {\n\t\tpanic(\"write handler not set\")\n\t}\n\n\t// Check if the buffer is empty.\n\tif emitter.buffer_pos == 0 {\n\t\treturn true\n\t}\n\n\tif err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil {\n\t\treturn yaml_emitter_set_writer_error(emitter, \"write error: \"+err.Error())\n\t}\n\temitter.buffer_pos = 0\n\treturn true\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/yaml.go",
    "content": "//\n// Copyright (c) 2011-2019 Canonical Ltd\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package yaml implements YAML support for the Go language.\n//\n// Source code and other details for the project are available at GitHub:\n//\n//   https://github.com/go-yaml/yaml\n//\npackage yaml\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"reflect\"\n\t\"strings\"\n\t\"sync\"\n\t\"unicode/utf8\"\n)\n\n// The Unmarshaler interface may be implemented by types to customize their\n// behavior when being unmarshaled from a YAML document.\ntype Unmarshaler interface {\n\tUnmarshalYAML(value *Node) error\n}\n\ntype obsoleteUnmarshaler interface {\n\tUnmarshalYAML(unmarshal func(interface{}) error) error\n}\n\n// The Marshaler interface may be implemented by types to customize their\n// behavior when being marshaled into a YAML document. The returned value\n// is marshaled in place of the original value implementing Marshaler.\n//\n// If an error is returned by MarshalYAML, the marshaling procedure stops\n// and returns with the provided error.\ntype Marshaler interface {\n\tMarshalYAML() (interface{}, error)\n}\n\n// Unmarshal decodes the first document found within the in byte slice\n// and assigns decoded values into the out value.\n//\n// Maps and pointers (to a struct, string, int, etc) are accepted as out\n// values. 
If an internal pointer within a struct is not initialized,\n// the yaml package will initialize it if necessary for unmarshalling\n// the provided data. The out parameter must not be nil.\n//\n// The type of the decoded values should be compatible with the respective\n// values in out. If one or more values cannot be decoded due to type\n// mismatches, decoding continues partially until the end of the YAML\n// content, and a *yaml.TypeError is returned with details for all\n// missed values.\n//\n// Struct fields are only unmarshalled if they are exported (have an\n// upper case first letter), and are unmarshalled using the field name\n// lowercased as the default key. Custom keys may be defined via the\n// \"yaml\" name in the field tag: the content preceding the first comma\n// is used as the key, and the following comma-separated options are\n// used to tweak the marshalling process (see Marshal).\n// Conflicting names result in a runtime error.\n//\n// For example:\n//\n//     type T struct {\n//         F int `yaml:\"a,omitempty\"`\n//         B int\n//     }\n//     var t T\n//     yaml.Unmarshal([]byte(\"a: 1\\nb: 2\"), &t)\n//\n// See the documentation of Marshal for the format of tags and a list of\n// supported tag options.\n//\nfunc Unmarshal(in []byte, out interface{}) (err error) {\n\treturn unmarshal(in, out, false)\n}\n\n// A Decoder reads and decodes YAML values from an input stream.\ntype Decoder struct {\n\tparser      *parser\n\tknownFields bool\n}\n\n// NewDecoder returns a new decoder that reads from r.\n//\n// The decoder introduces its own buffering and may read\n// data from r beyond the YAML values requested.\nfunc NewDecoder(r io.Reader) *Decoder {\n\treturn &Decoder{\n\t\tparser: newParserFromReader(r),\n\t}\n}\n\n// KnownFields ensures that the keys in decoded mappings\n// exist as fields in the struct being decoded into.\nfunc (dec *Decoder) KnownFields(enable bool) {\n\tdec.knownFields = enable\n}\n\n// Decode reads the next 
YAML-encoded value from its input\n// and stores it in the value pointed to by v.\n//\n// See the documentation for Unmarshal for details about the\n// conversion of YAML into a Go value.\nfunc (dec *Decoder) Decode(v interface{}) (err error) {\n\td := newDecoder()\n\td.knownFields = dec.knownFields\n\tdefer handleErr(&err)\n\tnode := dec.parser.parse()\n\tif node == nil {\n\t\treturn io.EOF\n\t}\n\tout := reflect.ValueOf(v)\n\tif out.Kind() == reflect.Ptr && !out.IsNil() {\n\t\tout = out.Elem()\n\t}\n\td.unmarshal(node, out)\n\tif len(d.terrors) > 0 {\n\t\treturn &TypeError{d.terrors}\n\t}\n\treturn nil\n}\n\n// Decode decodes the node and stores its data into the value pointed to by v.\n//\n// See the documentation for Unmarshal for details about the\n// conversion of YAML into a Go value.\nfunc (n *Node) Decode(v interface{}) (err error) {\n\td := newDecoder()\n\tdefer handleErr(&err)\n\tout := reflect.ValueOf(v)\n\tif out.Kind() == reflect.Ptr && !out.IsNil() {\n\t\tout = out.Elem()\n\t}\n\td.unmarshal(n, out)\n\tif len(d.terrors) > 0 {\n\t\treturn &TypeError{d.terrors}\n\t}\n\treturn nil\n}\n\nfunc unmarshal(in []byte, out interface{}, strict bool) (err error) {\n\tdefer handleErr(&err)\n\td := newDecoder()\n\tp := newParser(in)\n\tdefer p.destroy()\n\tnode := p.parse()\n\tif node != nil {\n\t\tv := reflect.ValueOf(out)\n\t\tif v.Kind() == reflect.Ptr && !v.IsNil() {\n\t\t\tv = v.Elem()\n\t\t}\n\t\td.unmarshal(node, v)\n\t}\n\tif len(d.terrors) > 0 {\n\t\treturn &TypeError{d.terrors}\n\t}\n\treturn nil\n}\n\n// Marshal serializes the value provided into a YAML document. The structure\n// of the generated document will reflect the structure of the value itself.\n// Maps and pointers (to struct, string, int, etc) are accepted as the in value.\n//\n// Struct fields are only marshalled if they are exported (have an upper case\n// first letter), and are marshalled using the field name lowercased as the\n// default key. 
Custom keys may be defined via the \"yaml\" name in the field\n// tag: the content preceding the first comma is used as the key, and the\n// following comma-separated options are used to tweak the marshalling process.\n// Conflicting names result in a runtime error.\n//\n// The field tag format accepted is:\n//\n//     `(...) yaml:\"[<key>][,<flag1>[,<flag2>]]\" (...)`\n//\n// The following flags are currently supported:\n//\n//     omitempty    Only include the field if it's not set to the zero\n//                  value for the type or to empty slices or maps.\n//                  Zero valued structs will be omitted if all their public\n//                  fields are zero, unless they implement an IsZero\n//                  method (see the IsZeroer interface type), in which\n//                  case the field will be excluded if IsZero returns true.\n//\n//     flow         Marshal using a flow style (useful for structs,\n//                  sequences and maps).\n//\n//     inline       Inline the field, which must be a struct or a map,\n//                  causing all of its fields or keys to be processed as if\n//                  they were part of the outer struct. 
For maps, keys must\n//                  not conflict with the yaml keys of other struct fields.\n//\n// In addition, if the key is \"-\", the field is ignored.\n//\n// For example:\n//\n//     type T struct {\n//         F int `yaml:\"a,omitempty\"`\n//         B int\n//     }\n//     yaml.Marshal(&T{B: 2}) // Returns \"b: 2\\n\"\n//     yaml.Marshal(&T{F: 1}) // Returns \"a: 1\\nb: 0\\n\"\n//\nfunc Marshal(in interface{}) (out []byte, err error) {\n\tdefer handleErr(&err)\n\te := newEncoder()\n\tdefer e.destroy()\n\te.marshalDoc(\"\", reflect.ValueOf(in))\n\te.finish()\n\tout = e.out\n\treturn\n}\n\n// An Encoder writes YAML values to an output stream.\ntype Encoder struct {\n\tencoder *encoder\n}\n\n// NewEncoder returns a new encoder that writes to w.\n// The Encoder should be closed after use to flush all data\n// to w.\nfunc NewEncoder(w io.Writer) *Encoder {\n\treturn &Encoder{\n\t\tencoder: newEncoderWithWriter(w),\n\t}\n}\n\n// Encode writes the YAML encoding of v to the stream.\n// If multiple items are encoded to the stream, the\n// second and subsequent documents will be preceded\n// with a \"---\" document separator, but the first will not.\n//\n// See the documentation for Marshal for details about the conversion of Go\n// values to YAML.\nfunc (e *Encoder) Encode(v interface{}) (err error) {\n\tdefer handleErr(&err)\n\te.encoder.marshalDoc(\"\", reflect.ValueOf(v))\n\treturn nil\n}\n\n// Encode encodes value v and stores its representation in n.\n//\n// See the documentation for Marshal for details about the\n// conversion of Go values into YAML.\nfunc (n *Node) Encode(v interface{}) (err error) {\n\tdefer handleErr(&err)\n\te := newEncoder()\n\tdefer e.destroy()\n\te.marshalDoc(\"\", reflect.ValueOf(v))\n\te.finish()\n\tp := newParser(e.out)\n\tp.textless = true\n\tdefer p.destroy()\n\tdoc := p.parse()\n\t*n = *doc.Content[0]\n\treturn nil\n}\n\n// SetIndent changes the indentation used when encoding.\nfunc (e *Encoder) SetIndent(spaces int) 
{\n\tif spaces < 0 {\n\t\tpanic(\"yaml: cannot indent to a negative number of spaces\")\n\t}\n\te.encoder.indent = spaces\n}\n\n// Close closes the encoder by writing any remaining data.\n// It does not write a stream terminating string \"...\".\nfunc (e *Encoder) Close() (err error) {\n\tdefer handleErr(&err)\n\te.encoder.finish()\n\treturn nil\n}\n\nfunc handleErr(err *error) {\n\tif v := recover(); v != nil {\n\t\tif e, ok := v.(yamlError); ok {\n\t\t\t*err = e.err\n\t\t} else {\n\t\t\tpanic(v)\n\t\t}\n\t}\n}\n\ntype yamlError struct {\n\terr error\n}\n\nfunc fail(err error) {\n\tpanic(yamlError{err})\n}\n\nfunc failf(format string, args ...interface{}) {\n\tpanic(yamlError{fmt.Errorf(\"yaml: \"+format, args...)})\n}\n\n// A TypeError is returned by Unmarshal when one or more fields in\n// the YAML document cannot be properly decoded into the requested\n// types. When this error is returned, the value is still\n// unmarshaled partially.\ntype TypeError struct {\n\tErrors []string\n}\n\nfunc (e *TypeError) Error() string {\n\treturn fmt.Sprintf(\"yaml: unmarshal errors:\\n  %s\", strings.Join(e.Errors, \"\\n  \"))\n}\n\ntype Kind uint32\n\nconst (\n\tDocumentNode Kind = 1 << iota\n\tSequenceNode\n\tMappingNode\n\tScalarNode\n\tAliasNode\n)\n\ntype Style uint32\n\nconst (\n\tTaggedStyle Style = 1 << iota\n\tDoubleQuotedStyle\n\tSingleQuotedStyle\n\tLiteralStyle\n\tFoldedStyle\n\tFlowStyle\n)\n\n// Node represents an element in the YAML document hierarchy. While documents\n// are typically encoded and decoded into higher level types, such as structs\n// and maps, Node is an intermediate representation that allows detailed\n// control over the content being decoded or encoded.\n//\n// It's worth noting that although Node offers access into details such as\n// line numbers, columns, and comments, the content when re-encoded will not\n// have its original textual representation preserved. 
An effort is made to\n// render the data pleasantly, and to preserve comments near the data they\n// describe, though.\n//\n// Values that make use of the Node type interact with the yaml package in the\n// same way any other type would do, by encoding and decoding yaml data\n// directly or indirectly into them.\n//\n// For example:\n//\n//     var person struct {\n//             Name    string\n//             Address yaml.Node\n//     }\n//     err := yaml.Unmarshal(data, &person)\n// \n// Or by itself:\n//\n//     var person Node\n//     err := yaml.Unmarshal(data, &person)\n//\ntype Node struct {\n\t// Kind defines whether the node is a document, a mapping, a sequence,\n\t// a scalar value, or an alias to another node. The specific data type of\n\t// scalar nodes may be obtained via the ShortTag and LongTag methods.\n\tKind  Kind\n\n\t// Style allows customizing the appearance of the node in the tree.\n\tStyle Style\n\n\t// Tag holds the YAML tag defining the data type for the value.\n\t// When decoding, this field will always be set to the resolved tag,\n\t// even when it wasn't explicitly provided in the YAML content.\n\t// When encoding, if this field is unset the value type will be\n\t// implied from the node properties, and if it is set, it will only\n\t// be serialized into the representation if TaggedStyle is used or\n\t// the implicit tag diverges from the provided one.\n\tTag string\n\n\t// Value holds the unescaped and unquoted representation of the value.\n\tValue string\n\n\t// Anchor holds the anchor name for this node, which allows aliases to point to it.\n\tAnchor string\n\n\t// Alias holds the node that this alias points to. 
Only valid when Kind is AliasNode.\n\tAlias *Node\n\n\t// Content holds contained nodes for documents, mappings, and sequences.\n\tContent []*Node\n\n\t// HeadComment holds any comments in the lines preceding the node and\n\t// not separated by an empty line.\n\tHeadComment string\n\n\t// LineComment holds any comments at the end of the line where the node is in.\n\tLineComment string\n\n\t// FootComment holds any comments following the node and before empty lines.\n\tFootComment string\n\n\t// Line and Column hold the node position in the decoded YAML text.\n\t// These fields are not respected when encoding the node.\n\tLine   int\n\tColumn int\n}\n\n// IsZero returns whether the node has all of its fields unset.\nfunc (n *Node) IsZero() bool {\n\treturn n.Kind == 0 && n.Style == 0 && n.Tag == \"\" && n.Value == \"\" && n.Anchor == \"\" && n.Alias == nil && n.Content == nil &&\n\t\tn.HeadComment == \"\" && n.LineComment == \"\" && n.FootComment == \"\" && n.Line == 0 && n.Column == 0\n}\n\n\n// LongTag returns the long form of the tag that indicates the data type for\n// the node. If the Tag field isn't explicitly defined, one will be computed\n// based on the node properties.\nfunc (n *Node) LongTag() string {\n\treturn longTag(n.ShortTag())\n}\n\n// ShortTag returns the short form of the YAML tag that indicates data type for\n// the node. 
If the Tag field isn't explicitly defined, one will be computed\n// based on the node properties.\nfunc (n *Node) ShortTag() string {\n\tif n.indicatedString() {\n\t\treturn strTag\n\t}\n\tif n.Tag == \"\" || n.Tag == \"!\" {\n\t\tswitch n.Kind {\n\t\tcase MappingNode:\n\t\t\treturn mapTag\n\t\tcase SequenceNode:\n\t\t\treturn seqTag\n\t\tcase AliasNode:\n\t\t\tif n.Alias != nil {\n\t\t\t\treturn n.Alias.ShortTag()\n\t\t\t}\n\t\tcase ScalarNode:\n\t\t\ttag, _ := resolve(\"\", n.Value)\n\t\t\treturn tag\n\t\tcase 0:\n\t\t\t// Special case to make the zero value convenient.\n\t\t\tif n.IsZero() {\n\t\t\t\treturn nullTag\n\t\t\t}\n\t\t}\n\t\treturn \"\"\n\t}\n\treturn shortTag(n.Tag)\n}\n\nfunc (n *Node) indicatedString() bool {\n\treturn n.Kind == ScalarNode &&\n\t\t(shortTag(n.Tag) == strTag ||\n\t\t\t(n.Tag == \"\" || n.Tag == \"!\") && n.Style&(SingleQuotedStyle|DoubleQuotedStyle|LiteralStyle|FoldedStyle) != 0)\n}\n\n// SetString is a convenience function that sets the node to a string value\n// and defines its style in a pleasant way depending on its content.\nfunc (n *Node) SetString(s string) {\n\tn.Kind = ScalarNode\n\tif utf8.ValidString(s) {\n\t\tn.Value = s\n\t\tn.Tag = strTag\n\t} else {\n\t\tn.Value = encodeBase64(s)\n\t\tn.Tag = binaryTag\n\t}\n\tif strings.Contains(n.Value, \"\\n\") {\n\t\tn.Style = LiteralStyle\n\t}\n}\n\n// --------------------------------------------------------------------------\n// Maintain a mapping of keys to structure field indexes\n\n// The code in this section was copied from mgo/bson.\n\n// structInfo holds details for the serialization of fields of\n// a given struct.\ntype structInfo struct {\n\tFieldsMap  map[string]fieldInfo\n\tFieldsList []fieldInfo\n\n\t// InlineMap is the number of the field in the struct that\n\t// contains an ,inline map, or -1 if there's none.\n\tInlineMap int\n\n\t// InlineUnmarshalers holds indexes to inlined fields that\n\t// contain unmarshaler values.\n\tInlineUnmarshalers [][]int\n}\n\ntype 
fieldInfo struct {\n\tKey       string\n\tNum       int\n\tOmitEmpty bool\n\tFlow      bool\n\t// Id holds the unique field identifier, so we can cheaply\n\t// check for field duplicates without maintaining an extra map.\n\tId int\n\n\t// Inline holds the field index if the field is part of an inlined struct.\n\tInline []int\n}\n\nvar structMap = make(map[reflect.Type]*structInfo)\nvar fieldMapMutex sync.RWMutex\nvar unmarshalerType reflect.Type\n\nfunc init() {\n\tvar v Unmarshaler\n\tunmarshalerType = reflect.ValueOf(&v).Elem().Type()\n}\n\nfunc getStructInfo(st reflect.Type) (*structInfo, error) {\n\tfieldMapMutex.RLock()\n\tsinfo, found := structMap[st]\n\tfieldMapMutex.RUnlock()\n\tif found {\n\t\treturn sinfo, nil\n\t}\n\n\tn := st.NumField()\n\tfieldsMap := make(map[string]fieldInfo)\n\tfieldsList := make([]fieldInfo, 0, n)\n\tinlineMap := -1\n\tinlineUnmarshalers := [][]int(nil)\n\tfor i := 0; i != n; i++ {\n\t\tfield := st.Field(i)\n\t\tif field.PkgPath != \"\" && !field.Anonymous {\n\t\t\tcontinue // Private field\n\t\t}\n\n\t\tinfo := fieldInfo{Num: i}\n\n\t\ttag := field.Tag.Get(\"yaml\")\n\t\tif tag == \"\" && strings.Index(string(field.Tag), \":\") < 0 {\n\t\t\ttag = string(field.Tag)\n\t\t}\n\t\tif tag == \"-\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tinline := false\n\t\tfields := strings.Split(tag, \",\")\n\t\tif len(fields) > 1 {\n\t\t\tfor _, flag := range fields[1:] {\n\t\t\t\tswitch flag {\n\t\t\t\tcase \"omitempty\":\n\t\t\t\t\tinfo.OmitEmpty = true\n\t\t\t\tcase \"flow\":\n\t\t\t\t\tinfo.Flow = true\n\t\t\t\tcase \"inline\":\n\t\t\t\t\tinline = true\n\t\t\t\tdefault:\n\t\t\t\t\treturn nil, errors.New(fmt.Sprintf(\"unsupported flag %q in tag %q of type %s\", flag, tag, st))\n\t\t\t\t}\n\t\t\t}\n\t\t\ttag = fields[0]\n\t\t}\n\n\t\tif inline {\n\t\t\tswitch field.Type.Kind() {\n\t\t\tcase reflect.Map:\n\t\t\t\tif inlineMap >= 0 {\n\t\t\t\t\treturn nil, errors.New(\"multiple ,inline maps in struct \" + st.String())\n\t\t\t\t}\n\t\t\t\tif field.Type.Key() 
!= reflect.TypeOf(\"\") {\n\t\t\t\t\treturn nil, errors.New(\"option ,inline needs a map with string keys in struct \" + st.String())\n\t\t\t\t}\n\t\t\t\tinlineMap = info.Num\n\t\t\tcase reflect.Struct, reflect.Ptr:\n\t\t\t\tftype := field.Type\n\t\t\t\tfor ftype.Kind() == reflect.Ptr {\n\t\t\t\t\tftype = ftype.Elem()\n\t\t\t\t}\n\t\t\t\tif ftype.Kind() != reflect.Struct {\n\t\t\t\t\treturn nil, errors.New(\"option ,inline may only be used on a struct or map field\")\n\t\t\t\t}\n\t\t\t\tif reflect.PtrTo(ftype).Implements(unmarshalerType) {\n\t\t\t\t\tinlineUnmarshalers = append(inlineUnmarshalers, []int{i})\n\t\t\t\t} else {\n\t\t\t\t\tsinfo, err := getStructInfo(ftype)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn nil, err\n\t\t\t\t\t}\n\t\t\t\t\tfor _, index := range sinfo.InlineUnmarshalers {\n\t\t\t\t\t\tinlineUnmarshalers = append(inlineUnmarshalers, append([]int{i}, index...))\n\t\t\t\t\t}\n\t\t\t\t\tfor _, finfo := range sinfo.FieldsList {\n\t\t\t\t\t\tif _, found := fieldsMap[finfo.Key]; found {\n\t\t\t\t\t\t\tmsg := \"duplicated key '\" + finfo.Key + \"' in struct \" + st.String()\n\t\t\t\t\t\t\treturn nil, errors.New(msg)\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif finfo.Inline == nil {\n\t\t\t\t\t\t\tfinfo.Inline = []int{i, finfo.Num}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tfinfo.Inline = append([]int{i}, finfo.Inline...)\n\t\t\t\t\t\t}\n\t\t\t\t\t\tfinfo.Id = len(fieldsList)\n\t\t\t\t\t\tfieldsMap[finfo.Key] = finfo\n\t\t\t\t\t\tfieldsList = append(fieldsList, finfo)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\treturn nil, errors.New(\"option ,inline may only be used on a struct or map field\")\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\tif tag != \"\" {\n\t\t\tinfo.Key = tag\n\t\t} else {\n\t\t\tinfo.Key = strings.ToLower(field.Name)\n\t\t}\n\n\t\tif _, found = fieldsMap[info.Key]; found {\n\t\t\tmsg := \"duplicated key '\" + info.Key + \"' in struct \" + st.String()\n\t\t\treturn nil, errors.New(msg)\n\t\t}\n\n\t\tinfo.Id = len(fieldsList)\n\t\tfieldsList = 
append(fieldsList, info)\n\t\tfieldsMap[info.Key] = info\n\t}\n\n\tsinfo = &structInfo{\n\t\tFieldsMap:          fieldsMap,\n\t\tFieldsList:         fieldsList,\n\t\tInlineMap:          inlineMap,\n\t\tInlineUnmarshalers: inlineUnmarshalers,\n\t}\n\n\tfieldMapMutex.Lock()\n\tstructMap[st] = sinfo\n\tfieldMapMutex.Unlock()\n\treturn sinfo, nil\n}\n\n// IsZeroer is used to check whether an object is zero to\n// determine whether it should be omitted when marshaling\n// with the omitempty flag. One notable implementation\n// is time.Time.\ntype IsZeroer interface {\n\tIsZero() bool\n}\n\nfunc isZero(v reflect.Value) bool {\n\tkind := v.Kind()\n\tif z, ok := v.Interface().(IsZeroer); ok {\n\t\tif (kind == reflect.Ptr || kind == reflect.Interface) && v.IsNil() {\n\t\t\treturn true\n\t\t}\n\t\treturn z.IsZero()\n\t}\n\tswitch kind {\n\tcase reflect.String:\n\t\treturn len(v.String()) == 0\n\tcase reflect.Interface, reflect.Ptr:\n\t\treturn v.IsNil()\n\tcase reflect.Slice:\n\t\treturn v.Len() == 0\n\tcase reflect.Map:\n\t\treturn v.Len() == 0\n\tcase reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:\n\t\treturn v.Int() == 0\n\tcase reflect.Float32, reflect.Float64:\n\t\treturn v.Float() == 0\n\tcase reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:\n\t\treturn v.Uint() == 0\n\tcase reflect.Bool:\n\t\treturn !v.Bool()\n\tcase reflect.Struct:\n\t\tvt := v.Type()\n\t\tfor i := v.NumField() - 1; i >= 0; i-- {\n\t\t\tif vt.Field(i).PkgPath != \"\" {\n\t\t\t\tcontinue // Private field\n\t\t\t}\n\t\t\tif !isZero(v.Field(i)) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\treturn true\n\t}\n\treturn false\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/yamlh.go",
    "content": "//\n// Copyright (c) 2011-2019 Canonical Ltd\n// Copyright (c) 2006-2010 Kirill Simonov\n//\n// Permission is hereby granted, free of charge, to any person obtaining a copy of\n// this software and associated documentation files (the \"Software\"), to deal in\n// the Software without restriction, including without limitation the rights to\n// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\n// of the Software, and to permit persons to whom the Software is furnished to do\n// so, subject to the following conditions:\n//\n// The above copyright notice and this permission notice shall be included in all\n// copies or substantial portions of the Software.\n//\n// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n// SOFTWARE.\n\npackage yaml\n\nimport (\n\t\"fmt\"\n\t\"io\"\n)\n\n// The version directive data.\ntype yaml_version_directive_t struct {\n\tmajor int8 // The major version number.\n\tminor int8 // The minor version number.\n}\n\n// The tag directive data.\ntype yaml_tag_directive_t struct {\n\thandle []byte // The tag handle.\n\tprefix []byte // The tag prefix.\n}\n\ntype yaml_encoding_t int\n\n// The stream encoding.\nconst (\n\t// Let the parser choose the encoding.\n\tyaml_ANY_ENCODING yaml_encoding_t = iota\n\n\tyaml_UTF8_ENCODING    // The default UTF-8 encoding.\n\tyaml_UTF16LE_ENCODING // The UTF-16-LE encoding with BOM.\n\tyaml_UTF16BE_ENCODING // The UTF-16-BE encoding with BOM.\n)\n\ntype yaml_break_t int\n\n// Line break types.\nconst (\n\t// Let the parser choose the break 
type.\n\tyaml_ANY_BREAK yaml_break_t = iota\n\n\tyaml_CR_BREAK   // Use CR for line breaks (Mac style).\n\tyaml_LN_BREAK   // Use LN for line breaks (Unix style).\n\tyaml_CRLN_BREAK // Use CR LN for line breaks (DOS style).\n)\n\ntype yaml_error_type_t int\n\n// Many bad things could happen with the parser and emitter.\nconst (\n\t// No error is produced.\n\tyaml_NO_ERROR yaml_error_type_t = iota\n\n\tyaml_MEMORY_ERROR   // Cannot allocate or reallocate a block of memory.\n\tyaml_READER_ERROR   // Cannot read or decode the input stream.\n\tyaml_SCANNER_ERROR  // Cannot scan the input stream.\n\tyaml_PARSER_ERROR   // Cannot parse the input stream.\n\tyaml_COMPOSER_ERROR // Cannot compose a YAML document.\n\tyaml_WRITER_ERROR   // Cannot write to the output stream.\n\tyaml_EMITTER_ERROR  // Cannot emit a YAML stream.\n)\n\n// The pointer position.\ntype yaml_mark_t struct {\n\tindex  int // The position index.\n\tline   int // The position line.\n\tcolumn int // The position column.\n}\n\n// Node Styles\n\ntype yaml_style_t int8\n\ntype yaml_scalar_style_t yaml_style_t\n\n// Scalar styles.\nconst (\n\t// Let the emitter choose the style.\n\tyaml_ANY_SCALAR_STYLE yaml_scalar_style_t = 0\n\n\tyaml_PLAIN_SCALAR_STYLE         yaml_scalar_style_t = 1 << iota // The plain scalar style.\n\tyaml_SINGLE_QUOTED_SCALAR_STYLE                                 // The single-quoted scalar style.\n\tyaml_DOUBLE_QUOTED_SCALAR_STYLE                                 // The double-quoted scalar style.\n\tyaml_LITERAL_SCALAR_STYLE                                       // The literal scalar style.\n\tyaml_FOLDED_SCALAR_STYLE                                        // The folded scalar style.\n)\n\ntype yaml_sequence_style_t yaml_style_t\n\n// Sequence styles.\nconst (\n\t// Let the emitter choose the style.\n\tyaml_ANY_SEQUENCE_STYLE yaml_sequence_style_t = iota\n\n\tyaml_BLOCK_SEQUENCE_STYLE // The block sequence style.\n\tyaml_FLOW_SEQUENCE_STYLE  // The flow sequence style.\n)\n\ntype 
yaml_mapping_style_t yaml_style_t\n\n// Mapping styles.\nconst (\n\t// Let the emitter choose the style.\n\tyaml_ANY_MAPPING_STYLE yaml_mapping_style_t = iota\n\n\tyaml_BLOCK_MAPPING_STYLE // The block mapping style.\n\tyaml_FLOW_MAPPING_STYLE  // The flow mapping style.\n)\n\n// Tokens\n\ntype yaml_token_type_t int\n\n// Token types.\nconst (\n\t// An empty token.\n\tyaml_NO_TOKEN yaml_token_type_t = iota\n\n\tyaml_STREAM_START_TOKEN // A STREAM-START token.\n\tyaml_STREAM_END_TOKEN   // A STREAM-END token.\n\n\tyaml_VERSION_DIRECTIVE_TOKEN // A VERSION-DIRECTIVE token.\n\tyaml_TAG_DIRECTIVE_TOKEN     // A TAG-DIRECTIVE token.\n\tyaml_DOCUMENT_START_TOKEN    // A DOCUMENT-START token.\n\tyaml_DOCUMENT_END_TOKEN      // A DOCUMENT-END token.\n\n\tyaml_BLOCK_SEQUENCE_START_TOKEN // A BLOCK-SEQUENCE-START token.\n\tyaml_BLOCK_MAPPING_START_TOKEN  // A BLOCK-SEQUENCE-END token.\n\tyaml_BLOCK_END_TOKEN            // A BLOCK-END token.\n\n\tyaml_FLOW_SEQUENCE_START_TOKEN // A FLOW-SEQUENCE-START token.\n\tyaml_FLOW_SEQUENCE_END_TOKEN   // A FLOW-SEQUENCE-END token.\n\tyaml_FLOW_MAPPING_START_TOKEN  // A FLOW-MAPPING-START token.\n\tyaml_FLOW_MAPPING_END_TOKEN    // A FLOW-MAPPING-END token.\n\n\tyaml_BLOCK_ENTRY_TOKEN // A BLOCK-ENTRY token.\n\tyaml_FLOW_ENTRY_TOKEN  // A FLOW-ENTRY token.\n\tyaml_KEY_TOKEN         // A KEY token.\n\tyaml_VALUE_TOKEN       // A VALUE token.\n\n\tyaml_ALIAS_TOKEN  // An ALIAS token.\n\tyaml_ANCHOR_TOKEN // An ANCHOR token.\n\tyaml_TAG_TOKEN    // A TAG token.\n\tyaml_SCALAR_TOKEN // A SCALAR token.\n)\n\nfunc (tt yaml_token_type_t) String() string {\n\tswitch tt {\n\tcase yaml_NO_TOKEN:\n\t\treturn \"yaml_NO_TOKEN\"\n\tcase yaml_STREAM_START_TOKEN:\n\t\treturn \"yaml_STREAM_START_TOKEN\"\n\tcase yaml_STREAM_END_TOKEN:\n\t\treturn \"yaml_STREAM_END_TOKEN\"\n\tcase yaml_VERSION_DIRECTIVE_TOKEN:\n\t\treturn \"yaml_VERSION_DIRECTIVE_TOKEN\"\n\tcase yaml_TAG_DIRECTIVE_TOKEN:\n\t\treturn \"yaml_TAG_DIRECTIVE_TOKEN\"\n\tcase 
yaml_DOCUMENT_START_TOKEN:\n\t\treturn \"yaml_DOCUMENT_START_TOKEN\"\n\tcase yaml_DOCUMENT_END_TOKEN:\n\t\treturn \"yaml_DOCUMENT_END_TOKEN\"\n\tcase yaml_BLOCK_SEQUENCE_START_TOKEN:\n\t\treturn \"yaml_BLOCK_SEQUENCE_START_TOKEN\"\n\tcase yaml_BLOCK_MAPPING_START_TOKEN:\n\t\treturn \"yaml_BLOCK_MAPPING_START_TOKEN\"\n\tcase yaml_BLOCK_END_TOKEN:\n\t\treturn \"yaml_BLOCK_END_TOKEN\"\n\tcase yaml_FLOW_SEQUENCE_START_TOKEN:\n\t\treturn \"yaml_FLOW_SEQUENCE_START_TOKEN\"\n\tcase yaml_FLOW_SEQUENCE_END_TOKEN:\n\t\treturn \"yaml_FLOW_SEQUENCE_END_TOKEN\"\n\tcase yaml_FLOW_MAPPING_START_TOKEN:\n\t\treturn \"yaml_FLOW_MAPPING_START_TOKEN\"\n\tcase yaml_FLOW_MAPPING_END_TOKEN:\n\t\treturn \"yaml_FLOW_MAPPING_END_TOKEN\"\n\tcase yaml_BLOCK_ENTRY_TOKEN:\n\t\treturn \"yaml_BLOCK_ENTRY_TOKEN\"\n\tcase yaml_FLOW_ENTRY_TOKEN:\n\t\treturn \"yaml_FLOW_ENTRY_TOKEN\"\n\tcase yaml_KEY_TOKEN:\n\t\treturn \"yaml_KEY_TOKEN\"\n\tcase yaml_VALUE_TOKEN:\n\t\treturn \"yaml_VALUE_TOKEN\"\n\tcase yaml_ALIAS_TOKEN:\n\t\treturn \"yaml_ALIAS_TOKEN\"\n\tcase yaml_ANCHOR_TOKEN:\n\t\treturn \"yaml_ANCHOR_TOKEN\"\n\tcase yaml_TAG_TOKEN:\n\t\treturn \"yaml_TAG_TOKEN\"\n\tcase yaml_SCALAR_TOKEN:\n\t\treturn \"yaml_SCALAR_TOKEN\"\n\t}\n\treturn \"<unknown token>\"\n}\n\n// The token structure.\ntype yaml_token_t struct {\n\t// The token type.\n\ttyp yaml_token_type_t\n\n\t// The start/end of the token.\n\tstart_mark, end_mark yaml_mark_t\n\n\t// The stream encoding (for yaml_STREAM_START_TOKEN).\n\tencoding yaml_encoding_t\n\n\t// The alias/anchor/scalar value or tag/tag directive handle\n\t// (for yaml_ALIAS_TOKEN, yaml_ANCHOR_TOKEN, yaml_SCALAR_TOKEN, yaml_TAG_TOKEN, yaml_TAG_DIRECTIVE_TOKEN).\n\tvalue []byte\n\n\t// The tag suffix (for yaml_TAG_TOKEN).\n\tsuffix []byte\n\n\t// The tag directive prefix (for yaml_TAG_DIRECTIVE_TOKEN).\n\tprefix []byte\n\n\t// The scalar style (for yaml_SCALAR_TOKEN).\n\tstyle yaml_scalar_style_t\n\n\t// The version directive major/minor (for 
yaml_VERSION_DIRECTIVE_TOKEN).\n\tmajor, minor int8\n}\n\n// Events\n\ntype yaml_event_type_t int8\n\n// Event types.\nconst (\n\t// An empty event.\n\tyaml_NO_EVENT yaml_event_type_t = iota\n\n\tyaml_STREAM_START_EVENT   // A STREAM-START event.\n\tyaml_STREAM_END_EVENT     // A STREAM-END event.\n\tyaml_DOCUMENT_START_EVENT // A DOCUMENT-START event.\n\tyaml_DOCUMENT_END_EVENT   // A DOCUMENT-END event.\n\tyaml_ALIAS_EVENT          // An ALIAS event.\n\tyaml_SCALAR_EVENT         // A SCALAR event.\n\tyaml_SEQUENCE_START_EVENT // A SEQUENCE-START event.\n\tyaml_SEQUENCE_END_EVENT   // A SEQUENCE-END event.\n\tyaml_MAPPING_START_EVENT  // A MAPPING-START event.\n\tyaml_MAPPING_END_EVENT    // A MAPPING-END event.\n\tyaml_TAIL_COMMENT_EVENT\n)\n\nvar eventStrings = []string{\n\tyaml_NO_EVENT:             \"none\",\n\tyaml_STREAM_START_EVENT:   \"stream start\",\n\tyaml_STREAM_END_EVENT:     \"stream end\",\n\tyaml_DOCUMENT_START_EVENT: \"document start\",\n\tyaml_DOCUMENT_END_EVENT:   \"document end\",\n\tyaml_ALIAS_EVENT:          \"alias\",\n\tyaml_SCALAR_EVENT:         \"scalar\",\n\tyaml_SEQUENCE_START_EVENT: \"sequence start\",\n\tyaml_SEQUENCE_END_EVENT:   \"sequence end\",\n\tyaml_MAPPING_START_EVENT:  \"mapping start\",\n\tyaml_MAPPING_END_EVENT:    \"mapping end\",\n\tyaml_TAIL_COMMENT_EVENT:   \"tail comment\",\n}\n\nfunc (e yaml_event_type_t) String() string {\n\tif e < 0 || int(e) >= len(eventStrings) {\n\t\treturn fmt.Sprintf(\"unknown event %d\", e)\n\t}\n\treturn eventStrings[e]\n}\n\n// The event structure.\ntype yaml_event_t struct {\n\n\t// The event type.\n\ttyp yaml_event_type_t\n\n\t// The start and end of the event.\n\tstart_mark, end_mark yaml_mark_t\n\n\t// The document encoding (for yaml_STREAM_START_EVENT).\n\tencoding yaml_encoding_t\n\n\t// The version directive (for yaml_DOCUMENT_START_EVENT).\n\tversion_directive *yaml_version_directive_t\n\n\t// The list of tag directives (for yaml_DOCUMENT_START_EVENT).\n\ttag_directives 
[]yaml_tag_directive_t\n\n\t// The comments\n\thead_comment []byte\n\tline_comment []byte\n\tfoot_comment []byte\n\ttail_comment []byte\n\n\t// The anchor (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_ALIAS_EVENT).\n\tanchor []byte\n\n\t// The tag (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT).\n\ttag []byte\n\n\t// The scalar value (for yaml_SCALAR_EVENT).\n\tvalue []byte\n\n\t// Is the document start/end indicator implicit, or the tag optional?\n\t// (for yaml_DOCUMENT_START_EVENT, yaml_DOCUMENT_END_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_SCALAR_EVENT).\n\timplicit bool\n\n\t// Is the tag optional for any non-plain style? (for yaml_SCALAR_EVENT).\n\tquoted_implicit bool\n\n\t// The style (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT).\n\tstyle yaml_style_t\n}\n\nfunc (e *yaml_event_t) scalar_style() yaml_scalar_style_t     { return yaml_scalar_style_t(e.style) }\nfunc (e *yaml_event_t) sequence_style() yaml_sequence_style_t { return yaml_sequence_style_t(e.style) }\nfunc (e *yaml_event_t) mapping_style() yaml_mapping_style_t   { return yaml_mapping_style_t(e.style) }\n\n// Nodes\n\nconst (\n\tyaml_NULL_TAG      = \"tag:yaml.org,2002:null\"      // The tag !!null with the only possible value: null.\n\tyaml_BOOL_TAG      = \"tag:yaml.org,2002:bool\"      // The tag !!bool with the values: true and false.\n\tyaml_STR_TAG       = \"tag:yaml.org,2002:str\"       // The tag !!str for string values.\n\tyaml_INT_TAG       = \"tag:yaml.org,2002:int\"       // The tag !!int for integer values.\n\tyaml_FLOAT_TAG     = \"tag:yaml.org,2002:float\"     // The tag !!float for float values.\n\tyaml_TIMESTAMP_TAG = \"tag:yaml.org,2002:timestamp\" // The tag !!timestamp for date and time values.\n\n\tyaml_SEQ_TAG = \"tag:yaml.org,2002:seq\" // The tag !!seq is used to denote sequences.\n\tyaml_MAP_TAG = \"tag:yaml.org,2002:map\" // The tag !!map is used to 
denote mapping.\n\n\t// Not in original libyaml.\n\tyaml_BINARY_TAG = \"tag:yaml.org,2002:binary\"\n\tyaml_MERGE_TAG  = \"tag:yaml.org,2002:merge\"\n\n\tyaml_DEFAULT_SCALAR_TAG   = yaml_STR_TAG // The default scalar tag is !!str.\n\tyaml_DEFAULT_SEQUENCE_TAG = yaml_SEQ_TAG // The default sequence tag is !!seq.\n\tyaml_DEFAULT_MAPPING_TAG  = yaml_MAP_TAG // The default mapping tag is !!map.\n)\n\ntype yaml_node_type_t int\n\n// Node types.\nconst (\n\t// An empty node.\n\tyaml_NO_NODE yaml_node_type_t = iota\n\n\tyaml_SCALAR_NODE   // A scalar node.\n\tyaml_SEQUENCE_NODE // A sequence node.\n\tyaml_MAPPING_NODE  // A mapping node.\n)\n\n// An element of a sequence node.\ntype yaml_node_item_t int\n\n// An element of a mapping node.\ntype yaml_node_pair_t struct {\n\tkey   int // The key of the element.\n\tvalue int // The value of the element.\n}\n\n// The node structure.\ntype yaml_node_t struct {\n\ttyp yaml_node_type_t // The node type.\n\ttag []byte           // The node tag.\n\n\t// The node data.\n\n\t// The scalar parameters (for yaml_SCALAR_NODE).\n\tscalar struct {\n\t\tvalue  []byte              // The scalar value.\n\t\tlength int                 // The length of the scalar value.\n\t\tstyle  yaml_scalar_style_t // The scalar style.\n\t}\n\n\t// The sequence parameters (for YAML_SEQUENCE_NODE).\n\tsequence struct {\n\t\titems_data []yaml_node_item_t    // The stack of sequence items.\n\t\tstyle      yaml_sequence_style_t // The sequence style.\n\t}\n\n\t// The mapping parameters (for yaml_MAPPING_NODE).\n\tmapping struct {\n\t\tpairs_data  []yaml_node_pair_t   // The stack of mapping pairs (key, value).\n\t\tpairs_start *yaml_node_pair_t    // The beginning of the stack.\n\t\tpairs_end   *yaml_node_pair_t    // The end of the stack.\n\t\tpairs_top   *yaml_node_pair_t    // The top of the stack.\n\t\tstyle       yaml_mapping_style_t // The mapping style.\n\t}\n\n\tstart_mark yaml_mark_t // The beginning of the node.\n\tend_mark   yaml_mark_t // The end of 
the node.\n\n}\n\n// The document structure.\ntype yaml_document_t struct {\n\n\t// The document nodes.\n\tnodes []yaml_node_t\n\n\t// The version directive.\n\tversion_directive *yaml_version_directive_t\n\n\t// The list of tag directives.\n\ttag_directives_data  []yaml_tag_directive_t\n\ttag_directives_start int // The beginning of the tag directives list.\n\ttag_directives_end   int // The end of the tag directives list.\n\n\tstart_implicit int // Is the document start indicator implicit?\n\tend_implicit   int // Is the document end indicator implicit?\n\n\t// The start/end of the document.\n\tstart_mark, end_mark yaml_mark_t\n}\n\n// The prototype of a read handler.\n//\n// The read handler is called when the parser needs to read more bytes from the\n// source. The handler should write not more than size bytes to the buffer.\n// The number of written bytes should be set to the size_read variable.\n//\n// [in,out]   data        A pointer to an application data specified by\n//                        yaml_parser_set_input().\n// [out]      buffer      The buffer to write the data from the source.\n// [in]       size        The size of the buffer.\n// [out]      size_read   The actual number of bytes read from the source.\n//\n// On success, the handler should return 1.  If the handler failed,\n// the returned value should be 0. 
On EOF, the handler should set the\n// size_read to 0 and return 1.\ntype yaml_read_handler_t func(parser *yaml_parser_t, buffer []byte) (n int, err error)\n\n// This structure holds information about a potential simple key.\ntype yaml_simple_key_t struct {\n\tpossible     bool        // Is a simple key possible?\n\trequired     bool        // Is a simple key required?\n\ttoken_number int         // The number of the token.\n\tmark         yaml_mark_t // The position mark.\n}\n\n// The states of the parser.\ntype yaml_parser_state_t int\n\nconst (\n\tyaml_PARSE_STREAM_START_STATE yaml_parser_state_t = iota\n\n\tyaml_PARSE_IMPLICIT_DOCUMENT_START_STATE           // Expect the beginning of an implicit document.\n\tyaml_PARSE_DOCUMENT_START_STATE                    // Expect DOCUMENT-START.\n\tyaml_PARSE_DOCUMENT_CONTENT_STATE                  // Expect the content of a document.\n\tyaml_PARSE_DOCUMENT_END_STATE                      // Expect DOCUMENT-END.\n\tyaml_PARSE_BLOCK_NODE_STATE                        // Expect a block node.\n\tyaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE // Expect a block node or indentless sequence.\n\tyaml_PARSE_FLOW_NODE_STATE                         // Expect a flow node.\n\tyaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE        // Expect the first entry of a block sequence.\n\tyaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE              // Expect an entry of a block sequence.\n\tyaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE         // Expect an entry of an indentless sequence.\n\tyaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE           // Expect the first key of a block mapping.\n\tyaml_PARSE_BLOCK_MAPPING_KEY_STATE                 // Expect a block mapping key.\n\tyaml_PARSE_BLOCK_MAPPING_VALUE_STATE               // Expect a block mapping value.\n\tyaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE         // Expect the first entry of a flow sequence.\n\tyaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE               // Expect an entry of a flow 
sequence.\n\tyaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE   // Expect a key of an ordered mapping.\n\tyaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE // Expect a value of an ordered mapping.\n\tyaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE   // Expect the and of an ordered mapping entry.\n\tyaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE            // Expect the first key of a flow mapping.\n\tyaml_PARSE_FLOW_MAPPING_KEY_STATE                  // Expect a key of a flow mapping.\n\tyaml_PARSE_FLOW_MAPPING_VALUE_STATE                // Expect a value of a flow mapping.\n\tyaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE          // Expect an empty value of a flow mapping.\n\tyaml_PARSE_END_STATE                               // Expect nothing.\n)\n\nfunc (ps yaml_parser_state_t) String() string {\n\tswitch ps {\n\tcase yaml_PARSE_STREAM_START_STATE:\n\t\treturn \"yaml_PARSE_STREAM_START_STATE\"\n\tcase yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE:\n\t\treturn \"yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE\"\n\tcase yaml_PARSE_DOCUMENT_START_STATE:\n\t\treturn \"yaml_PARSE_DOCUMENT_START_STATE\"\n\tcase yaml_PARSE_DOCUMENT_CONTENT_STATE:\n\t\treturn \"yaml_PARSE_DOCUMENT_CONTENT_STATE\"\n\tcase yaml_PARSE_DOCUMENT_END_STATE:\n\t\treturn \"yaml_PARSE_DOCUMENT_END_STATE\"\n\tcase yaml_PARSE_BLOCK_NODE_STATE:\n\t\treturn \"yaml_PARSE_BLOCK_NODE_STATE\"\n\tcase yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE:\n\t\treturn \"yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE\"\n\tcase yaml_PARSE_FLOW_NODE_STATE:\n\t\treturn \"yaml_PARSE_FLOW_NODE_STATE\"\n\tcase yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE:\n\t\treturn \"yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE\"\n\tcase yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE:\n\t\treturn \"yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE\"\n\tcase yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE:\n\t\treturn \"yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE\"\n\tcase yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE:\n\t\treturn 
\"yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE\"\n\tcase yaml_PARSE_BLOCK_MAPPING_KEY_STATE:\n\t\treturn \"yaml_PARSE_BLOCK_MAPPING_KEY_STATE\"\n\tcase yaml_PARSE_BLOCK_MAPPING_VALUE_STATE:\n\t\treturn \"yaml_PARSE_BLOCK_MAPPING_VALUE_STATE\"\n\tcase yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE:\n\t\treturn \"yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE\"\n\tcase yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE:\n\t\treturn \"yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE\"\n\tcase yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE:\n\t\treturn \"yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE\"\n\tcase yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE:\n\t\treturn \"yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE\"\n\tcase yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE:\n\t\treturn \"yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE\"\n\tcase yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE:\n\t\treturn \"yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE\"\n\tcase yaml_PARSE_FLOW_MAPPING_KEY_STATE:\n\t\treturn \"yaml_PARSE_FLOW_MAPPING_KEY_STATE\"\n\tcase yaml_PARSE_FLOW_MAPPING_VALUE_STATE:\n\t\treturn \"yaml_PARSE_FLOW_MAPPING_VALUE_STATE\"\n\tcase yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE:\n\t\treturn \"yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE\"\n\tcase yaml_PARSE_END_STATE:\n\t\treturn \"yaml_PARSE_END_STATE\"\n\t}\n\treturn \"<unknown parser state>\"\n}\n\n// This structure holds aliases data.\ntype yaml_alias_data_t struct {\n\tanchor []byte      // The anchor.\n\tindex  int         // The node id.\n\tmark   yaml_mark_t // The anchor mark.\n}\n\n// The parser structure.\n//\n// All members are internal. 
Manage the structure using the\n// yaml_parser_ family of functions.\ntype yaml_parser_t struct {\n\n\t// Error handling\n\n\terror yaml_error_type_t // Error type.\n\n\tproblem string // Error description.\n\n\t// The byte about which the problem occurred.\n\tproblem_offset int\n\tproblem_value  int\n\tproblem_mark   yaml_mark_t\n\n\t// The error context.\n\tcontext      string\n\tcontext_mark yaml_mark_t\n\n\t// Reader stuff\n\n\tread_handler yaml_read_handler_t // Read handler.\n\n\tinput_reader io.Reader // File input data.\n\tinput        []byte    // String input data.\n\tinput_pos    int\n\n\teof bool // EOF flag\n\n\tbuffer     []byte // The working buffer.\n\tbuffer_pos int    // The current position of the buffer.\n\n\tunread int // The number of unread characters in the buffer.\n\n\tnewlines int // The number of line breaks since last non-break/non-blank character\n\n\traw_buffer     []byte // The raw buffer.\n\traw_buffer_pos int    // The current position of the buffer.\n\n\tencoding yaml_encoding_t // The input encoding.\n\n\toffset int         // The offset of the current position (in bytes).\n\tmark   yaml_mark_t // The mark of the current position.\n\n\t// Comments\n\n\thead_comment []byte // The current head comments\n\tline_comment []byte // The current line comments\n\tfoot_comment []byte // The current foot comments\n\ttail_comment []byte // Foot comment that happens at the end of a block.\n\tstem_comment []byte // Comment in item preceding a nested structure (list inside list item, etc)\n\n\tcomments      []yaml_comment_t // The folded comments for all parsed tokens\n\tcomments_head int\n\n\t// Scanner stuff\n\n\tstream_start_produced bool // Have we started to scan the input stream?\n\tstream_end_produced   bool // Have we reached the end of the input stream?\n\n\tflow_level int // The number of unclosed '[' and '{' indicators.\n\n\ttokens          []yaml_token_t // The tokens queue.\n\ttokens_head     int            // The head of the tokens 
queue.\n\ttokens_parsed   int            // The number of tokens fetched from the queue.\n\ttoken_available bool           // Does the tokens queue contain a token ready for dequeueing.\n\n\tindent  int   // The current indentation level.\n\tindents []int // The indentation levels stack.\n\n\tsimple_key_allowed bool                // May a simple key occur at the current position?\n\tsimple_keys        []yaml_simple_key_t // The stack of simple keys.\n\tsimple_keys_by_tok map[int]int         // possible simple_key indexes indexed by token_number\n\n\t// Parser stuff\n\n\tstate          yaml_parser_state_t    // The current parser state.\n\tstates         []yaml_parser_state_t  // The parser states stack.\n\tmarks          []yaml_mark_t          // The stack of marks.\n\ttag_directives []yaml_tag_directive_t // The list of TAG directives.\n\n\t// Dumper stuff\n\n\taliases []yaml_alias_data_t // The alias data.\n\n\tdocument *yaml_document_t // The currently parsed document.\n}\n\ntype yaml_comment_t struct {\n\n\tscan_mark  yaml_mark_t // Position where scanning for comments started\n\ttoken_mark yaml_mark_t // Position after which tokens will be associated with this comment\n\tstart_mark yaml_mark_t // Position of '#' comment mark\n\tend_mark   yaml_mark_t // Position where comment terminated\n\n\thead []byte\n\tline []byte\n\tfoot []byte\n}\n\n// Emitter Definitions\n\n// The prototype of a write handler.\n//\n// The write handler is called when the emitter needs to flush the accumulated\n// characters to the output.  The handler should write @a size bytes of the\n// @a buffer to the output.\n//\n// @param[in,out]   data        A pointer to an application data specified by\n//                              yaml_emitter_set_output().\n// @param[in]       buffer      The buffer with bytes to be written.\n// @param[in]       size        The size of the buffer.\n//\n// @returns On success, the handler should return @c 1.  
If the handler failed,\n// the returned value should be @c 0.\n//\ntype yaml_write_handler_t func(emitter *yaml_emitter_t, buffer []byte) error\n\ntype yaml_emitter_state_t int\n\n// The emitter states.\nconst (\n\t// Expect STREAM-START.\n\tyaml_EMIT_STREAM_START_STATE yaml_emitter_state_t = iota\n\n\tyaml_EMIT_FIRST_DOCUMENT_START_STATE       // Expect the first DOCUMENT-START or STREAM-END.\n\tyaml_EMIT_DOCUMENT_START_STATE             // Expect DOCUMENT-START or STREAM-END.\n\tyaml_EMIT_DOCUMENT_CONTENT_STATE           // Expect the content of a document.\n\tyaml_EMIT_DOCUMENT_END_STATE               // Expect DOCUMENT-END.\n\tyaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE   // Expect the first item of a flow sequence.\n\tyaml_EMIT_FLOW_SEQUENCE_TRAIL_ITEM_STATE   // Expect the next item of a flow sequence, with the comma already written out\n\tyaml_EMIT_FLOW_SEQUENCE_ITEM_STATE         // Expect an item of a flow sequence.\n\tyaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE     // Expect the first key of a flow mapping.\n\tyaml_EMIT_FLOW_MAPPING_TRAIL_KEY_STATE     // Expect the next key of a flow mapping, with the comma already written out\n\tyaml_EMIT_FLOW_MAPPING_KEY_STATE           // Expect a key of a flow mapping.\n\tyaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE  // Expect a value for a simple key of a flow mapping.\n\tyaml_EMIT_FLOW_MAPPING_VALUE_STATE         // Expect a value of a flow mapping.\n\tyaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE  // Expect the first item of a block sequence.\n\tyaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE        // Expect an item of a block sequence.\n\tyaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE    // Expect the first key of a block mapping.\n\tyaml_EMIT_BLOCK_MAPPING_KEY_STATE          // Expect the key of a block mapping.\n\tyaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE // Expect a value for a simple key of a block mapping.\n\tyaml_EMIT_BLOCK_MAPPING_VALUE_STATE        // Expect a value of a block mapping.\n\tyaml_EMIT_END_STATE                        // 
Expect nothing.\n)\n\n// The emitter structure.\n//\n// All members are internal.  Manage the structure using the @c yaml_emitter_\n// family of functions.\ntype yaml_emitter_t struct {\n\n\t// Error handling\n\n\terror   yaml_error_type_t // Error type.\n\tproblem string            // Error description.\n\n\t// Writer stuff\n\n\twrite_handler yaml_write_handler_t // Write handler.\n\n\toutput_buffer *[]byte   // String output data.\n\toutput_writer io.Writer // File output data.\n\n\tbuffer     []byte // The working buffer.\n\tbuffer_pos int    // The current position of the buffer.\n\n\traw_buffer     []byte // The raw buffer.\n\traw_buffer_pos int    // The current position of the buffer.\n\n\tencoding yaml_encoding_t // The stream encoding.\n\n\t// Emitter stuff\n\n\tcanonical   bool         // If the output is in the canonical style?\n\tbest_indent int          // The number of indentation spaces.\n\tbest_width  int          // The preferred width of the output lines.\n\tunicode     bool         // Allow unescaped non-ASCII characters?\n\tline_break  yaml_break_t // The preferred line break.\n\n\tstate  yaml_emitter_state_t   // The current emitter state.\n\tstates []yaml_emitter_state_t // The stack of states.\n\n\tevents      []yaml_event_t // The event queue.\n\tevents_head int            // The head of the event queue.\n\n\tindents []int // The stack of indentation levels.\n\n\ttag_directives []yaml_tag_directive_t // The list of tag directives.\n\n\tindent int // The current indentation level.\n\n\tflow_level int // The current flow level.\n\n\troot_context       bool // Is it the document root context?\n\tsequence_context   bool // Is it a sequence context?\n\tmapping_context    bool // Is it a mapping context?\n\tsimple_key_context bool // Is it a simple mapping key context?\n\n\tline       int  // The current line.\n\tcolumn     int  // The current column.\n\twhitespace bool // If the last character was a whitespace?\n\tindention  bool // If the last 
character was an indentation character (' ', '-', '?', ':')?\n\topen_ended bool // If an explicit document end is required?\n\n\tspace_above bool // Is there's an empty line above?\n\tfoot_indent int  // The indent used to write the foot comment above, or -1 if none.\n\n\t// Anchor analysis.\n\tanchor_data struct {\n\t\tanchor []byte // The anchor value.\n\t\talias  bool   // Is it an alias?\n\t}\n\n\t// Tag analysis.\n\ttag_data struct {\n\t\thandle []byte // The tag handle.\n\t\tsuffix []byte // The tag suffix.\n\t}\n\n\t// Scalar analysis.\n\tscalar_data struct {\n\t\tvalue                 []byte              // The scalar value.\n\t\tmultiline             bool                // Does the scalar contain line breaks?\n\t\tflow_plain_allowed    bool                // Can the scalar be expessed in the flow plain style?\n\t\tblock_plain_allowed   bool                // Can the scalar be expressed in the block plain style?\n\t\tsingle_quoted_allowed bool                // Can the scalar be expressed in the single quoted style?\n\t\tblock_allowed         bool                // Can the scalar be expressed in the literal or folded styles?\n\t\tstyle                 yaml_scalar_style_t // The output style.\n\t}\n\n\t// Comments\n\thead_comment []byte\n\tline_comment []byte\n\tfoot_comment []byte\n\ttail_comment []byte\n\n\tkey_line_comment []byte\n\n\t// Dumper stuff\n\n\topened bool // If the stream was already opened?\n\tclosed bool // If the stream was already closed?\n\n\t// The information associated with the document nodes.\n\tanchors *struct {\n\t\treferences int  // The number of references.\n\t\tanchor     int  // The anchor id.\n\t\tserialized bool // If the node has been emitted?\n\t}\n\n\tlast_anchor_id int // The last assigned anchor id.\n\n\tdocument *yaml_document_t // The currently emitted document.\n}\n"
  },
  {
    "path": "vendor/gopkg.in/yaml.v3/yamlprivateh.go",
    "content": "// \n// Copyright (c) 2011-2019 Canonical Ltd\n// Copyright (c) 2006-2010 Kirill Simonov\n// \n// Permission is hereby granted, free of charge, to any person obtaining a copy of\n// this software and associated documentation files (the \"Software\"), to deal in\n// the Software without restriction, including without limitation the rights to\n// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\n// of the Software, and to permit persons to whom the Software is furnished to do\n// so, subject to the following conditions:\n// \n// The above copyright notice and this permission notice shall be included in all\n// copies or substantial portions of the Software.\n// \n// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n// SOFTWARE.\n\npackage yaml\n\nconst (\n\t// The size of the input raw buffer.\n\tinput_raw_buffer_size = 512\n\n\t// The size of the input buffer.\n\t// It should be possible to decode the whole raw buffer.\n\tinput_buffer_size = input_raw_buffer_size * 3\n\n\t// The size of the output buffer.\n\toutput_buffer_size = 128\n\n\t// The size of the output raw buffer.\n\t// It should be possible to encode the whole output buffer.\n\toutput_raw_buffer_size = (output_buffer_size*2 + 2)\n\n\t// The size of other stacks and queues.\n\tinitial_stack_size  = 16\n\tinitial_queue_size  = 16\n\tinitial_string_size = 16\n)\n\n// Check if the character at the specified position is an alphabetical\n// character, a digit, '_', or '-'.\nfunc is_alpha(b []byte, i int) bool {\n\treturn b[i] >= '0' && b[i] <= '9' || 
b[i] >= 'A' && b[i] <= 'Z' || b[i] >= 'a' && b[i] <= 'z' || b[i] == '_' || b[i] == '-'\n}\n\n// Check if the character at the specified position is a digit.\nfunc is_digit(b []byte, i int) bool {\n\treturn b[i] >= '0' && b[i] <= '9'\n}\n\n// Get the value of a digit.\nfunc as_digit(b []byte, i int) int {\n\treturn int(b[i]) - '0'\n}\n\n// Check if the character at the specified position is a hex-digit.\nfunc is_hex(b []byte, i int) bool {\n\treturn b[i] >= '0' && b[i] <= '9' || b[i] >= 'A' && b[i] <= 'F' || b[i] >= 'a' && b[i] <= 'f'\n}\n\n// Get the value of a hex-digit.\nfunc as_hex(b []byte, i int) int {\n\tbi := b[i]\n\tif bi >= 'A' && bi <= 'F' {\n\t\treturn int(bi) - 'A' + 10\n\t}\n\tif bi >= 'a' && bi <= 'f' {\n\t\treturn int(bi) - 'a' + 10\n\t}\n\treturn int(bi) - '0'\n}\n\n// Check if the character is ASCII.\nfunc is_ascii(b []byte, i int) bool {\n\treturn b[i] <= 0x7F\n}\n\n// Check if the character at the start of the buffer can be printed unescaped.\nfunc is_printable(b []byte, i int) bool {\n\treturn ((b[i] == 0x0A) || // . == #x0A\n\t\t(b[i] >= 0x20 && b[i] <= 0x7E) || // #x20 <= . <= #x7E\n\t\t(b[i] == 0xC2 && b[i+1] >= 0xA0) || // #0xA0 <= . <= #xD7FF\n\t\t(b[i] > 0xC2 && b[i] < 0xED) ||\n\t\t(b[i] == 0xED && b[i+1] < 0xA0) ||\n\t\t(b[i] == 0xEE) ||\n\t\t(b[i] == 0xEF && // #xE000 <= . <= #xFFFD\n\t\t\t!(b[i+1] == 0xBB && b[i+2] == 0xBF) && // && . != #xFEFF\n\t\t\t!(b[i+1] == 0xBF && (b[i+2] == 0xBE || b[i+2] == 0xBF))))\n}\n\n// Check if the character at the specified position is NUL.\nfunc is_z(b []byte, i int) bool {\n\treturn b[i] == 0x00\n}\n\n// Check if the beginning of the buffer is a BOM.\nfunc is_bom(b []byte, i int) bool {\n\treturn b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF\n}\n\n// Check if the character at the specified position is space.\nfunc is_space(b []byte, i int) bool {\n\treturn b[i] == ' '\n}\n\n// Check if the character at the specified position is tab.\nfunc is_tab(b []byte, i int) bool {\n\treturn b[i] == '\\t'\n}\n\n// Check if the character at the specified position is blank (space or tab).\nfunc is_blank(b []byte, i int) bool {\n\t//return is_space(b, i) || is_tab(b, i)\n\treturn b[i] == ' ' || b[i] == '\\t'\n}\n\n// Check if the character at the specified position is a line break.\nfunc is_break(b []byte, i int) bool {\n\treturn (b[i] == '\\r' || // CR (#xD)\n\t\tb[i] == '\\n' || // LF (#xA)\n\t\tb[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)\n\t\tb[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)\n\t\tb[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9) // PS (#x2029)\n}\n\nfunc is_crlf(b []byte, i int) bool {\n\treturn b[i] == '\\r' && b[i+1] == '\\n'\n}\n\n// Check if the character is a line break or NUL.\nfunc is_breakz(b []byte, i int) bool {\n\t//return is_break(b, i) || is_z(b, i)\n\treturn (\n\t\t// is_break:\n\t\tb[i] == '\\r' || // CR (#xD)\n\t\tb[i] == '\\n' || // LF (#xA)\n\t\tb[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)\n\t\tb[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)\n\t\tb[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)\n\t\t// is_z:\n\t\tb[i] == 0)\n}\n\n// Check if the character is a line break, space, or NUL.\nfunc is_spacez(b []byte, i int) bool {\n\t//return is_space(b, i) || is_breakz(b, i)\n\treturn (\n\t\t// is_space:\n\t\tb[i] == ' ' ||\n\t\t// is_breakz:\n\t\tb[i] == '\\r' || // CR (#xD)\n\t\tb[i] == '\\n' || // LF (#xA)\n\t\tb[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)\n\t\tb[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)\n\t\tb[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)\n\t\tb[i] == 0)\n}\n\n// Check if the character is a line break, space, tab, or NUL.\nfunc is_blankz(b []byte, i int) bool {\n\t//return is_blank(b, i) || is_breakz(b, i)\n\treturn (\n\t\t// is_blank:\n\t\tb[i] == ' ' || b[i] == '\\t' ||\n\t\t// is_breakz:\n\t\tb[i] == '\\r' || // CR (#xD)\n\t\tb[i] == '\\n' || // LF (#xA)\n\t\tb[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85)\n\t\tb[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028)\n\t\tb[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029)\n\t\tb[i] == 0)\n}\n\n// Determine the width of the character.\nfunc width(b byte) int {\n\t// Don't replace these by a switch without first\n\t// confirming that it is being inlined.\n\tif b&0x80 == 0x00 {\n\t\treturn 1\n\t}\n\tif b&0xE0 == 0xC0 {\n\t\treturn 2\n\t}\n\tif b&0xF0 == 0xE0 {\n\t\treturn 3\n\t}\n\tif b&0xF8 == 0xF0 {\n\t\treturn 4\n\t}\n\treturn 0\n\n}\n"
  },
  {
    "path": "vendor/modules.txt",
    "content": "# github.com/davecgh/go-spew v1.1.1\n## explicit\ngithub.com/davecgh/go-spew/spew\n# github.com/go-chi/chi v4.1.2+incompatible\n## explicit\ngithub.com/go-chi/chi\ngithub.com/go-chi/chi/middleware\n# github.com/go-rel/postgres v0.8.0\n## explicit; go 1.17\ngithub.com/go-rel/postgres\n# github.com/go-rel/rel v0.39.0\n## explicit; go 1.19\ngithub.com/go-rel/rel\ngithub.com/go-rel/rel/where\n# github.com/go-rel/reltest v0.11.0\n## explicit; go 1.19\ngithub.com/go-rel/reltest\n# github.com/go-rel/sql v0.12.0\n## explicit; go 1.16\ngithub.com/go-rel/sql\ngithub.com/go-rel/sql/builder\n# github.com/goware/cors v1.1.1\n## explicit\ngithub.com/goware/cors\n# github.com/jinzhu/inflection v1.0.0\n## explicit\ngithub.com/jinzhu/inflection\n# github.com/lib/pq v1.10.9\n## explicit; go 1.13\ngithub.com/lib/pq\ngithub.com/lib/pq/oid\ngithub.com/lib/pq/scram\n# github.com/pmezard/go-difflib v1.0.0\n## explicit\ngithub.com/pmezard/go-difflib/difflib\n# github.com/serenize/snaker v0.0.0-20201027110005-a7ad2135616e\n## explicit\ngithub.com/serenize/snaker\n# github.com/stretchr/objx v0.5.0\n## explicit; go 1.12\ngithub.com/stretchr/objx\n# github.com/stretchr/testify v1.8.3\n## explicit; go 1.20\ngithub.com/stretchr/testify/assert\ngithub.com/stretchr/testify/mock\n# go.uber.org/atomic v1.10.0\n## explicit; go 1.18\ngo.uber.org/atomic\n# go.uber.org/multierr v1.8.0\n## explicit; go 1.14\ngo.uber.org/multierr\n# go.uber.org/zap v1.24.0\n## explicit; go 1.19\ngo.uber.org/zap\ngo.uber.org/zap/buffer\ngo.uber.org/zap/internal\ngo.uber.org/zap/internal/bufferpool\ngo.uber.org/zap/internal/color\ngo.uber.org/zap/internal/exit\ngo.uber.org/zap/zapcore\n# gopkg.in/yaml.v3 v3.0.1\n## explicit\ngopkg.in/yaml.v3\n"
  }
]