[
  {
    "path": ".gitignore",
    "content": "# Binaries for programs and plugins\n*.exe\n*.exe~\n*.dll\n*.so\n*.dylib\n\n# Test binary, build with `go test -c`\n*.test\n\n# Output of the go coverage tool, specifically when used with LiteIDE\n*.out\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 Stefan Prodan\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Istio service mesh guides \n\n![istio](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/istio-gcp-overview.png)\n\n[Istio GKE setup](/docs/istio/00-index.md)\n\n* [Prerequisites - client tools](/docs/istio/01-prerequisites.md)\n* [GKE cluster setup](/docs/istio/02-gke-setup.md)\n* [Cloud DNS setup](/docs/istio/03-clouddns-setup.md)\n* [Install Istio with Helm](/docs/istio/04-istio-setup.md)\n* [Configure Istio Gateway with Let's Encrypt wildcard certificate](/docs/istio/05-letsencrypt-setup.md)\n* [Expose services outside the service mesh](/docs/istio/06-grafana-config.md)\n\n[Progressive delivery walkthrough](docs/apps/00-index.md)\n\n* [Automated canary deployments with Flagger](/docs/apps/01-canary-flagger.md)\n* [A/B testing for a micro-service stack with Helm](/docs/apps/02-ab-testing-helm.md)\n\n[OpenFaaS service mesh walkthrough](docs/openfaas/00-index.md)\n\n* [Configure OpenFaaS mutual TLS](/docs/openfaas/01-mtls-config.md)\n* [Configure OpenFaaS access policies](/docs/openfaas/02-mixer-rules.md)\n* [Install OpenFaaS with Helm](/docs/openfaas/03-openfaas-setup.md)\n* [Configure OpenFaaS Gateway to receive external traffic](/docs/openfaas/04-gateway-config.md)\n* [Canary deployments for OpenFaaS functions](/docs/openfaas/05-canary.md)\n"
  },
  {
    "path": "docs/apps/00-index.md",
    "content": "# Progressive delivery walkthrough\n\nThis guide shows you how to route traffic between different versions of a service and how to automate canary deployments.\n\n![flagger-overview](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/diagrams/flagger-overview.png)\n\nAt the end of this guide you will be deploying a series of micro-services with the following characteristics:\n\n* A/B testing for frontend services\n* Source/Destination based routing for backend services\n* Progressive deployments gated by Prometheus \n\n### Labs\n\n* [Automated canary deployments with Flagger](01-canary-flagger.md)\n* [A/B testing for a micro-service stack with Helm](02-ab-testing-helm.md)\n"
  },
  {
    "path": "docs/apps/01-canary-flagger.md",
    "content": "# Automated canary deployments with Flagger\n\n[Flagger](https://github.com/stefanprodan/flagger) is a Kubernetes operator that automates the promotion of\ncanary deployments using Istio routing for traffic shifting and Prometheus metrics for canary analysis.\n\n### Install Flagger\n\nDeploy Flagger in the `istio-system` namespace using Helm:\n\n```bash\n# add the Helm repository\nhelm repo add flagger https://flagger.app\n\n# install or upgrade\nhelm upgrade -i flagger flagger/flagger \\\n--namespace=istio-system \\\n--set metricsServer=http://prometheus.istio-system:9090\n```\n\nFlagger is compatible with Kubernetes >1.11.0 and Istio >1.0.0.\n\n![flagger-overview](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/diagrams/flagger-canary-overview.png)\n\nFlagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA) and creates a series of objects \n(Kubernetes deployments, ClusterIP services and Istio virtual services) to drive the canary analysis and promotion.\n\nA canary deployment is triggered by changes in any of the following objects:\n\n* Deployment PodSpec (container image, command, ports, env, resources, etc)\n* ConfigMaps mounted as volumes or mapped to environment variables\n* Secrets mounted as volumes or mapped to environment variables\n\nGated canary promotion stages:\n\n* scan for canary deployments\n* check Istio virtual service routes are mapped to primary and canary ClusterIP services\n* check primary and canary deployments status\n    * halt advancement if a rolling update is underway\n    * halt advancement if pods are unhealthy\n* increase canary traffic weight percentage from 0% to 5% (step weight)\n* call webhooks and check results\n* check canary HTTP request success rate and latency\n    * halt advancement if any metric is under the specified threshold\n    * increment the failed checks counter\n* check if the number of failed checks reached the threshold\n    * route all traffic to 
primary\n    * scale to zero the canary deployment and mark it as failed\n    * wait for the canary deployment to be updated and start over\n* increase canary traffic weight by 5% (step weight) until it reaches 50% (max weight)\n    * halt advancement while canary request success rate is under the threshold\n    * halt advancement while canary request duration P99 is over the threshold\n    * halt advancement if the primary or canary deployment becomes unhealthy\n    * halt advancement while canary deployment is being scaled up/down by HPA\n* promote canary to primary\n    * copy ConfigMaps and Secrets from canary to primary\n    * copy canary deployment spec template over primary\n* wait for primary rolling update to finish\n    * halt advancement if pods are unhealthy\n* route all traffic to primary\n* scale to zero the canary deployment\n* mark rollout as finished\n* wait for the canary deployment to be updated and start over\n\nYou can change the canary analysis _max weight_ and the _step weight_ percentage in Flagger's custom resource.\n\n### Automated canary analysis and promotion\n\nCreate a test namespace with Istio sidecar injection enabled:\n\n```bash\nexport REPO=https://raw.githubusercontent.com/weaveworks/flagger/master\n\nkubectl apply -f ${REPO}/artifacts/namespaces/test.yaml\n```\n\nCreate a deployment and a horizontal pod autoscaler:\n\n```bash\nkubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml\nkubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml\n```\n\nDeploy the load testing service to generate traffic during the canary analysis:\n\n```bash\nkubectl -n test apply -f ${REPO}/artifacts/loadtester/deployment.yaml\nkubectl -n test apply -f ${REPO}/artifacts/loadtester/service.yaml\n```\n\nCreate a canary custom resource (replace `example.com` with your own domain):\n\n```yaml\napiVersion: flagger.app/v1alpha3\nkind: Canary\nmetadata:\n  name: podinfo\n  namespace: test\nspec:\n  # deployment reference\n  targetRef:\n    apiVersion: 
apps/v1\n    kind: Deployment\n    name: podinfo\n  # the maximum time in seconds for the canary deployment\n  # to make progress before it is rolled back (default 600s)\n  progressDeadlineSeconds: 60\n  # HPA reference (optional)\n  autoscalerRef:\n    apiVersion: autoscaling/v2beta1\n    kind: HorizontalPodAutoscaler\n    name: podinfo\n  service:\n    # container port\n    port: 9898\n    trafficPolicy:\n      tls:\n        # use ISTIO_MUTUAL when mTLS is enabled\n        mode: DISABLE\n    # Istio gateways (optional)\n    gateways:\n    - public-gateway.istio-system.svc.cluster.local\n    - mesh\n    # Istio virtual service host names (optional)\n    hosts:\n    - app.example.com\n  canaryAnalysis:\n    # schedule interval (default 60s)\n    interval: 1m\n    # max number of failed metric checks before rollback\n    threshold: 5\n    # max traffic percentage routed to canary\n    # percentage (0-100)\n    maxWeight: 50\n    # canary increment step\n    # percentage (0-100)\n    stepWeight: 10\n    metrics:\n    - name: request-success-rate\n      # minimum req success rate (non 5xx responses)\n      # percentage (0-100)\n      threshold: 99\n      interval: 1m\n    - name: request-duration\n      # maximum req duration P99\n      # milliseconds\n      threshold: 500\n      interval: 30s\n    # generate traffic during analysis\n    webhooks:\n      - name: load-test\n        url: http://flagger-loadtester.test/\n        timeout: 5s\n        metadata:\n          cmd: \"hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/\"\n```\n\nSave the above resource as podinfo-canary.yaml and then apply it:\n\n```bash\nkubectl apply -f ./podinfo-canary.yaml\n```\n\nAfter a couple of seconds Flagger will create the canary objects:\n\n```bash\n# applied\ndeployment.apps/podinfo\nhorizontalpodautoscaler.autoscaling/podinfo\ncanary.flagger.app/podinfo\n\n# generated 
\ndeployment.apps/podinfo-primary\nhorizontalpodautoscaler.autoscaling/podinfo-primary\nservice/podinfo\nservice/podinfo-canary\nservice/podinfo-primary\nvirtualservice.networking.istio.io/podinfo\n```\n\n![flagger-canary-steps](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/diagrams/flagger-canary-steps.png)\n\nTrigger a canary deployment by updating the container image:\n\n```bash\nkubectl -n test set image deployment/podinfo \\\npodinfod=quay.io/stefanprodan/podinfo:1.4.1\n```\n\nFlagger detects that the deployment revision changed and starts a new rollout:\n\n```\nkubectl -n test describe canary/podinfo\n\nStatus:\n  Canary Revision:  19871136\n  Failed Checks:    0\n  State:            finished\nEvents:\n  Type     Reason  Age   From     Message\n  ----     ------  ----  ----     -------\n  Normal   Synced  3m    flagger  New revision detected podinfo.test\n  Normal   Synced  3m    flagger  Scaling up podinfo.test\n  Warning  Synced  3m    flagger  Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available\n  Normal   Synced  3m    flagger  Advance podinfo.test canary weight 5\n  Normal   Synced  3m    flagger  Advance podinfo.test canary weight 10\n  Normal   Synced  3m    flagger  Advance podinfo.test canary weight 15\n  Normal   Synced  2m    flagger  Advance podinfo.test canary weight 20\n  Normal   Synced  2m    flagger  Advance podinfo.test canary weight 25\n  Normal   Synced  1m    flagger  Advance podinfo.test canary weight 30\n  Normal   Synced  1m    flagger  Advance podinfo.test canary weight 35\n  Normal   Synced  55s   flagger  Advance podinfo.test canary weight 40\n  Normal   Synced  45s   flagger  Advance podinfo.test canary weight 45\n  Normal   Synced  35s   flagger  Advance podinfo.test canary weight 50\n  Normal   Synced  25s   flagger  Copying podinfo.test template spec to podinfo-primary.test\n  Warning  Synced  15s   flagger  Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated 
replicas are available\n  Normal   Synced  5s    flagger  Promotion completed! Scaling down podinfo.test\n```\n\n**Note** that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis.\n\nYou can monitor all canaries with:\n\n```bash\nwatch kubectl get canaries --all-namespaces\n\nNAMESPACE   NAME      STATUS        WEIGHT   LASTTRANSITIONTIME\ntest        podinfo   Progressing   15       2019-01-16T14:05:07Z\nprod        frontend  Succeeded     0        2019-01-15T16:15:07Z\nprod        backend   Failed        0        2019-01-14T17:05:07Z\n```\n\n### Automated rollback\n\nDuring the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses the rollout.\n\nCreate a tester pod and exec into it:\n\n```bash\nkubectl -n test run tester --image=quay.io/stefanprodan/podinfo:1.2.1 -- ./podinfo --port=9898\nkubectl -n test exec -it tester-xx-xx sh\n```\n\nGenerate HTTP 500 errors:\n\n```bash\nwatch curl http://podinfo-canary:9898/status/500\n```\n\nGenerate latency:\n\n```bash\nwatch curl http://podinfo-canary:9898/delay/1\n```\n\nWhen the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary,\nthe canary is scaled to zero and the rollout is marked as failed. 
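\n\nThe failed checks above come from the request success rate metric, which Flagger reads from Istio's telemetry. As a rough sketch (assuming Istio's standard `istio_requests_total` counter; label names vary between Istio versions), the equivalent PromQL query would be:\n\n```\nsum(rate(istio_requests_total{destination_workload=\"podinfo\", response_code!~\"5.*\"}[1m]))\n/\nsum(rate(istio_requests_total{destination_workload=\"podinfo\"}[1m]))\n* 100\n```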
\n\n```\nkubectl -n test describe canary/podinfo\n\nStatus:\n  Canary Revision:  16695041\n  Failed Checks:    10\n  State:            failed\nEvents:\n  Type     Reason  Age   From     Message\n  ----     ------  ----  ----     -------\n  Normal   Synced  3m    flagger  Starting canary deployment for podinfo.test\n  Normal   Synced  3m    flagger  Advance podinfo.test canary weight 5\n  Normal   Synced  3m    flagger  Advance podinfo.test canary weight 10\n  Normal   Synced  3m    flagger  Advance podinfo.test canary weight 15\n  Normal   Synced  3m    flagger  Halt podinfo.test advancement success rate 69.17% < 99%\n  Normal   Synced  2m    flagger  Halt podinfo.test advancement success rate 61.39% < 99%\n  Normal   Synced  2m    flagger  Halt podinfo.test advancement success rate 55.06% < 99%\n  Normal   Synced  2m    flagger  Halt podinfo.test advancement success rate 47.00% < 99%\n  Normal   Synced  2m    flagger  (combined from similar events): Halt podinfo.test advancement success rate 38.08% < 99%\n  Warning  Synced  1m    flagger  Rolling back podinfo.test failed checks threshold reached 10\n  Warning  Synced  1m    flagger  Canary failed! 
Scaling down podinfo.test\n```\n\n### Monitoring\n\nFlagger comes with a Grafana dashboard made for canary analysis.\n\nInstall Grafana with Helm:\n\n```bash\nhelm upgrade -i flagger-grafana flagger/grafana \\\n--namespace=istio-system \\\n--set url=http://prometheus.istio-system:9090\n```\n\nThe dashboard shows the RED and USE metrics for the primary and canary workloads:\n\n![flagger-grafana](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/screens/grafana-canary-analysis.png)\n\nThe canary errors and latency spikes were recorded as Kubernetes events and logged by Flagger in JSON format:\n\n```\nkubectl -n istio-system logs deployment/flagger --tail=100 | jq .msg\n\nStarting canary deployment for podinfo.test\nAdvance podinfo.test canary weight 5\nAdvance podinfo.test canary weight 10\nAdvance podinfo.test canary weight 15\nAdvance podinfo.test canary weight 20\nAdvance podinfo.test canary weight 25\nAdvance podinfo.test canary weight 30\nAdvance podinfo.test canary weight 35\nHalt podinfo.test advancement success rate 98.69% < 99%\nAdvance podinfo.test canary weight 40\nHalt podinfo.test advancement request duration 1.515s > 500ms\nAdvance podinfo.test canary weight 45\nAdvance podinfo.test canary weight 50\nCopying podinfo.test template spec to podinfo-primary.test\nHalt podinfo-primary.test advancement waiting for rollout to finish: 1 old replicas are pending termination\nScaling down podinfo.test\nPromotion completed! 
podinfo.test\n```\n\n### Alerting\n\nFlagger can be configured to send Slack notifications:\n\n```bash\nhelm upgrade -i flagger flagger/flagger \\\n--namespace=istio-system \\\n--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \\\n--set slack.channel=general \\\n--set slack.user=flagger\n```\n\nOnce configured with a Slack incoming webhook, Flagger will post messages when a canary deployment has been initialized,\nwhen a new revision has been detected, and when the canary analysis fails or succeeds.\n\n![flagger-slack](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/screens/slack-canary-notifications.png)\n\nA canary deployment will be rolled back if the progress deadline is exceeded or if the analysis\nreached the maximum number of failed checks:\n\n![flagger-slack-errors](https://raw.githubusercontent.com/stefanprodan/flagger/master/docs/screens/slack-canary-failed.png)\n\nBesides Slack, you can use Alertmanager to trigger alerts when a canary deployment fails:\n\n```yaml\n  - alert: canary_rollback\n    expr: flagger_canary_status > 1\n    for: 1m\n    labels:\n      severity: warning\n    annotations:\n      summary: \"Canary failed\"\n      description: \"Workload {{ $labels.name }} namespace {{ $labels.namespace }}\"\n```\n\nNext: [A/B Testing with Helm](02-ab-testing-helm.md)\n"
  },
  {
    "path": "docs/apps/02-ab-testing-helm.md",
    "content": "# A/B testing with Istio and Helm\n\nTo experiment with different traffic routing techniques \nI've created a Helm chart for [podinfo](https://github.com/stefanprodan/k8s-podinfo) that lets you chain multiple \nservices and wraps all the Istio objects needs for A/B testing and canary deployments.\n\nUsing the podinfo chart you will be installing three microservices: frontend, backend and data store. \nEach of these services can have two versions running in parallel, the versions are called blue and green.\nThe assumption is that for the frontend you'll be running A/B testing based on the user agent HTTP header. \nThe green frontend is not backwards compatible with the blue backend so you'll route all requests from the green \nfrontend to the green backend. For the data store you'll be running performance testing. Both backend versions are \ncompatible with the blue and green data store so you'll be splitting the traffic between blue and green data stores \nand compare the requests latency and error rate to determine if the green store performs \nbetter than the blue one.\n\n### Deploy the blue version\n\nAdd the podinfo Helm repository:\n\n```bash\nhelm repo add sp https://stefanprodan.github.io/k8s-podinfo\n```\n\nCreate a namespace with Istio sidecar injection enabled:\n\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n  labels:\n    istio-injection: enabled\n  name: demo\n```\n\nSave the above resource as demo.yaml and then apply it:\n\n```bash\nkubectl apply -f ./demo.yaml\n```\n\n![initial-state](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/routing-initial-state.png)\n\nCreate a frontend release exposed outside the service mesh on the podinfo sub-domain (replace `example.com` with your domain):\n\n```yaml\nhost: podinfo.example.com\nexposeHost: true\n\nblue:\n  replicas: 2\n  tag: \"1.2.0\"\n  message: \"Greetings from the blue frontend\"\n  backend: http://backend:9898/api/echo\n\ngreen:\n  # disabled (all traffic 
goes to blue)\n  replicas: 0\n```\n\nSave the above resource as frontend.yaml and then install it:\n\n```bash\nhelm install --name frontend sp/podinfo-istio \\\n--namespace demo \\\n-f ./frontend.yaml\n```\n\nCreate a backend release:\n\n```yaml\nhost: backend\n\nblue:\n  replicas: 2\n  tag: \"1.2.0\"\n  backend: http://store:9898/api/echo\n\ngreen:\n  # disabled (all traffic goes to blue)\n  replicas: 0\n```\n\nSave the above resource as backend.yaml and then install it:\n\n```bash\nhelm install --name backend sp/podinfo-istio \\\n--namespace demo \\\n-f ./backend.yaml\n```\n\nCreate a store release:\n\n```yaml\nhost: store\n\nblue:\n  replicas: 2\n  tag: \"1.2.0\"\n  weight: 100\n\ngreen:\n  # disabled (all traffic goes to blue)\n  replicas: 0\n```\n\nSave the above resource as store.yaml and then install it:\n\n```bash\nhelm install --name store sp/podinfo-istio \\\n--namespace demo \\\n-f ./store.yaml\n```\n\nOpen `https://podinfo.example.com` in your browser; you should see the greeting from the blue version.\nClicking on the ping button will make a call that spans across all microservices.\n\nAccess the Jaeger dashboard using port forwarding:\n\n```bash\nkubectl -n istio-system port-forward deployment/istio-tracing 16686:16686\n```\n\nNavigate to `http://localhost:16686` and select `store` from the service dropdown. You should see a trace for each ping.\n\n![jaeger-trace](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/jaeger-trace-list.png)\n\nIstio tracing is able to capture the ping call spanning across all microservices because podinfo forwards the Zipkin HTTP \nheaders. When an HTTP request reaches the Istio Gateway, Envoy will inject a series of headers used for tracing. 
When podinfo \ncalls a backend service, it copies these headers from the incoming HTTP request:\n\n```go\nfunc copyTracingHeaders(from *http.Request, to *http.Request) {\n\theaders := []string{\n\t\t\"x-request-id\",\n\t\t\"x-b3-traceid\",\n\t\t\"x-b3-spanid\",\n\t\t\"x-b3-parentspanid\",\n\t\t\"x-b3-sampled\",\n\t\t\"x-b3-flags\",\n\t\t\"x-ot-span-context\",\n\t}\n\n\tfor i := range headers {\n\t\theaderValue := from.Header.Get(headers[i])\n\t\tif len(headerValue) > 0 {\n\t\t\tto.Header.Set(headers[i], headerValue)\n\t\t}\n\t}\n}\n```\n\n### Deploy the green version\n\n![desired-state](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/routing-desired-state.png)\n\nChange the frontend definition to route traffic coming from Safari users to the green deployment:\n\n```yaml\nhost: podinfo.example.com\nexposeHost: true\n\nblue:\n  replicas: 2\n  tag: \"1.2.0\"\n  message: \"Greetings from the blue frontend\"\n  backend: http://backend:9898/api/echo\n\ngreen:\n  replicas: 2\n  tag: \"1.2.1\"\n  routing:\n    # target Safari\n    - match:\n      - headers:\n          user-agent:\n            regex: \"^(?!.*Chrome).*Safari.*\"\n    # target API clients by version\n    - match:\n      - headers:\n          x-api-version:\n            regex: \"^(v{0,1})1\\\\.2\\\\.([1-9]).*\"\n  message: \"Greetings from the green frontend\"\n  backend: http://backend:9898/api/echo\n```\n\nSave the above resource and apply it:\n\n```bash\nhelm upgrade --install frontend sp/podinfo-istio \\\n--namespace demo \\\n-f ./frontend.yaml\n```\n\nChange the backend definition to receive traffic based on source labels. 
The blue frontend will be routed to the blue\nbackend and the green frontend to the green backend:\n\n```yaml\nhost: backend\n\nblue:\n  replicas: 2\n  tag: \"1.2.0\"\n  backend: http://store:9898/api/echo\n\ngreen:\n  replicas: 2\n  tag: \"1.2.1\"\n  routing:\n    # target green callers\n    - match:\n      - sourceLabels:\n          color: green\n  backend: http://store:9898/api/echo\n```\n\nSave the above resource and apply it:\n\n```bash\nhelm upgrade --install backend sp/podinfo-istio \\\n--namespace demo \\\n-f ./backend.yaml\n```\n\nChange the store definition to route 80% of the traffic to the blue deployment and 20% to the green one:\n\n```yaml\nhost: store\n\n# load balance 80/20 between blue and green\nblue:\n  replicas: 2\n  tag: \"1.2.0\"\n  weight: 80\n\ngreen:\n  replicas: 1\n  tag: \"1.2.1\"\n```\n\nSave the above resource and apply it:\n\n```bash\nhelm upgrade --install store sp/podinfo-istio \\\n--namespace demo \\\n-f ./store.yaml\n```\n\n### Restrict access with Mixer rules\n\nLet's assume the frontend service has a vulnerability and a bad actor can execute arbitrary commands in the frontend container.\nIf someone gains access to the frontend service, from there they can issue API calls to the backend and data store services.\n\nIn order to simulate this you can exec into the frontend container and curl the data store API:\n\n```bash\nkubectl -n demo exec -it frontend-blue-675b4dff4b-xhg9d -c podinfod sh\n\n~ $ curl -v http://store:9898\n* Connected to store (10.31.250.154) port 9898 (#0)\n```\n\nThere is no reason why the frontend service should have access to the data store; only the backend service should be 
With Istio you can define access rules and restrict access based on source and \ndestination.\n\nLet's create an Istio config that denies access to the data store unless the caller is the backend service:\n\n```yaml\napiVersion: config.istio.io/v1alpha2\nkind: denier\nmetadata:\n  name: denyhandler\n  namespace: demo\nspec:\n  status:\n    code: 7\n    message: Not allowed\n---\napiVersion: config.istio.io/v1alpha2\nkind: checknothing\nmetadata:\n  name: denyrequest\n  namespace: demo\nspec:\n---\napiVersion: config.istio.io/v1alpha2\nkind: rule\nmetadata:\n  name: denystore\n  namespace: demo\nspec:\n  match:  destination.labels[\"app\"] == \"store\" && source.labels[\"app\"] != \"backend\"\n  actions:\n  - handler: denyhandler.denier\n    instances: [ denyrequest.checknothing ]\n```\n\nSave the above resource as demo-rules.yaml and then apply it:\n\n```bash\nkubectl apply -f ./demo-rules.yaml\n```\n\nNow if you try to call the data store from the frontend container Istio Mixer will deny access:\n\n```bash\nkubectl -n demo exec -it frontend-blue-675b4dff4b-xhg9d -c podinfod sh\n\n~ $ watch curl -s http://store:9898\nPERMISSION_DENIED:denyhandler.denier.demo:Not allowed\n```\n\nThe permission denied error can be observed in Grafana. Open the Istio Workload dashboard, select the demo namespace and\npodinfo-blue workload from the dropdown, scroll to outbound services and you'll see the HTTP 403 errors:\n\n![grafana-403](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/grafana-403-errors.png)\n\nOnce you have the Mixer rules in place you could create an alert for HTTP 403 errors with Prometheus and Alertmanager \nto be notified about suspicious activities inside the service mesh.\n\n"
  },
  {
    "path": "docs/istio/00-index.md",
    "content": "# Istio GKE setup\n\nThis guide walks you through setting up Istio on Google Kubernetes Engine.\n\n![istio](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/istio-gcp-overview.png)\n\nAt the end of this guide you will be running Istio with the following characteristics:\n\n* secure Istio ingress gateway with Let’s Encrypt TLS\n* encrypted communication between Kubernetes workloads with Istio mutual TLS\n* Jaeger tracing \n* Prometheus and Grafana monitoring\n* canary deployments, A/B testing and traffic mirroring capabilities\n\n### Labs\n\n* [Prerequisites - client tools](01-prerequisites.md)\n* [GKE cluster setup](02-gke-setup.md)\n* [Cloud DNS setup](03-clouddns-setup.md)\n* [Install Istio with Helm](04-istio-setup.md)\n* [Configure Istio Gateway with Let's Encrypt wildcard certificate](05-letsencrypt-setup.md)\n* [Expose services outside the service mesh](06-grafana-config.md)\n"
  },
  {
    "path": "docs/istio/01-prerequisites.md",
    "content": "# Prerequisites\n\nYou will be creating a cluster on Google’s Kubernetes Engine (GKE), \nif you don’t have an account you can sign up [here](https://cloud.google.com/free/) for free credits.\n\nLogin into GCP, create a project and enable billing for it. \n\nInstall the [gcloud](https://cloud.google.com/sdk/) command line utility and configure your project with `gcloud init`.\n\nSet the default project (replace `PROJECT_ID` with your own project):\n\n```bash\ngcloud config set project PROJECT_ID\n```\n\nSet the default compute region and zone:\n\n```bash\ngcloud config set compute/region europe-west3\ngcloud config set compute/zone europe-west3-a\n```\n\nEnable the Kubernetes and Cloud DNS services for your project:\n\n```bash\ngcloud services enable container.googleapis.com\ngcloud services enable dns.googleapis.com\n```\n\nInstall the `kubectl` command-line tool:\n\n```bash\ngcloud components install kubectl\n```\n\nInstall the `helm` command-line tool:\n\n```bash\nbrew install kubernetes-helm\n```\n\nCreate Tiller service account:\n\n```bash\nkubectl --namespace kube-system create sa tiller\nkubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller\n```\n\nInstall Tiller:\n\n```bash\nhelm init --service-account tiller --upgrade --wait\n```\n\nNext: [GKE cluster setup](02-gke-setup.md)\n"
  },
  {
    "path": "docs/istio/02-gke-setup.md",
    "content": "# GKE cluster setup\n\nCreate a cluster with three nodes using the latest Kubernetes version:\n\n```bash\nk8s_version=$(gcloud container get-server-config --format=json \\\n| jq -r '.validMasterVersions[0]')\n\ngcloud container clusters create istio \\\n--cluster-version=${k8s_version} \\\n--zone=europe-west3-a \\\n--num-nodes=3 \\\n--machine-type=n1-highcpu-4 \\\n--preemptible \\\n--no-enable-cloud-logging \\\n--disk-size=50 \\\n--enable-autorepair \\\n--scopes=gke-default\n```\n\nThe above command will create a default node pool consisting of `n1-highcpu-4` (vCPU: 4, RAM 3.60GB, DISK: 30GB) preemptible VMs.\nPreemptible VMs are up to 80% cheaper than regular instances and are terminated and replaced after a maximum of 24 hours.\n\nSet up credentials for `kubectl`:\n\n```bash\ngcloud container clusters get-credentials istio -z=europe-west3-a\n```\n\nCreate a cluster admin role binding:\n\n```bash\nkubectl create clusterrolebinding \"cluster-admin-$(whoami)\" \\\n--clusterrole=cluster-admin \\\n--user=\"$(gcloud config get-value core/account)\"\n```\n\nValidate your setup with:\n\n```bash\nkubectl get nodes -o wide\n```\n\nNext: [Cloud DNS setup](03-clouddns-setup.md)\n"
  },
  {
    "path": "docs/istio/03-clouddns-setup.md",
    "content": "# Cloud DNS setup\n\nYou will need an internet domain and access to the registrar to change the name servers to Google Cloud DNS.\n\nCreate a managed zone named `istio` in Cloud DNS (replace `example.com` with your domain):\n\n```bash\ngcloud dns managed-zones create \\\n--dns-name=\"example.com.\" \\\n--description=\"Istio zone\" \"istio\"\n```\n\nLook up your zone's name servers:\n\n```bash\ngcloud dns managed-zones describe istio\n```\n\nUpdate your registrar's name server records with the records returned by the above command.\n\nWait for the name servers to change (replace `example.com` with your domain):\n\n```bash\nwatch dig +short NS example.com\n```\n\nCreate a static IP address named `istio-gateway-ip` in the same region as your GKE cluster:\n\n```bash\ngcloud compute addresses create istio-gateway-ip --region europe-west3\n```\n\nFind the static IP address:\n\n```bash\ngcloud compute addresses describe istio-gateway-ip --region europe-west3\n```\n\nCreate the following DNS records (replace `example.com` with your domain and set your Istio Gateway IP):\n\n```bash\nDOMAIN=\"example.com\"\nGATEWAYIP=\"35.198.98.90\"\n\ngcloud dns record-sets transaction start --zone=istio\n\ngcloud dns record-sets transaction add --zone=istio \\\n--name=\"${DOMAIN}\" --ttl=300 --type=A ${GATEWAYIP}\n\ngcloud dns record-sets transaction add --zone=istio \\\n--name=\"www.${DOMAIN}\" --ttl=300 --type=A ${GATEWAYIP}\n\ngcloud dns record-sets transaction add --zone=istio \\\n--name=\"*.${DOMAIN}\" --ttl=300 --type=A ${GATEWAYIP}\n\ngcloud dns record-sets transaction execute --zone istio\n```\n\nVerify that the wildcard DNS is working (replace `example.com` with your domain):\n\n```bash\nwatch host test.example.com\n```\n\nNext: [Install Istio with Helm](04-istio-setup.md)\n"
  },
  {
    "path": "docs/istio/04-istio-setup.md",
    "content": "# Install Istio with Helm\n\nAdd Istio Helm repository:\n\n```bash\nexport ISTIO_VER=\"1.2.3\"\n\nhelm repo add istio.io https://storage.googleapis.com/istio-release/releases/${ISTIO_VER}/charts\n```\n\nInstalling the Istio custom resource definitions:\n\n```bash\nhelm upgrade -i istio-init istio.io/istio-init --wait --namespace istio-system\n```\n\nWait for Istio CRDs to be deployed:\n\n```bash\nkubectl -n istio-system wait --for=condition=complete job/istio-init-crd-10\nkubectl -n istio-system wait --for=condition=complete job/istio-init-crd-11\nkubectl -n istio-system wait --for=condition=complete job/istio-init-crd-12\n```\n\nCreate a secret for Grafana credentials:\n\n```bash\n# generate a random password\nPASSWORD=$(head -c 12 /dev/urandom | shasum| cut -d' ' -f1)\n\nkubectl -n istio-system create secret generic grafana \\\n--from-literal=username=admin \\\n--from-literal=passphrase=\"$PASSWORD\"\n```\n\nConfigure Istio with Prometheus, Jaeger, and cert-manager and set your load balancer IP:\n\n```yaml\n# ingress configuration\ngateways:\n  enabled: true\n  istio-ingressgateway:\n    type: LoadBalancer\n    loadBalancerIP: \"35.198.98.90\"\n    autoscaleEnabled: true\n    autoscaleMax: 2\n    \n# common settings\nglobal:\n  # sidecar settings\n  proxy:\n    resources:\n      requests:\n        cpu: 10m\n        memory: 64Mi\n      limits:\n        cpu: 2000m\n        memory: 256Mi\n  controlPlaneSecurityEnabled: false\n  mtls:\n    enabled: false\n  useMCP: true\n\n# pilot configuration\npilot:\n  enabled: true\n  autoscaleEnabled: true\n  sidecar: true\n  resources:\n    requests:\n      cpu: 10m\n      memory: 128Mi\n\n# sidecar-injector webhook configuration\nsidecarInjectorWebhook:\n  enabled: true\n\n# security configuration\nsecurity:\n  enabled: true\n\n# galley configuration\ngalley:\n  enabled: true\n\n# mixer configuration\nmixer:\n  policy:\n    enabled: false\n    replicaCount: 1\n    autoscaleEnabled: true\n  telemetry:\n    
enabled: true\n    replicaCount: 1\n    autoscaleEnabled: true\n  resources:\n    requests:\n      cpu: 10m\n      memory: 128Mi\n\n# addon prometheus configuration\nprometheus:\n  enabled: true\n  scrapeInterval: 5s\n\n# addon jaeger tracing configuration\ntracing:\n  enabled: true\n\n# addon grafana configuration\ngrafana:\n  enabled: true\n  security:\n    enabled: true\n```\n\nSave the above file as `my-istio.yaml` and install Istio with Helm:\n\n```bash\nhelm upgrade --install istio istio.io/istio \\\n--namespace=istio-system \\\n-f ./my-istio.yaml\n```\n\nVerify that Istio workloads are running:\n\n```bash\nwatch kubectl -n istio-system get pods\n```\n\nNext: [Configure Istio Gateway with Let's Encrypt wildcard certificate](05-letsencrypt-setup.md)\n"
  },
  {
    "path": "docs/istio/05-letsencrypt-setup.md",
    "content": "# Configure Istio Gateway with Let's Encrypt wildcard certificate\n\n![istio-letsencrypt](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/istio-cert-manager-gcp.png)\n\nCreate a Istio Gateway in istio-system namespace with HTTPS redirect:\n\n```yaml\napiVersion: networking.istio.io/v1alpha3\nkind: Gateway\nmetadata:\n  name: public-gateway\n  namespace: istio-system\nspec:\n  selector:\n    istio: ingressgateway\n  servers:\n  - port:\n      number: 80\n      name: http\n      protocol: HTTP\n    hosts:\n    - \"*\"\n    tls:\n      httpsRedirect: true\n  - port:\n      number: 443\n      name: https\n      protocol: HTTPS\n    hosts:\n    - \"*\"\n    tls:\n      mode: SIMPLE\n      privateKey: /etc/istio/ingressgateway-certs/tls.key\n      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt\n```\n\nSave the above resource as istio-gateway.yaml and then apply it:\n\n```bash\nkubectl apply -f ./istio-gateway.yaml\n```\n\nCreate a service account with Cloud DNS admin role (replace `my-gcp-project` with your project ID):\n\n```bash\nGCP_PROJECT=my-gcp-project\n\ngcloud iam service-accounts create dns-admin \\\n--display-name=dns-admin \\\n--project=${GCP_PROJECT}\n\ngcloud iam service-accounts keys create ./gcp-dns-admin.json \\\n--iam-account=dns-admin@${GCP_PROJECT}.iam.gserviceaccount.com \\\n--project=${GCP_PROJECT}\n\ngcloud projects add-iam-policy-binding ${GCP_PROJECT} \\\n--member=serviceAccount:dns-admin@${GCP_PROJECT}.iam.gserviceaccount.com \\\n--role=roles/dns.admin\n```\n\nCreate a Kubernetes secret with the GCP Cloud DNS admin key:\n\n```bash\nkubectl create secret generic cert-manager-credentials \\\n--from-file=./gcp-dns-admin.json \\\n--namespace=istio-system\n```\n\nInstall cert-manager's CRDs:\n\n```bash\nCERT_REPO=https://raw.githubusercontent.com/jetstack/cert-manager\n\nkubectl apply -f ${CERT_REPO}/release-0.7/deploy/manifests/00-crds.yaml\n```\n\nCreate the cert-manager namespace and disable resource 
validation:\n\n```bash\nkubectl create namespace cert-manager\n\nkubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true\n```\n\nInstall cert-manager with Helm:\n\n```bash\nhelm repo add jetstack https://charts.jetstack.io && \\\nhelm repo update && \\\nhelm upgrade -i cert-manager \\\n--namespace cert-manager \\\n--version v0.7.0 \\\njetstack/cert-manager\n```\n\nCreate a letsencrypt issuer for CloudDNS (replace `email@example.com` with a valid email address and `my-gcp-project` with your project ID):\n\n```yaml\napiVersion: certmanager.k8s.io/v1alpha1\nkind: Issuer\nmetadata:\n  name: letsencrypt-prod\n  namespace: istio-system\nspec:\n  acme:\n    server: https://acme-v02.api.letsencrypt.org/directory\n    email: email@example.com\n    privateKeySecretRef:\n      name: letsencrypt-prod\n    dns01:\n      providers:\n      - name: cloud-dns\n        clouddns:\n          serviceAccountSecretRef:\n            name: cert-manager-credentials\n            key: gcp-dns-admin.json\n          project: my-gcp-project\n```\n\nSave the above resource as letsencrypt-issuer.yaml and then apply it:\n\n```bash\nkubectl apply -f ./letsencrypt-issuer.yaml\n```\n\nCreate a wildcard certificate (replace `example.com` with your domain):\n\n```yaml\napiVersion: certmanager.k8s.io/v1alpha1\nkind: Certificate\nmetadata:\n  name: istio-gateway\n  namespace: istio-system\nspec:\n  secretName: istio-ingressgateway-certs\n  issuerRef:\n    name: letsencrypt-prod\n  commonName: \"*.example.com\"\n  acme:\n    config:\n    - dns01:\n        provider: cloud-dns\n      domains:\n      - \"*.example.com\"\n      - \"example.com\"\n```\n\nSave the above resource as of-cert.yaml and then apply it:\n\n```bash\nkubectl apply -f ./of-cert.yaml\n```\n\nIn a couple of minutes cert-manager should fetch a wildcard certificate from letsencrypt.org:\n\n```text\nkubectl -n istio-system describe certificate istio-gateway\n\nEvents:\n  Type    Reason         Age    From          
Message\n  ----    ------         ----   ----          -------\n  Normal  CertIssued     1m52s  cert-manager  Certificate issued successfully\n```\n\nRecreate the Istio ingress gateway pods so they load the new certificate:\n\n```bash\nkubectl -n istio-system delete pods -l istio=ingressgateway\n```\n\nNote that the Istio gateway doesn't reload the certificates from the TLS secret when cert-manager renews them.\nSince the GKE cluster is made of preemptible VMs, the gateway pods are replaced once every 24 hours. If you're not using\npreemptible nodes, you have to delete the gateway pods manually every two months, before the certificate expires.\n\nNext: [Expose services outside the service mesh](06-grafana-config.md)\n"
  },
  {
    "path": "docs/istio/06-grafana-config.md",
    "content": "# Expose Grafana outside the service mesh\n\nIn order to expose services via the Istio Gateway you have to create a Virtual Service attached to Istio Gateway.\n\nCreate a virtual service in `istio-system` namespace for Grafana (replace `example.com` with your domain):\n\n```yaml\napiVersion: networking.istio.io/v1alpha3\nkind: VirtualService\nmetadata:\n  name: grafana\n  namespace: istio-system\nspec:\n  hosts:\n  - \"grafana.example.com\"\n  gateways:\n  - public-gateway.istio-system.svc.cluster.local\n  http:\n  - route:\n    - destination:\n        host: grafana\n    timeout: 30s\n```\n\nSave the above resource as grafana-virtual-service.yaml and then apply it:\n\n```bash\nkubectl apply -f ./grafana-virtual-service.yaml\n```\n\nNavigate to `http://grafana.example.com` in your browser and you should be redirected to the HTTPS version.\n\nCheck that HTTP2 is enabled:\n\n```bash\ncurl -I --http2 https://grafana.example.com\n\nHTTP/2 200 \ncontent-type: text/html; charset=UTF-8\nx-envoy-upstream-service-time: 3\nserver: envoy\n```\n\nNext: [A/B testing and canary deployments demo](/docs/apps/00-index.md)\n"
  },
  {
    "path": "docs/openfaas/00-index.md",
    "content": "# OpenFaaS service mesh walkthrough\n\nThis guide walks you through setting up OpenFaaS with Istio on Google Kubernetes Engine.\n\n![openfaas-istio](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/openfaas-istio-diagram.png)\n\nAt the end of this guide you will be running OpenFaaS with the following characteristics:\n\n* secure OpenFaaS ingress with Let’s Encrypt TLS and authentication\n* encrypted communication between OpenFaaS core services and functions with Istio mutual TLS\n* isolated functions with Istio Mixer rules\n* Jaeger tracing and Prometheus monitoring for function calls\n* canary deployments for OpenFaaS functions \n\n### Labs\n\n* [Configure OpenFaaS mutual TLS](01-mtls-config.md)\n* [Configure OpenFaaS access policies](02-mixer-rules.md)\n* [Install OpenFaaS with Helm](03-openfaas-setup.md)\n* [Configure OpenFaaS Gateway to receive external traffic](04-gateway-config.md)\n* [Canary deployments for OpenFaaS functions](05-canary.md)\n"
  },
  {
    "path": "docs/openfaas/01-mtls-config.md",
    "content": "# Configure OpenFaaS mutual TLS\n\nAn OpenFaaS instance is composed out of two namespaces: one for the core services and one for functions. \nIn order to secure the communication between core services and functions we need to enable mutual TLS on both namespaces.\n\nCreate the OpenFaaS namespaces with Istio sidecar injection enabled:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml\n```\n\nEnable mTLS on `openfaas` namespace:\n\n```yaml\napiVersion: authentication.istio.io/v1alpha1\nkind: Policy\nmetadata:\n  name: default\n  namespace: openfaas\nspec:\n  peers:\n  - mtls: {}\n---\napiVersion: networking.istio.io/v1alpha3\nkind: DestinationRule\nmetadata:\n  name: default\n  namespace: openfaas\nspec:\n  host: \"*.openfaas.svc.cluster.local\"\n  trafficPolicy:\n    tls:\n      mode: ISTIO_MUTUAL\n```\n\nSave the above resource as of-mtls.yaml and then apply it:\n\n```bash\nkubectl apply -f ./of-mtls.yaml\n```\n\nAllow plaintext traffic to NATS:\n\n```yaml\napiVersion: networking.istio.io/v1alpha3\nkind: DestinationRule\nmetadata:\n    name: \"nats-no-mtls\"\n    namespace: openfaas\nspec:\n    host: \"nats.openfaas.svc.cluster.local\"\n    trafficPolicy:\n        tls:\n            mode: DISABLE\n```\n\nSave the above resource as of-nats-no-mtls.yaml and then apply it:\n\n```bash\nkubectl apply -f ./of-nats-no-mtls.yaml\n```\n\nEnable mTLS on `openfaas-fn` namespace:\n\n```yaml\napiVersion: authentication.istio.io/v1alpha1\nkind: Policy\nmetadata:\n  name: default\n  namespace: openfaas-fn\nspec:\n  peers:\n  - mtls: {}\n---\napiVersion: networking.istio.io/v1alpha3\nkind: DestinationRule\nmetadata:\n  name: default\n  namespace: openfaas-fn\nspec:\n  host: \"*.openfaas-fn.svc.cluster.local\"\n  trafficPolicy:\n    tls:\n      mode: ISTIO_MUTUAL\n```\n\nSave the above resource as of-functions-mtls.yaml and then apply it:\n\n```bash\nkubectl apply -f ./of-functions-mtls.yaml\n```\n\nNext: 
[Configure OpenFaaS access policies](02-mixer-rules.md)\n"
  },
  {
    "path": "docs/openfaas/02-mixer-rules.md",
    "content": "# Configure OpenFaaS access policies\n\nKubernetes namespaces alone offer only a logical separation between workloads.\nTo prohibit functions from calling each other or from reaching \nthe OpenFaaS core services we need to create Istio Mixer rules.\n\nDeny access to OpenFaaS core services from the `openfaas-fn` namespace except for system functions:\n\n```yaml\napiVersion: config.istio.io/v1alpha2\nkind: denier\nmetadata:\n  name: denyhandler\n  namespace: openfaas\nspec:\n  status:\n    code: 7\n    message: Not allowed\n---\napiVersion: config.istio.io/v1alpha2\nkind: checknothing\nmetadata:\n  name: denyrequest\n  namespace: openfaas\nspec:\n---\napiVersion: config.istio.io/v1alpha2\nkind: rule\nmetadata:\n  name: denyopenfaasfn\n  namespace: openfaas\nspec:\n  match: destination.namespace == \"openfaas\" && source.namespace == \"openfaas-fn\" && source.labels[\"role\"] != \"openfaas-system\"\n  actions:\n  - handler: denyhandler.denier\n    instances: [ denyrequest.checknothing ]\n```\n\nSave the above resources as of-rules.yaml and then apply it:\n\n```bash\nkubectl apply -f ./of-rules.yaml\n```\n\nDeny access to functions except for OpenFaaS core services:\n\n```yaml\napiVersion: config.istio.io/v1alpha2\nkind: denier\nmetadata:\n  name: denyhandler\n  namespace: openfaas-fn\nspec:\n  status:\n    code: 7\n    message: Not allowed\n---\napiVersion: config.istio.io/v1alpha2\nkind: checknothing\nmetadata:\n  name: denyrequest\n  namespace: openfaas-fn\nspec:\n---\napiVersion: config.istio.io/v1alpha2\nkind: rule\nmetadata:\n  name: denyopenfaasfn\n  namespace: openfaas-fn\nspec:\n  match: destination.namespace == \"openfaas-fn\" && source.namespace != \"openfaas\" && source.labels[\"role\"] != \"openfaas-system\"\n  actions:\n  - handler: denyhandler.denier\n    instances: [ denyrequest.checknothing ]\n```\n\nSave the above resources as of-functions-rules.yaml and then apply it:\n\n```bash\nkubectl apply -f 
./of-functions-rules.yaml\n```\n\nNext: [Install OpenFaaS with Helm](03-openfaas-setup.md)\n"
  },
  {
    "path": "docs/openfaas/03-openfaas-setup.md",
    "content": "# Install OpenFaaS with Helm\n\nBefore installing OpenFaaS you need to provide the basic authentication credential for the OpenFaaS gateway.\n\nCreate a secret named `basic-auth` in the `openfaas` namespace:\n\n```bash\n# generate a random password\npassword=$(head -c 12 /dev/urandom | shasum| cut -d' ' -f1)\n\nkubectl -n openfaas create secret generic basic-auth \\\n--from-literal=basic-auth-user=admin \\\n--from-literal=basic-auth-password=$password\n```\n\nAdd the OpenFaaS `helm` repository:\n\n```bash\nhelm repo add openfaas https://openfaas.github.io/faas-netes/\n```\n\nInstall OpenFaaS with Helm:\n\n```bash\nhelm upgrade --install openfaas openfaas/openfaas \\\n--namespace openfaas \\\n--set functionNamespace=openfaas-fn \\\n--set operator.create=true \\\n--set securityContext=true \\\n--set basic_auth=true \\\n--set exposeServices=false \\\n--set operator.createCRD=true\n```\n\nVerify that OpenFaaS workloads are running:\n\n```bash\nkubectl -n openfaas get pods\n```\n\nNext: [Configure OpenFaaS Gateway to receive external traffic](04-gateway-config.md)\n"
  },
  {
    "path": "docs/openfaas/04-gateway-config.md",
    "content": "# Configure OpenFaaS Gateway to receive external traffic\n\nCreate an Istio virtual service for OpenFaaS Gateway (replace `example.com` with your domain):\n\n```yaml\napiVersion: networking.istio.io/v1alpha3\nkind: VirtualService\nmetadata:\n  name: gateway\n  namespace: openfaas\nspec:\n  hosts:\n  - \"openfaas.example.com\"\n  gateways:\n  - public-gateway.istio-system.svc.cluster.local\n  http:\n  - route:\n    - destination:\n        host: gateway\n    timeout: 30s\n```\n\nSave the above resource as of-virtual-service.yaml and then apply it:\n\n```bash\nkubectl apply -f ./of-virtual-service.yaml\n```\n\nWait for OpenFaaS Gateway to come online:\n\n```bash\nwatch curl -v https://openfaas.example.com/healthz\n```\n\nSave your credentials in faas-cli store:\n\n```bash\necho $password | faas-cli login -g https://openfaas.example.com -u admin --password-stdin\n```\n\nNext: [Canary deployments for OpenFaaS functions](05-canary.md)\n"
  },
  {
    "path": "docs/openfaas/05-canary.md",
    "content": "# Canary deployments for OpenFaaS functions\n\n![openfaas-canary](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/openfaas-istio-canary.png)\n\nCreate a general available release for the `env` function version 1.0.0:\n\n```yaml\napiVersion: openfaas.com/v1alpha2\nkind: Function\nmetadata:\n  name: env\n  namespace: openfaas-fn\nspec:\n  name: env\n  image: stefanprodan/of-env:1.0.0\n  resources:\n    requests:\n      memory: \"32Mi\"\n      cpu: \"10m\"\n  limits:\n    memory: \"64Mi\"\n    cpu: \"100m\"\n```\n\nSave the above resources as env-ga.yaml and then apply it:\n\n```bash\nkubectl apply -f ./env-ga.yaml\n```\n\nCreate a canary release for version 1.1.0:\n\n```yaml\napiVersion: openfaas.com/v1alpha2\nkind: Function\nmetadata:\n  name: env-canary\n  namespace: openfaas-fn\nspec:\n  name: env-canary\n  image: stefanprodan/of-env:1.1.0\n  resources:\n    requests:\n      memory: \"32Mi\"\n      cpu: \"10m\"\n  limits:\n    memory: \"64Mi\"\n    cpu: \"100m\"\n```\n\nSave the above resources as env-canary.yaml and then apply it:\n\n```bash\nkubectl apply -f ./env-canary.yaml\n```\n\nCreate an Istio virtual service with 10% traffic going to canary:\n\n```yaml\napiVersion: networking.istio.io/v1alpha3\nkind: VirtualService\nmetadata:\n  name: env\n  namespace: openfaas-fn\nspec:\n  hosts:\n  - env\n  http:\n  - route:\n    - destination:\n        host: env\n      weight: 90\n    - destination:\n        host: env-canary\n      weight: 10\n    timeout: 30s\n```\n\nSave the above resources as env-virtual-service.yaml and then apply it:\n\n```bash\nkubectl apply -f ./env-virtual-service.yaml\n```\n\nTest traffic routing (one in ten calls should hit the canary release):\n\n```bash\n while true; do sleep 1; curl -sS https://openfaas.example.com/function/env | grep HOSTNAME; 
done\n\nHOSTNAME=env-59bf48fb9d-cjsjw\nHOSTNAME=env-59bf48fb9d-cjsjw\nHOSTNAME=env-59bf48fb9d-cjsjw\nHOSTNAME=env-59bf48fb9d-cjsjw\nHOSTNAME=env-59bf48fb9d-cjsjw\nHOSTNAME=env-59bf48fb9d-cjsjw\nHOSTNAME=env-59bf48fb9d-cjsjw\nHOSTNAME=env-59bf48fb9d-cjsjw\nHOSTNAME=env-59bf48fb9d-cjsjw\nHOSTNAME=env-canary-5dffdf4458-4vnn2\n```\n\nAccess Jaeger dashboard using port forwarding:\n\n```bash\nkubectl -n istio-system port-forward deployment/istio-tracing 16686:16686\n```\n\nTracing the general available release:\n\n![ga-trace](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/openfaas-istio-ga-trace.png)\n\nTracing the canary release:\n\n![canary-trace](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/openfaas-istio-canary-trace.png)\n\nMonitor ga vs canary success rate and latency with Grafana:\n\n![canary-prom](https://github.com/stefanprodan/istio-gke/blob/master/docs/screens/openfaas-istio-canary-prom.png)\n"
  }
]