[
  {
    "path": ".gitignore",
    "content": ".kube*\n.DS_Store*\n"
  },
  {
    "path": "AGENDA.md",
    "content": "# Schedule\n\nThe timeframes are only estimates and may vary according to how the class is progressing.\n\n_Each part should last for 50 minutes and will be followed by a 10-minute break._\n\n## DAY 1\n\n### Part I\nKubernetes intro (history, lineage, web resources)\nMinikube (installation, basic usage, relation to other k8s deployment methods)\n\n### Part II\nUsing kubectl (interact with your Kubernetes cluster, introduce basic primitives: pods, deployments, replica sets, services)\nAPI resources and specification (json/yaml manifests)\n\n### Part III\nLabels (the why and how of labels)\nServices (how to expose applications to the internet, service types, DNS)\n\n## DAY 2\n\n### Part I\nScaling, rolling updates and rollbacks\nIngress controllers (another way to expose apps using Ingress resources)\n\n### Part II\nVolumes (define volumes in Pods)\nDaemonSets (for admins who want to run system daemons via k8s)\n\n### Part III\nThird-party resources (and why they're important)\nPython client (custom controller 101, write a basic controller in Python)\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright {yyyy} {name of copyright owner}\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# Kubernetes Training and Cookbook\n\nThis repository contains instructions and examples for the O'Reilly Live Online Training for [Kubernetes](https://kubernetes.io).\nDates are listed in the [O'Reilly Live Online](https://www.safaribooksonline.com/live-training/) training schedule.\n\nIt also contains [examples and scripts](./cookbook) used in the Kubernetes [Cookbook](http://shop.oreilly.com/product/0636920064947.do).\n\n## Prerequisites\n\nIn this training we will use [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/) to run a local Kubernetes instance. We will access this local instance with the `kubectl` client.\n\n* Install [minikube](https://github.com/kubernetes/minikube/releases)\n* Install [kubectl](https://kubernetes.io/docs/user-guide/prereqs/)\n\nVerify your installation:\n\n```\n$ minikube version\nminikube version: v0.16.0\n\n$ minikube start\n\n$ kubectl version\nClient Version: version.Info{Major:\"1\", Minor:\"5\", GitVersion:\"v1.5.2\", GitCommit:\"08e099554f3c31f6e6f07b448ab3ed78d0520507\", GitTreeState:\"clean\", BuildDate:\"2017-01-12T04:57:25Z\", GoVersion:\"go1.7.4\", Compiler:\"gc\", Platform:\"darwin/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"5\", GitVersion:\"v1.5.2\", GitCommit:\"08e099554f3c31f6e6f07b448ab3ed78d0520507\", GitTreeState:\"clean\", BuildDate:\"1970-01-01T00:00:00Z\", GoVersion:\"go1.7.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n```\n\nIf you are impatient, you can now start playing with Kubernetes:\n\n* Create a deployment with `kubectl run ghost --image=ghost`\n* Do you see a running _Pod_? Check with `kubectl get pods`\n* Browse `kubectl --help`: what else can you do?\n\n## Links\n\n* Kubernetes [website](https://kubernetes.io)\n* Official Kubernetes [Documentation](https://kubernetes.io/docs/)\n* Research paper describing [_Borg_](https://research.google.com/pubs/pub43438.html)\n* Kubernetes YouTube [channel](https://www.youtube.com/channel/UCZ2bu0qutTOM0tHYa_jkIwg/featured)\n* Cloud Native Computing Foundation YouTube [channel](https://www.youtube.com/channel/UCvqbFHwN-nwalWPjPUKpvTA/feed)\n\n## Instructor\n\n*Sebastien Goasguen* is a twenty-year open source veteran. A member of the Apache Software Foundation, he worked on Apache CloudStack and Libcloud for several years before diving into the container world. He is the founder of [Skippbox](http://www.skippbox.com), a Kubernetes startup that develops open source tools for Kubernetes users. An avid blogger, he enjoys spreading the word about new cutting-edge technologies and also trains developers and sysadmins on all things Docker and Kubernetes. Sebastien is the author of the O’Reilly _Docker Cookbook_ and _60 Recipes for Apache CloudStack_.\n\n## Code of Conduct\n\nSince this is an official O'Reilly Training, we will adhere to the [O'Reilly conferences Code of Conduct](http://www.oreilly.com/conferences/code-of-conduct.html).\n\n_\"At O'Reilly, we assume that most people are intelligent and well-intended, and we're not inclined to tell people what to do. However, we want every O'Reilly conference to be a safe and productive environment for everyone. To that end, this code of conduct spells out the behavior we support and don't support at conferences.\"_\n\n## Trademark\n\nKubernetes is a registered trademark of the [Linux Foundation](https://www.linuxfoundation.org/trademark-usage).\n"
  },
  {
    "path": "history/03012019/oreilly.txt",
    "content": "  517  kubectl get pods\n  518  kubectl logs redis-dff85b6f4-4sq4l \n  519  kubectl exec -ti redis-dff85b6f4-4sq4l -- /bin/sh\n  520  vi pod.yaml\n  521  kubectl create -f pod.yaml \n  522  kubectl get pods\n  523  more pod.yaml \n  524  kubectl get pods\n  525  kubectl delete pods oreilly\n  526  kubectl delete pods redis-dff85b6f4-4sq4l\n  527  kubectl get pods\n  528  kubectl delete pods redis-dff85b6f4-w6k7c\n  529  kubectl get pods\n  530  clear\n  531  ls -l\n  532  clear\n  533  kubectl get pods\n  534  clear\n  535  kubectl get pods\n  536  more pod.yaml \n  537  kubectl create -f pod.yaml \n  538  kubectl get pods\n  539  kubectl get pods\n  540  kubectl delete pods oreilly\n  541  kubectl get pods\n  542  clear\n  543  vi rs.yaml\n  544  kubectl create -f rs.yaml \n  545  vi rs.yaml\n  546  kubectl create -f rs.yaml \n  547  more rs.yaml \n  548  kubectl get pods\n  549  kubectl delete pods oreilly-j78mg\n  550  kubectl get pods\n  551  kubectl get pods\n  552  more rs.yaml \n  553  more pod.yaml \n  554  kubectl get pods\n  555  kubectl get rs\n  556  kubectl get replicaset\n  557  more rs.yaml \n  558  kubectl get pods --show-labels\n  559  kubectl get pods -l tough=demo\n  560  git remote -v\n  561  cd manifests/\n  562  ls -l\n  563  cd 01-pod/\n  564  ls -l\n  565  more redis.yaml \n  566  cd ..\n  567  ls -l\n  568  cd 03-rs\n  569  ls -l\n  570  more rs.yaml \n  571  clear\n  572  ls -l\n  573  clear\n  574  kubectl get pods\n  575  kubectl get pods --v=9\n  576  clear\n  577  curl localhost:8001\n  578  curl localhost:8001/api/v1\n  579  curl localhost:8001/apis/apps/v1\n  580  kubectl get pods\n  581  kubectl delete pods oreilly-dbxfr --v=9\n  582  clear\n  583  kubectl get pods\n  584  kubectl get pods oreilly-5m87t -o yaml\n  585  kubectl get pods oreilly-5m87t -o json\n  586  kubectl get pods oreilly-5m87t -o yaml\n  587  clear\n  588  curl localhost:8001/api/v1/namespaces/default/pods/oreilly-5m87t\n  589  kubectl get pods\n  
590  curl -XDELETE localhost:8001/api/v1/namespaces/default/pods/oreilly-5m87t\n  591  kubectl get pods\n  592  clear\n  593  kubectl get pods\n  594  cd ..\n  595  cd ..\n  596  ls -l\n  597  more pod.yaml \n  598  kubectl create -f pod.yaml \n  599  kubectl get pods\n  600  vi pod.yaml \n  601  kubectl create -f pod.yaml \n  602  kubectl create namespace foo\n  603  kubectl create -f pod.yaml --namespace foo\n  604  kubectl get pods\n  605  kubectl get pods --all-namespaces\n  606  curl localhost:8001/api/v1/namespaces/default/pods\n  607  curl localhost:8001/api/v1/namespaces/foo/pods\n  608  curl localhost:8001/api/v1/namespaces/foo/pods |jq -r .items[].smetadata.name\n  609  curl localhost:8001/api/v1/namespaces/foo/pods |jq -r .items[].metadata.name\n  610  curl -s localhost:8001/api/v1/namespaces/foo/pods |jq -r .items[].metadata.name\n  611  curl -s localhost:8001/api/v1/namespaces/default/pods |jq -r .items[].metadata.name\n  612  kubectl get ns\n  613  kubectl get pods\n  614  kubectl create quota oreilly --hard=pods=4\n  615  kubectl get resourcequota\n  616  kubectl get resourcequota oreilly -o yaml\n  617  vi pod.yaml \n  618  kubectl create -f pod.yaml \n  619  kubectl create -f pod.yaml -n foo\n  620  clear\n  621  kubectl get pods\n  622  kubectl delete pods oreilly\n  623  more pod.yaml \n  624  kubectl create -f pod.yaml \n  625  kubectl get pods\n  626  kubectl logs foo\n  627  kubectl get pods\n  628  vi svc.yaml\n  629  kubectl get pods --show-labels\n  630  kubectl labels pods foo video=online\n  631  kubectl label pods foo video=online\n  632  kubectl get pods --show-labels\n  633  vi svc.yaml \n  634  more svc.yaml \n  635  kubectl create -f svc.yaml \n  636  kubectl get svc\n  637  kubectl get services\n  638  kubectl edit svc foo\n  639  kubectl get services\n  640  kubectl get services -w\n  641  kubectl get pods\n  642  kubectl logs foo \n  643  clear\n  644  kubectl get pods\n  645  kubectl get svc\n  646  vi pod.yaml \n  647  kubectl 
create -f pod.yaml \n  648  kubectl get pods\n  649  kubectl delete pods food\n  650  kubectl delete pods foo\n  651  kubectl create -f pod.yaml \n  652  kubectl get pods\n  653  kubectl logs nginxredis -c redis\n  654  kubectl logs nginxredis -c nginx\n  655  clear\n  656  kubectl get pods\n  657  kubectl get svc\n  658  kubectl get endpoints\n  659  more svc.yaml \n  660  kubectl get pods --show-labels\n  661  kubectl label pod nginxredis video=online\n  662  kubectl get pods --show-labels\n  663  kubectl get endpoints\n  664  kubectl get pods nginxredis -o json | jq -r .status.PodIP\n  665  kubectl get pods nginxredis -o json | jq -r .status.podIP\n  666  kubectl run -it --rm busy --image=busybox:1.27 -- /bin/sh\n  667  kubectl run -it --rm busybox --image=busybox:1.27 -- /bin/sh\n  668  kubectl get pods oreilly-t9jq8 -o yaml\n  669  kubectl exec -ti oreilly-t9jq8 -- /bin/sh\n  670  clear\n  671  kubectl get svc\n  672  kubectl delete svc foo\n  673  kubectl get nodes\n  674  clear\n  675  kubectl get pods\n  676  kubectl get ns\n  677  kubectl delete ns foo\n  678  kubectl get ns\n  679  kubectl get pods\n  680  clear\n  681  kubectl get pods\n  682  kubectl delete pods nginxredis\n  683  kubectl delete rs oreilly\n  684  ls -l\n  685  kubectl get pods\n  686  kubectl run --help\n  687  kubectl run nginx --image=nginx --dry-run -o json\n  688  kubectl run nginx --image=nginx --restart=never --dry-run -o json\n  689  kubectl run nginx --image=nginx --restart=Never --dry-run -o json\n  690  kubectl create\n  691  kubectl create services --help | more\n  692  kubectl create service\n  693  kubectl create service clusterip\n  694  kubectl create service clusterip -h\n  695  clear\n  696  kubectl create service\n  697  kubectl run --rm -it busy --image=busybox:1.29 -- /bin/sh\n  698  kubectl delete deployments busy\n  699  kubectl run --rm -it busy --image=busybox:1.29 -- /bin/sh\n  700  kubectl get deployments\n  701  kubectl delete deployments busybox\n  702  
clear\n  703  clear\n  704  kubectl get nodes\n  705  ls -l\n  706  more rs.yaml \n  707  vi rs.yaml \n  708  kubectl create -f rs.yaml \n  709  kubectl get rs\n  710  kubectl get pods\n  711  kubectl describe pods oreilly-bds8g\n  712  kubectl describe pods oreilly-bds8g\n  713  kubectl get pods\n  714  kubectl logs oreilly-bds8g\n  715  ls -l\n  716  clear\n  717  kubectl get pods\n  718  kubectl expose --help\n  719  clear\n  720  kubectl get rs\n  721  kubectl expose rs oreilly --port 80 --target-port 2368\n  722  kubectl get svc\n  723  kubectl get svc oreilly -o yaml\n  724  kubectl get pods -l day=second\n  725  kubectl get endpoints\n  726  kubectl get pods\n  727  kubectl run --it --rm debug --image=busybox:1.29 -- /bin/sh\n  728  kubectl run -it --rm debug --image=busybox:1.29 -- /bin/sh\n  729  kubectl run -it --rm debug1 --image=busybox:1.27 -- /bin/sh\n  730  kubectl run -it --rm debug1 --image=busybox:1.27 -- /bin/sh\n  731  kubectl run -it --rm debug2 --image=busybox:1.27 -- /bin/sh\n  732  clear\n  733  kubectl get pods\n  734  kubectl get svc\n  735  kubectl edit svc oreilly\n  736  kubectl get svc -w\n  737  clear\n  738  kubectl run nginx --image=nginx\n  739  kubectl get pods\n  740  kubectl scale deployment nginx --replicas 4\n  741  kubectl get pods\n  742  kubectl get rs\n  743  kubectl describe rs nginx-65899c769f\n  744  kubectl get resourcequota\n  745  kubectl edit resourcequota oreilly\n  746  kubectl get pods\n  747  kubectl set image deployment nginx nginx=nginx:1.1111\n  748  kubectl get pods\n  749  kubectl describe pods nginx-6f575df45b-dbjhc\n  750  kubectl get pods\n  751  kubectl set image deployment nginx nginx=redis\n  752  kubectl get rs -w\n  753  kubectl get pods\n  754  kubectl get rs\n  755  kubectl rollout history deployment nginx\n  756  kubectl rollout undo deployment nginx --to-revision 1\n  757  kubectl rollout history deployment nginx\n  758  kubectl get rs\n  759  kubectl get pods\n  760  kubectl get pods 
--show-labels\n  761  kubectl get pods -l run=nginx\n  762  kubectl get pods -l run=nginx -o json | jq -r .items[].spec.containers[0].image\n  763  kubectl rollout history deployment nginx\n  764  kubectl rollout undo deployment nginx --to-revision 2\n  765  kubectl get pods -l run=nginx -o json | jq -r .items[].spec.containers[0].image\n  766  kubectl get pods\n  767  kubectl rollout history deployment nginx\n  768  kubectl rollout undo deployment nginx --to-revision 4\n  769  kubectl get pods\n  770  kubectl get deployments\n  771  kubectl get replicasets\n  772  kubectl get pods\n  773  kubectl expose deployments nginx --port 80 --type LoadBalancer\n  774  kubectl get svc\n  775  kubectl get svc -w\n  776  kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/all-in-one/guestbook-all-in-one.yaml\n  777  kubectl get deployments\n  778  kubectl get pods\n  779  kubectl get pods\n  780  kubectl get svc\n  781  kubectl delete deployment nginx\n  782  kubectl delete svc oreilly nginx\n  783  kubectl delete deployment redis\n  784  clear\n  785  kubectl get svc\n  786  kubectl edit svc frontend\n  787  kubectl get svc -w\n  788  kubectl get pods\n  789  kubectl scale deployments redis-slave --replicas 4\n  790  kubectl get pods\n  791  kubectl get pods\n  792  kubectl exec -ti redis-master-55db5f7567-qthrj -- redis-cli info\n  793  kubectl scale deployments redis-slave --replicas 1\n  794  kubectl get pods\n  795  kubectl exec -ti redis-master-55db5f7567-qthrj -- redis-cli info\n  796  kubectl exec -ti redis-master-55db5f7567-qthrj -- redis-cli info\n  797  kubectl exec -ti redis-master-55db5f7567-qthrj -- redis-cli info\n  798  kubectl exec -ti redis-master-55db5f7567-qthrj -- redis-cli info\n  799  kubectl exec -ti redis-master-55db5f7567-qthrj -- redis-cli info\n  800  clear\n  801  kubectl run mysql --image=mysql:5.5 --env MYSQL_ROOT_PASSWORD=root\n  802  kubectl expose deployment mysql --port 3306\n  803  kubectl get pods\n  804 
 kubectl run wordpress --image=wordpress --env WORDPRESS_DB_HOST=mysql --env WORDPRESS_DB_PASSWORD=root\n  805  kubectl expose deployment wordpress --port 80 --type LoadBalancer\n  806  kubectl get pods\n  807  kubectl exec -ti mysql-55d65b64bb-x7ts6 -- mysql -uroot -p\n  808  clear\n  809  ls -l\n  810  kubectl run -it debuggg --image=busybox:1.27 -- /bin/sh\n  811  kubectl get pods\n  812  kubectl get svc\n  813  clear\n  814  ls -l\n  815  clear\n  816  kubectl get pods\n  817  kubectl delete rs oreilly\n  818  kubectl delete deployment debuggg\n  819  clear\n  820  kubectl get pods\n  821  ls -l\n  822  ls -l\n  823  clear\n  824  ls -l\n  825  vi index.html\n  826  cat index.html \n  827  kubectl create configmap www --from-file=index.html \n  828  kubectl get configmap\n  829  kubectl get cm\n  830  kubectl get cm www -o yaml\n  831  vi w.yaml\n  832  more w.yaml \n  833  kubectl create -f w.yaml \n  834  kubectl get pods\n  835  kubectl exec -ti www -- ls -l /usr/share/nginx/html\n  836  kubectl expose pod www --port 80 --type LoadBalancer\n  837  kubectl label pod www foo=bar\n  838  kubectl expose pod www --port 80 --type LoadBalancer\n  839  kubectl get svc -w\n  840  kubectl logs www\n  841  kubectl logs -f www\n  842  clear\n  843  ls -l\n  844  more w.yaml \n  845  ls -l\n  846  cd manifests/\n  847  ls -l\n  848  cd 06-volumes/\n  849  ls -l\n  850  pwd\n  851  more configmap.yaml \n  852  ls -l\n  853  more cm-vol.yaml \n  854  ls -l\n  855  more foobar.md \n  856  ls -l\n  857  more volumes.yaml \n  858  kubectl create -f volumes.yaml \n  859  kubectl get pods\n  860  kubectl exec -ti vol -c busy -- ls -l /busy\n  861  kubectl cp --help\n  862  kubectl cp volumes.yaml vol:/busy/volumes.yaml -c busy\n  863  kubectl exec -ti vol -c busy -- ls -l /busy\n  864  kubectl exec -ti vol -c box -- ls -l /busy\n  865  kubectl exec -ti vol -c box -- ls -l /box\n  866  kubectl exec -ti vol -c box -- cat /box/volumes.yaml\n  867  ls -l\n  868  kubectl get pods\n  
869  kubectl delete deployments wordpress mysql\n  870  kubectl delete pods vol www\n  871  clear\n  872  kubectl get pods\n  873  ls -l\n  874  more pvc.yaml \n  875  kubectl get persistentvolumeclaim\n  876  kubectl get persistentvolume\n  877  kubectl get pvc\n  878  kubectl get pv\n  879  kubectl create -f pvc.yaml \n  880  kubectl get pv\n  881  kubectl get pvc\n  882  kubectl get pv pvc-bea040c8-1033-11e9-bbda-42010a8000a0 -o yaml\n  883  ls -l\n  884  more mysql.yaml \n  885  kubectl get pvc\n  886  kubectl create -f mysql.yaml \n  887  kubectl get pods\n  888  kubectl get pods\n  889  kubectl get pods\n  890  kubectl get pods\n  891  kubectl get pods -w\n  892  kubectl exec -ti data -- mysql -uroot -p\n  893  kubectl get pods\n  894  kubectl delete pods data\n  895  kubectl get pods\n  896  kubectl get pvc\n  897  ls -l\n  898  more mysql.yaml \n  899  ls -l\n  900  cp hostpath.yaml test.yaml\n  901  vi test.yaml \n  902  kubectl create -f test.yaml \n  903  kubectl get pods\n  904  kubectl get pods\n  905  kubectl get pods\n  906  kubectl exce -ti pvctest -- ls -l /oreilly\n  907  kubectl exec -ti pvctest -- ls -l /oreilly\n  908  kubectl exec -ti pvctest -- ls -l /oreilly/oreilly\n  909  kubectl get pods\n  910  kubectl delete pods pvctest\n  911  clear\n  912  kubectl run foo --image=ghost --dry-run -o yaml\n  913  kubectl run foo --image=ghost --dry-run -o json\n  914  kubectl run foo --image=ghost --dry-run -o json > foo.json\n  915  more foo.json \n  916  kubectl get pods\n  917  kubectl run foo --image=ghost --restart=Never --dry-run -o yaml\n  918  kubectl get pods\n  919  kubectl expose deployment frontend --port 80 --dry-run -o yaml \n  920  kubectl create\n  921  kubectl create quote foo --hard=pod=2 --dry-run -o yaml\n  922  kubectl create quota foo --hard=pod=2 --dry-run -o yaml\n  923  clear\n  924  which helm\n  925  helm init\n  926  kubectl get pods -n kube-system\n  927  clear\n  928  helm repo list\n  929  helm search redis\n  930  helm 
install stable/redis\n  931  helm repo update\n  932  helm install stable/redis\n  933  helm inspect stable/redis\n  934  helm install stable/redis-ha\n  935  helm create oreilly\n  936  tree oreilly/\n  937  cd oreilly/templates/\n  938  ls -l\n  939  more deployment.yaml \n  940  clear\n  941  cd ..\n  942  cd ..\n  943  ls -l\n  944  cd ..\n  945  ls -l\n  946  cd 07-crd/\n  947  ls -l\n  948  clear\n  949  kubectl get database\n  950  more database.yml \n  951  kubectl create -f database.yml \n  952  kubectl get database\n  953  kubectl proxy ?&\n  954  kubectl proxy &\n  955  ps -ef |grep proxy\n  956  curl localhost:8001/apis/\n  957  curl localhost:8001/\n  958  curl localhost:8001/apis/foo.bar/v1\n  959  ls -l\n  960  more db.yml \n  961  kubectl create -f db.yml \n  962  kubectl get db\n  963  kubectl get db my-new-db -o yaml\n  964  kubectl edit db my-new-db\n  965  kubectl get db\n  966  clear\n  967  ipython\n  968  cd ..\n  969  ls -l\n  970  cd 05-ingress-controller/\n  971  ls -l\n  972  more frontend.yaml \n"
  },
  {
    "path": "history/09282018/history.txt",
    "content": "    1  kubectl get pods\n    2  which git\n    3  sudo su apt-get install -y git\n    4  sudo su apt-get install git\n    5  sudo su\n    6  kubectl get pods\n    7  sudo kubectl get pods\n    8  kubectl get pods\n    9  clear\n   10  git clone https://github.com/sebgoa/oreilly-kubernetes.git\n   11  ls -l\n   12  cd oreilly-kubernetes/\n   13  ls -l\n   14  cd manifests/\n   15  ls -l\n   16  clear\n   17  tree\n   18  clear\n   19  sudo su\n   20  tree\n   21  clear\n   22  kubectl get pods\n   23  sudo su\n   24  kubectl get pods\n   25  clear\n   26  lear\n   27  clear\n   28  kubectl get nodes\n   29  cd ..\n   30  git remote -v\n   31  cd manifests/\n   32  ls -l\n   33  cd 01-pod/\n   34  ls -l\n   35  more redis.yaml \n   36  vi pod.yaml\n   37  more pod.yaml \n   38  kubectl create -f pod.yaml \n   39  clear\n   40  kubectl get pods\n   41  kubectl get pods --show-labels\n   42  kubectl label pods oreilly app=nginx\n   43  kubectl get pods --show-labels\n   44  kubectl expose pods oreilly --port 80 --type NodePort -o json --dry-run \n   45  kubectl expose pods oreilly --port 80 --type NodePort -o yaml --dry-run \n   46  kubectl expose pods oreilly --port 80 --type NodePort -o yaml --dry-run > svc.yaml\n   47  kubectl create -f svc.yaml \n   48  kubectl get svc\n   49  kubectl get endpoints\n   50  clear\n   51  kubectl get pods\n   52  which js\n   53  which jq\n   54  sudo su\n   55  clear\n   56  which jq\n   57  kubectl get --help\n   58  clear\n   59  kubectl get pods -o json |jq -r\n   60  kubectl get pods -o json |jq -r .status.podIP\n   61  kubectl get pods -o json |jq -r .items[].status.podIP\n   62  kubectl get endpoints\n   63  clear\n   64  kubectl run mysql --image=mysql:5.5 --env MYSQL_ROOT_PASSWORD=root\n   65  kubectl expose deployment mysql --port 3306\n   66  kubectl run wordpress --image=wordpress --env WORDPRESS_DB_PASSWORD=root --env WORDPRESS_DB_HOST=mysql\n   67  kubectl expose deployment wordpress --port 3306 --type 
NodePort\n   68  kubectl get pods\n   69  kubectl get svc\n   70  clear\n   71  kubectl get svc\n   72  kubectl get endpoints\n   73  kubectl get pods --show-labels\n   74  kubectl exec -it mysql-796fbf7-42hhj mysql -uroot -p\n   75  kubectl exec -it mysql-796fbf7-42hhj -- mysql -uroot -p\n   76  clear\n   77  kubectl get pods\n   78  kubectl logs wordpress-7c78cb8675-w5dbv\n   79  kubectl get pods\n   80  kubectl get svc\n   81  kubectl edit svc wordpress\n   82  kubectl get svc\n   83  clear\n   84  ls -l\n   85  cd ..\n   86  ls -l\n   87  06-volumes/\n   88  ls -l\n   89  cd 06-volumes/\n   90  ls -l\n   91  more volumes.yaml \n   92  kubectl create secret\n   93  kubectl create secret generic foo --from-literal=secret=password\n   94  kubectl get secrets\n   95  vi file.txt\n   96  cat file.txt \n   97  kubectl create cm foo --from-file=file.txt\n   98  kubectl get cm\n   99  clear\n  100  kubectl get secrets\n  101  kubectl get cm\n  102  kubectl get cm foo -o yaml\n  103  vi vol.yaml\n  104  kubectl create -f vol.yaml \n  105  clear\n  106  kubectl get pod\n  107  kubectl exec -ti foo -- ls -l /tmp\n  108  kubectl exec -ti foo -- ls -l /tmp/cm\n  109  kubectl exec -ti foo -- ls -l /tmp/secret\n  110  kubectl exec -ti foo -- cat /tmp/cm/file.txt\n  111  kubectl exec -ti foo -- cat /tmp/secret/password\n  112  kubectl exec -ti foo -- cat /tmp/secret/secret\n  113  more pod\n  114  more vol.yaml \n  115  clear\n  116  ls -l\n  117  kubectl get deployments\n  118  kubectl get svc\n  119  cd ..\n  120  ls -l\n  121  mkdir rydercup\n  122  cd rydercup/\n  123  ls -l\n  124  clear\n  125  kubectl get deployments wordpress --export -o yaml \n  126  kubectl get deployments wordpress --export -o yaml > wp.yaml\n  127  kubectl get deployments mysql --export -o yaml > mysql.yaml\n  128  ls -l\n  129  kubectl get svc wordpress --export -o yaml > wp-svc.yaml\n  130  kubectl get svc mysql --export -o yaml > mysql-svc.yaml\n  131  ls -l\n  132  more wp-svc.yaml \n  133  
kubectl delete deployments wordpress mysql\n  134  kubectl delete svc wordpress mysql\n  135  kubectl get pods\n  136  kubectl delete pods foo oreilly\n  137  kubectl get pods\n  138  clear\n  139  ls -l\n  140  kubectl create -f mysql-svc.yaml \n  141  kubectl create -f mysql.yaml \n  142  kubectl create -f wp.yaml \n  143  kubectl create -f wp-svc.yaml \n  144  kubectl get pods\n  145  kubectl delete -f wp.yaml\n  146  kubectl get pods\n  147  kubectl create -f wp.yaml \n  148  kubectl get pods\n  149  vi wp-svc.yaml \n  150  kubectl replace -f wp-svc.yaml \n  151  kubectl delete -f wp-svc.yaml \n  152  kubectl replace -f wp-svc.yaml \n  153  kubectl create -f wp-svc.yaml \n  154  ls -l\n  155  kubectl delete -f mysql-svc.yaml \n  156  kubectl delete -f mysql.yaml \n  157  kubectl delete -f wp.yaml \n  158  kubectl delete -f wp-svc.yaml \n  159  cd ..\n  160  kubectl get pods\n  161  ls -l\n  162  kubectl get pods\n  163  kubectl apply -f ./rydercup/\n  164  kubectl get pods\n  165  vi rydercup/wp.yaml \n  166  kubectl apply -f ./rydercup/\n  167  kubectl get pods --show-labels\n  168  vi rydercup/wp.yaml \n  169  kubectl apply -f ./rydercup/\n  170  kubectl get pods --show-labels\n  171  ls -l\n  172  kubectl edit deployment wp\n  173  kubectl edit deployment wordpress\n  174  clear\n  175  kubectl get pods\n  176  kubectl delete -f ./rydercup/\n  177  kubectl get pods\n  178  clear\n  179  kubectl get pods\n  180  which helm \n  181  helm version\n  182  kubectl version\n  183  helm \n  184  kubectl get pods -n kube-system\n  185  helm repo list\n  186  helm init --client-only\n  187  helm repo list\n  188  helm search redis\n  189  helm search minio\n  190  helm inspect stable/minio\n  191  helm inspect values stable/minio\n  192  helm install stable/minio\n  193  kubectl get svc\n  194  kubectl edit svc vociferous-condor-minio\n  195  kubectl get svc\n  196  clear\n  197  ls -l\n  198  cd rydercup/\n  199  ls -l\n  200  cd ..\n  201  helm\n  202  helm create 
oreilly\n  203  tree oreilly/\n  204  cd oreilly/\n  205  ls\n  206  ls -l\n  207  rm -rf charts/\n  208  vi Chart.yaml \n  209  vi values.yaml \n  210  rm values.yaml \n  211  vi values.yaml\n  212  cd ..\n  213  ls -l\n  214  rm -rf oreilly/\n  215  ls -l\n  216  kubectl get svc\n  217  kubectl edit svc vociferous-condor-minio\n  218  ls -l\n  219  helm\n  220  helm create oreilly\n  221  tree oreilly/\n  222  cd oreilly/\n  223  ls -l\n  224  rm -rf charts/\n  225  vi Chart.yaml \n  226  ls -l\n  227  vi values.yaml \n  228  more values.yaml \n  229  cd templates/\n  230  ls -l\n  231  rm *.yaml\n  232  ls -l\n  233  rm _helpers.tpl \n  234  rm N\n  235  rm NOTES.txt \n  236  ls -l\n  237  cp ../../rydercup/*\n  238  ls -l\n  239  cp ../../rydercup/* .\n  240  ls -l\n  241  vi wp.yaml \n  242  cd ..\n  243  ls -l\n  244  cd ..\n  245  ls -l\n  246  helm ls\n  247  helm install ./oreilly/\n  248  helm create foo\n  249  cd foo/templates/\n  250  ls -l\n  251  more deployment.yaml \n  252  cd ..\n  253  vi oreilly/templates/wp.yaml \n  254  helm install ./oreilly/\n  255  ls -l\n  256  cd rydercup/\n  257  ls -l\n  258  cd ..\n  259  cd oreilly/\n  260  ls -l\n  261  cd templates/\n  262  ls -l\n  263  vi pvc.yaml\n  264  vi mysql.yaml \n  265  more pvc.yaml \n  266  ls -l\n  267  cd ..\n  268  helm ls\n  269  helm wobbly-cricket\n  270  helm delete wobbly-cricket\n  271  helm delete vociferous-condor\n  272  kubectl get pods\n  273  clear\n  274  kubectl get pods\n  275  clear\n  276  kubectl get pods\n  277  clear\n  278  kubectl get pods\n  279  ls -l\n  280  helm install ./oreilly/\n  281  kubectl get pods\n  282  kubectl get pvc\n  283  kubectl get pv\n  284  kubectl get svc\n  285  cat oreilly/templates/pvc.yaml \n  286  cd ..\n  287  ls -l\n  288  wget https://github.com/kubernetes/kompose/releases/download/v1.16.0/kompose-linux-amd64\n  289  ls -l\n  290  mv kompose-linux-amd64 kompose\n  291  chmod 744 kompose \n  292  kompose\n  293  ./kompose \n  294  
vi docker-compose.yaml\n  295  ./kompose convert \n  296  ls -l\n  297  vi docker-compose.yaml\n  298  ./kompose convert \n  299  ls -l\n  300  more frontend-deployment.yaml \n  301  ./kompose up\n  302  kubectl proxy &\n  303  ps -ef|grep proxy\n  304  kill -9 24762\n  305  ps -ef|grep proxy\n  306  kubectl proxy --port 8080&\n  307  ./kompose up\n  308  kubectl get pods\n  309  kubectl get svc\n  310  kubectl get pods\n  311  ls -l\n  312  cd scripts/\n  313  ls -l\n  314  more create_pod.py \n  315  ls -l\n  316  more create_cronjob.py \n  317  ls -l\n  318  cd ..\n  319  ls -l\n  320  cd oreilly-kubernetes/\n  321  ls -l\n  322  rm oreilly-0.1.0.tgz \n  323  ls -l\n  324  cd m\n  325  cd manifests/\n  326  ls -l\n  327  rm -rf rydercup/\n  328  rm -rf oreilly/\n  329  ls -l\n  330  cd 07-crd\n  331  ls -l\n  332  more database.yml \n  333  kubectl get databases\n  334  kubectl create -f database.yml \n  335  kubectl get databases\n  336  kubectl apiversions\n  337  kubectl api-versions\n  338  curl localhost:8080/foo.bar/v1\n  339  curl localhost:8080/apis/foo.bar/v1\n  340  more db.yml \n  341  kubectl apply -f db.yml \n  342  kubectl get db\n  343  kubectl get db my-new-db -o yaml\n  344  vi db.yml \n  345  kubectl apply -f db.yml \n  346  kubectl get databases\n"
  },
  {
    "path": "history/11042017/history.txt",
    "content": "    7  which gcloud\n    8  gcloud container clusters list\n    9  gcloud container clusters create foobar\n   10  clear\n   11  gcloud container clusters list\n   12  kubectl get nodes\n   13  kubectl config view\n   14  kubectl config view | more\n   15  clear\n   16  kubectl config use-context minikube\n   17  kubectl get nodes\n   18  kubectl config use-context gke_skippbox_europe-west1-b_foobar\n   19  kubectl get nodes\n   20  clear\n   25  kubectl get nodes\n   26  kubectl config use-context minikube\n   27  kubectl get nodes\n   28  minikube version\n   29  minikube dashboard\n   30  clear\n   31  kubectl get pods\n   32  kubectl get deployments\n   33  kubectl get replicasets\n   34  kubectl logs redis-3215927958-kqvfb\n   35  minikube ssh\n   36  clear\n   37  kubectl get pods\n   38  kubectl scale deployments redis --replicas=4\n   39  kubectl get pods\n   40  kubectl get pods\n   41  kubectl delete pods redis-3215927958-tqp1q\n   42  kubectl get pods\n   43  kubectl scale deployments redis --replicas=2\n   44  kubectl scale deployments redis --replicas=0\n   45  clear\n   46  kubectl get pods\n   47  kubectl get deployments\n   48  kubectl scale deployments redis --replicas=1\n   49  kubectl get deployments\n   50  kubectl get pods\n   51  clear\n   52  ls -l\n   53  vi foo.yaml\n   54  kubectl create -f foo.yaml \n   55  kubectl get pods\n   56  more foo.yaml \n   57  kubectl get redis-3215927958-h243b -o yaml | more\n   58  kubectl get pods redis-3215927958-h243b -o yaml | more\n   59  clear\n   60  more foo.yaml \n   61  kubectl get pods\n   62  vi foo.yaml \n   63  kubectl apply -f foo.yaml \n   64  kubectl get pods\n   65  kubectl delete pods foobar\n   66  kubectl apply -f foo.yaml \n   67  clear\n   68  kubectl get pods\n   69  kubectl get pods\n   70  kubectl get pods\n   71  kubectl apply -f foo.yaml \n   72  kubectl get pods\n   73  clear\n   74  kubectl get pods\n   75  kubectl --v=99 get pods | more\n   76  kubectl --v=99 
delete pods foobar | more\n   77  clear\n   78  minikube ssh\n   79  clear\n   80  kubectl get pods\n   81  kubectl exec -ti redis-3215927958-h243b -- /bin/bash\n   82  which redis-cli\n   83  which brew\n   84  clear\n   85  kubectl get namespace\n   86  kubectl get ns\n   87  kubectl get pods --all-namespaces\n   88  more foo.yaml \n   89  kubectl apply -f foo.yaml \n   90  kubectl get pods\n   91  kubectl get pods\n   92  kubectl create -f foo.yaml \n   93  clear\n   94  kubectl create ns oreilly\n   95  kubectl get ns\n   96  vi foo.yaml \n   97  kubectl create -f foo.yaml \n   98  kubectl get pods\n   99  kubectl get pods --all-namespaces\n  100  minikube ssh\n  101  clear\n  102  kubectl get pods\n  103  ls -l\n  104  cd manifests/\n  105  ls -l\n  106  more rq.yaml \n  107  clear\n  108  kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/all-in-one/guestbook-all-in-one.yaml\n  109  kubectl get pods\n  110  kubectl get deployments\n  111  kubectl get service\n  112  clear\n  113  kubectl get pods\n  114  ls -l\n  115  git remote -v\n  116  more rq.yaml \n  117  kubectl get pods --namespace oreilly\n  118  vi quota.yaml\n  119  kubectl create -f quota.yaml \n  120  kubectl get resourcequotas\n  121  kubectl get resourcequotas --namespace oreilly\n  122  vi test.yaml\n  123  kubectl get pods --namespace oreilly\n  124  kubectl create -f test.yaml \n  125  kubectl edit resourcequotas counts --namespace oreilly\n  126  kubectl create -f test.yaml \n  127  kubectl get pods --namespace oreilly\n  128  kubectl get pods\n  129  kubectl get svc\n  130  kubectl edit svc frontend\n  131  kubectl get svc\n  132  minikube ip\n  133  kubectl run ghost --image=ghost\n  134  kubectl get pods\n  135  kubectl get deployments\n  136  kubectl expose deployment ghost --port=2368 --type=NodePort\n  137  kubectl get svc\n  138  clear\n  139  kubectl get pods\n  140  kubectl get pods foobar -o yaml\n  141  clear\n  142  kubectl get 
pods\n  143  kubectl get pods --show-labels\n  144  kubectl get pods -Lrun\n  145  kubectl get pods -l run=ghost\n  146  kubectl scale deployments ghost --replicas=3\n  147  kubectl get pods -l run=ghost\n  148  clear\n  149  kubectl get deployments\n  150  kubectl get rs\n  151  kubectl get rs ghost-2663835528 -o yaml\n  152  kubectl get pods\n  153  kubectl get pods -l run=ghost\n  154  kubectl label pods ghost-2663835528-1269w run=ghost-not-working --overwrite\n  155  kubectl get pods -Lrun\n  156  clear\n  157  kubectl get svc\n  158  kubectl get svc ghost -o yaml\n  159  kubectl get endpoints\n  160  kubectl scale deployments ghost --replicas=1\n  161  kubectl get pods -l run=ghost\n  162  kubectl get endpoints\n  163  clear\n  164  clear\n  165  kubectl get pods\n  166  kubectl delete ns oreilly\n  167  kubectl delete deployments frontend, redis-master, redis-slave\n  168  kubectl delete deployments frontend redis-master redis-slave\n  169  kubectl get pods\n  170  kubectl delete deployment ghost\n  171  kubectl delete pods foobar\n  172  kubectl get ns\n  173  kubectl get pods\n  174  kubectl delete deployment redis\n  175  kubectl get pods\n  176  kubectl get pods\n  177  kubectl get svc\n  178  kubectl delete svc frontend ghost redis-master redis-slave\n"
  },
  {
    "path": "history/14022016/history-1.txt",
    "content": "kubectl get pods\n  509  kubectl get rc\n  510  kubectl get rs\n  516  kubectl create -f redis-rc.yaml \n  517  kubectl get rc\n  518  kubectl get rs\n  519  kubectl delete deployments redis \n  520  kubectl get rs\n  521  kubectl get pods\n  522  kubectl delete pods busybox\n  523  kubectl delete pods multi\n  525  kubectl get pods\n  526  kubectl get rc\n  527  kubectl get rc redis -o yaml\n  528  kubectl get rc redis -o yaml | more\n  531  kubectl get rc\n  532  kubectl get pods\n  534  kubectl scale rc redis --replicas=4\n  535  kubectl get pods\n  537  kubectl get pods --show-labels\n  538  kubectl label pods redis-pt732 app=foobar\n  539  kubectl label pods --overwrite redis-pt732 app=foobar\n  540  kubectl get pods --show-labels\n  541  kubectl delete rc redis\n  542  kubectl get pods --show-labels\n  543  kubectl logs redis-pt732\n  545  kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/all-in-one/guestbook-all-in-one.yaml\n  546  kubectl get pods\n  547  kubectl exec -ti redis-master-343230949-lktfz -- redis-cli\n  549  kubectl get deployments\n  550  kubectl scale deployments redis-slave --replicas=5\n  551  kubectl get pods\n  552  kubectl exec -ti redis-master-343230949-lktfz -- redis-cli info\n  554  kubectl get pods\n  555  kubectl get svc\n  556  kubectl get services\n  557  kubectl edit svc frontend\n  558  kubectl describe svc frontend\n  560  kubectl get svc\n  561  kubectl create -f busybox.yaml \n  562  kubectl get pods\n  563  kubectl exec -ti busybox -- nslookup frontend\n  564  kubectl exec -ti busybox -- nslookup redis-master\n  565  kubectl exec -ti busybox -- nslookup redis-slave\n  567  kubectl get endpoints\n  568  kubectl get pods\n  569  kubectl get svc frontend -o yaml\n  570  kubectl get pods --show-labels\n  572  kubectl get pods\n  573  kubectl run ghost --image=ghost\n  574  kubectl expose deployment ghost --port=2368 --type=NodePort\n  575  kubectl get pods\n  576  
kubectl get svc\n  577  kubectl get pods\n  579  kubectl get svc\n  580  kubectl get endpoints\n"
  },
  {
    "path": "history/14022016/history.txt",
    "content": "gcloud container clusters create oreilly\n  644  clear\n  645  kubectl get nodes\n  646  kubectl config use-context minikube\n  647  kubectl get nodes\n  648  vi ~/.kube/config\n  649  kubectl config use-context gke_skippbox_europe-west1-b_oreilly\n  650  kubectl get nodes\n  651  kubectl config use-context minikube\n  652  kubectl get nodes\n  653  minikube ssh\n  654  clear\n  655  minikube status\n  656  kubectl get nodes\n  657  minikube dashboard\n  658  kubectl get pods\n  659  kubectl get replicasets\n  660  kubectl get deployments\n  661  kubectl scale deployments redis --replicas=5\n  662  kubectl get replicasets\n  663  kubectl get pods\n  664  kubectl scale deployments redis --replicas=2\n  665  kubectl get pods\n  666  clear\n  667  kubectl get pods\n  668  kubectl exec -ti redis-3133791336-1cgrj -- redis-cli\n  669  clear\n  670  kubectl get pods\n  671  minikube ssh\n  672  clear\n  673  kubectl get pods\n  674  redis-cli\n  675  clear\n  676  kubectl get pods\n  677  kubectl get po\n  678  kubectl get rs\n  679  kubectl get deployments\n  680  kubectl get secrets\n  681  kubectl get configmaps\n  682  kubectl get persistentvolumes\n  683  kubectl get namespaces\n  684  kubectl get services\n  685  clear\n  686  kubectl get pods\n  687  kubectl delete pods redis-3133791336-1cgrj\n  688  kubectl get pods\n  689  kubectl get pods\n  690  kubectl scale deployment redis --replicas=1\n  691  kubectl get pods\n  692  kubectl --v=99 get pods\n  693  kubectl --v=99 get svc\n  694  clear\n  695  minikube ssh\n  696  clear\n  697  kubectl get pods\n  698  kubectl get pods redis-3133791336-c37p0 -o yaml\n  699  kubectl get pods redis-3133791336-c37p0 -o yaml | more\n  700  kubectl get pods redis-3133791336-c37p0 -o json |more\n  701  kubectl get pods redis-3133791336-c37p0 -o json | jq\n  702  ls -l\n  703  clear\n  704  git remote -v\n  705  ls -l\n  706  cd manifests/\n  707  ls -l\n  708  clear\n  709  cat busybox.yaml \n  710  kubectl create 
-f busybox.yaml \n  711  kubectl get pods\n  712  kubectl get pods\n  713  kubectl exec -ti busybox\n  714  kubectl exec -ti busybox /bin/sh\n  715  clear\n  716  cat busybox.yaml \n  717  kubectl get pods\n  718  cp busybox.yaml multi.yaml\n  719  vi multi.yaml \n  720  kubectl create -f multi.yaml \n  721  clear\n  722  kubectl get pods\n  723  kubectl exec -ti multi /bin/sh\n  724  kubectl exec -ti multi -- redis-cli\n  725  kubectl exec -ti multi -c redis -- redis-cli\n  726  kubectl get pods\n  727  clear\n  728  kubectl get pods\n  729  kubectl get pods --all-namespaces\n  730  kubectl create ns oreilly\n  731  kubectl get ns\n  732  vi multi.yaml \n  733  kubectl create -f multi.yaml \n  734  kubectl get pods --all-namespaces\n  735  vi multi.yaml \n\n"
  },
  {
    "path": "history/14072017/14072017.txt",
    "content": "   18  minikube stop\n   19  minikube delete\n   20  curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.20.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/\n   21  minikube version\n   22  minikube start\n   23  kubectl version\n   24  which kubectl\n   25  gcloud components updates\n   26  gcloud components update\n   27  which kubectl\n   28  kubectl version\n   29  kuebctl get nodes\n   30  kubectl get nodes\n   31  kubectl get pods\n   32  clear\n   33  gcloud container clusters list\n   34  gcloud container clusters delete redeye\n   35  clear\n   36  clear\n   37  clear\n   38  kubectl get pods --all-namespaces\n   39  clear\n   40  clear\n   41  minikube status\n   42  kubectl get pods\n   43  kubectl get nodes\n   44  clear\n   45  which gcloud\n   46  gcloud container cluster create foobar\n   47  gcloud container clusters create foobar\n   48  gcloud container clusters list\n   49  kubectl get nodes\n   50  kubectl config view use-context minikube\n   51  kubectl config view use-context minikubeclear\n   52  kubectl config use-context minikubeclear\n   53  kubectl config use-context minikube\n   54  kubectl get nodes\n   55  clear\n   56  minikube status\n   57  kubectl get pods\n   58  kubectl get nodes\n   59  kubectl run ghost --image=ghost\n   60  kubectl get pods\n   61  kubectl get pods\n   62  kubectl expose deployments ghost --port=2368 --type=NodePort\n   63  kubectl get service\n   64  minikube service ghost\n   65  minikube service ghost\n   66  clear\n   67  kubectl run nginx --image=nginx\n   68  kubectl expose deployments nginx --port=80 --type=NodePort\n   69  kubectl get pods\n   70  kubectl get services\n   71  minikube service nginx\n   72  kubectl get pods\n   73  kubectl exec -ti ghost-2663835528-z5sgd -- /bin/bash\n   74  kubectl logs ghost-2663835528-z5sgd\n   75  kubectl get pods\n   76  kubectl logs nginx-2371676037-kgfr3\n   77  clear\n   78  minikube dashboard\n   
79  kubectl get pods\n   80  minikube ssh\n   81  minikube logs\n   82  clear\n   83  kubectl get pods\n   84  vi foobar.yml\n   85  kubectl create -f foobar.yml \n   86  kubectl get pods\n   87  kubectl get pods foobar -o yaml\n   88  kubectl get pods foobar -o yaml | more\n   89  kubectl get pods foobar -o json\n   90  kubectl get pods foobar -o json | jq -r\n   91  kubectl get pods foobar -o json | jq -r .spec\n   92  kubectl get pods foobar -o json | jq -r .spec.nodeName\n   93  clear\n   94  minikube ssh\n   95  clear\n   96  kubectl get\n   97  clear\n   98  kubectl get pods\n   99  kubectl delete pods foobar\n  100  kubectl get pods\n  101  kubectl get pods -v=9 ghost-2663835528-z5sgd\n  102  clear\n  103  curl -k -v -XGET  -H \"Accept: application/json\" -H \"User-Agent: kubectl/v1.7.0 (darwin/amd64) kubernetes/d3ada01\" https://192.168.99.100:8443/api/v1/namespaces/default/pods/ghost-2663835528-z5sgd\n  104  minikube ssh\n  105  clear\n  106  kubectl get ns\n  107  kubectl get namespace\n  108  kubectl get pods --all-namespaces\n  109  kubectl create ns oreilly\n  110  kubectl get namespace\n  111  vi foobar.yml \n  112  kubectl create -f foobar.yml \n  113  vi foobar.yml \n  114  kubectl create -f foobar.yml \n  115  kubectl get pods --all-namespaces\n  116  ls -l\n  117  git remote -v\n  118  cd manifests/\n  119  ls -l\n  120  git status -s\n  121  more quota.yaml \n  122  kubectl create -f quota.yaml \n  123  kubectl get pods -n oreilly\n  124  cd ..\n  125  vi foobar.yml \n  126  kubectl create -f foobar.yml \n  127  clear\n  128  kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/all-in-one/guestbook-all-in-one.yaml\n  129  kubectl get pods\n  130  kubectl get services\n  131  kubectl edit service frontend\n  132  kubectl get pods\n  133  kubectl get pods\n  134  kubectl exec -ti redis-master-1068406935-vndx7 -- redis-cli info\n  135  kubectl get pods\n  136  kubectl scale deployments redis-slave --replicas=5\n  
137  kubectl get pods\n  138  kubectl get pods\n  139  kubectl exec -ti redis-master-1068406935-vndx7 -- redis-cli info\n  140  kubectl get pods\n  141  minikube service frontend\n  142  clear\n  143  ls -l\n  144  cd manifests/\n  145  ls -l\n  146  cd rs/\n  147  ls -l\n  148  clear\n  149  more\n  150  ls -l\n  151  pwd\n  152  more rs-example.yml \n  153  vi rs-example.yml \n  154  git rs-example.yml \n  155  git add rs-example.yml \n  156  git commit -m \"fix rs example\"\n  157  git push\n  158  more rs-example.yml \n  159  kubectl create -f rs-example.yml \n  160  kubectl get rs\n  161  kubectl delete deployments ghost nginx redis redis-master redis-slave\n  162  kubectl get rs\n  163  kubectl delete deployments frontend\n  164  kubectl get rs\n  165  kubectl getpods\n  166  kubectl get pods\n  167  kubectl edit rs foo\n  168  kubectl get pods\n  169  kubectl edit rs foo\n  170  kubectl get pods\n  171  kubectl get pods --show-labels\n  172  more rs-example.yml \n  173  kubectl edit rs foo\n  174  kubectl get pods\n  175  tmux\n  176  kubectl get pods -l run=nginx\n  177  kubectl get pods -l run=nginx -o json | more\n  178  kubectl get pods -l run=nginx -o json | jq -r [].status.podIP\n  179  kubectl get pods -l run=nginx -o json | jq -r [].status\n  180  kubectl get pods -l run=nginx -o json | jq -r .items.status\n  181  kubectl get pods -l run=nginx -o json | jq -r .items\n  182  kubectl get pods -l run=nginx -o json | jq -r .items.podIP\n  183  kubectl get pods -l run=nginx -o json | jq -r .items.status.podIP\n  184  kubectl get pods -l run=nginx -o json | jq -r .items.status\n  185  kubectl get pods -l run=nginx -o json | jq -r .items[].status\n  186  kubectl get pods -l run=nginx -o json | jq -r .items[].status.podIP\n  187  kubectl get svc\n  188  kubectl get endpoints\n  189  kubectl get pods -l run=nginx -o json | jq -r .items[].status.podIP\n  190  kubectl scale deployments nginx --replicas=5\n  191  kubectl get pods -l run=nginx -o json | jq -r 
.items[].status.podIP\n  192  kubectl get pods -l run=nginx -o json | jq -r .items[].status.podIP\n  193  kubectl get pods\n  194  kubectl get pods -l run=nginx -o json | jq -r .items[].status.podIP\n  195  kubectl get endpoints\n  196  clear\n  197  cd ..\n  198  ls -l\n  199  clear\n  200  ls -l\n  201  cd ..\n  202  ls -l\n  203  clear\n  204  clear\n  205  clear\n  206  minikube status\n  207  kubectl get pods\n  208  clear\n  209  minikube status\n  210  kubectl get nodes\n  211  kubectl get\n  212  kubectl run ghost --image=ghost\n  213  kubectl expose deployments ghost --port=2368 --type=NodePort\n  214  clear\n  215  kubectl get pods\n  216  kubectl get replicaset\n  217  kubectl get pods -LRUN\n  218  kubectl get pods -Lrun\n  219  kubectl get svc\n  220  kubectl get endpoints\n  221  kubectl get pods -Lrun\n  222  minikube service ghost\n  223  clear\n  224  kubectl get pods\n  225  kubectl get pods ghost-2663835528-k9859 -o yaml\n  226  kubectl get pods ghost-2663835528-k9859 -o json\n  227  kubectl get pods ghost-2663835528-k9859 -o json | jq -r\n  228  kubectl get pods ghost-2663835528-k9859 -o json | jq -r.status.podIP\n  229  kubectl get pods ghost-2663835528-k9859 -o json | jq -r.items[].status.podIP\n  230  kubectl get pods ghost-2663835528-k9859 -o json | jq -r.items.status.podIP\n  231  kubectl get pods ghost-2663835528-k9859 -o json | jq -r .status\n  232  kubectl get pods ghost-2663835528-k9859 -o json | jq -r .status.podIP\n  233  clear\n  234  kubectl get pods\n  235  kubectl get pods ghost-2663835528-k9859 -o yaml | more\n  236  kubectl run --help\n  237  clear\n  238  kubectl get pods\n  239  kubectl get deployments\n  240  kubectl scale deployments ghost --replicas=5\n  241  kubectl get pods\n  242  kubectl get pods --watch\n  243  kubectl get pods\n  244  clear\n  245  kubectl set image deployment ghost ghost=ghost:0.9\n  246  kubectl get pods\n  247  kubectl get pods --watch\n  248  kubectl get pods\n  249  kubectl get pods\n  250  
kubectl get pods\n  251  kubectl get pods --watch\n  252  kubectl get rs\n  253  kubectl get rs\n  254  kubectl get pods\n  255  kubectl set image deployment ghost ghost=ghost:08\n  256  kubectl get pods\n  257  kubectl get pods --watch\n  258  kubectl get rs\n  259  kubectl rollout history deployment ghost\n  260  kubectl rollout undo deployment ghost --to-revision=1\n  261  kubectl get rs\n  262  kubectl get rs\n  263  kubectl get rs\n  264  kubectl get rs\n  265  kubectl get rs\n  266  kubectl get rs\n  267  kubectl rollout history deployment ghost\n  268  kubectl annotate deployment ghost kubernetes.io/change-cause=\"oreilly demo\"\n  269  kubectl rollout history deployment ghost\n  270  kubectl get deployments ghost -o yaml | more\n  271  clear\n  272  kubectl rollout history deployment ghost\n  273  kubectl set image deployment ghost ghost=ghost:0.8\n  274  kubectl rollout history deployment ghost\n  275  ls -l\n  276  cd manifests/\n  277  ls -l\n  278  cd ingress-controller/\n  279  ls -l\n  280  more ghost.yaml \n  281  kubectl get svc\n  282  kubectl edit svc ghost\n  283  kubectl get svc\n  284  minikube service ghost\n  285  kubectl get pods --all-namespaces\n  286  minikube addons list\n  287  minikube addons enabled ingress\n  288  minikube addons enable ingress\n  289  clear\n  290  kubectl get pods --all-namespaces\n  291  kubectl create -f ghost.yaml \n  292  kubectl get ingress\n  293  kubectl get pods\n  294  kubectl run game --image=runseb/2048\n  295  kubectl expose deployment game --port=80\n  296  kubectl get pods\n  297  ls -l\n  298  more game.yaml \n  299  kubectl get pods\n  300  kubectl get pods --watch\n  301  kubectl get pods\n  302  more game.yaml \n  303  kubectl get pods\n  304  kubectl get svc\n  305  ls- l\n  306  ls -l\n  307  kubectl get pods --all-namespaces\n  308  kubectl exec -ti nginx-ingress-controller-p4gc1 -n kube-system -- /bin/bash\n  309  clear\n  310  kubectl get pods nginx-ingress-controller-p4gc1 -n kube-system -o 
yaml | more\n  311  kubectl get ingress\n  312  more game.yaml \n  313  kubectl create -f game.yaml \n  314  kubectl get ingress\n  315  kubectl get ingress\n  316  pwd\n  317  apiVersion: extensions/v1beta1\n  318  kind: Ingress\n  319  metadata:\n  320    name: game\n  321  spec:\n  322    rules:\n  323    - host: game.192.168.99.100.nip.io\n  324      http:\n  325        paths:\n  326        - backend:\n  327            serviceName: game\n  328            servicePort: 80\n  329  pwd\n  330  clear\n  331  kubectl run mysql --image=mysql:5.5 --env MYSQL_ROOT_PASSWORD=root\n  332  kubectl run wordpress --image=wordpress --env WORDPRESS_DB_HOST=mysql --env WORDPRESS_DB_PASSWORD=root\n  333  kubectl expose deployment mysql --port=3306\n  334  kubectl get svc\n  335  kubectl expose deployment wordpress --port=80 --type=NodePort\n  336  kubectl get svc\n  337  kubectl get pods\n  338  kubectl scale deployments ghost --replicas=1\n  339  clear\n  340  kubectl get pods\n  341  kubectl exec -ti mysql-3678996555-7b50q -- mysql -uroot -p\n  342  clear\n  343  kubectl get pods\n  344  minikube service wordpress\n  345  minikube service wordpress\n  346  kubectl get pods\n  347  kubectl exec -ti mysql-3678996555-7b50q -- mysql -uroot -p\n  348  kubectl get pods\n  349  kubectl get pods mysql-3678996555-7b50q -o yaml |more\n  350  ls -l\n  351  kubectl create secret generic mysql --from-literal=password=foobar\n  352  kubectl get secrets\n  353  ls -l\n  354  cd ...\n  355  cd ..\n  356  ls -l\n  357  cd wordpress/\n  358  ls -l\n  359  more mysql-secret.yaml \n  360  pwd\n  361  kubectl create -f mysql-secret.yaml \n  362  kubectl get pods\n  363  kubectl exec -ti mysql-secret -- mysql -uroot -p\n  364  ls-l\n  365  ls -l\n  366  kubectl get secrets\n  367  kubectl get secrets mysql -o yaml\n  368  echo \"Zm9vYmFy\" | base64 --decode\n  369  kubectl get psp\n  370  clear\n  371  kubectl get pods\n  372  cd ..\n  373  ls -l\n  374  cd volumes/\n  375  ls -l\n  376  more 
volumes.yaml \n  377  containers:\n  378    - image: busybox\n  379      command:\n  380        - sleep\n  381        - \"3600\"\n  382      volumeMounts:\n  383      - mountPath: /busy\n  384        name: test\n  385      imagePullPolicy: IfNotPresent\n  386      name: busy\n  387    - image: busybox\n  388      command:\n  389        - sleep\n  390        - \"3600\"\n  391      volumeMounts:\n  392      - mountPath: /box\n  393        name: test\n  394      imagePullPolicy: IfNotPresent\n  395      name: box\n  396    restartPolicy: Always\n  397  more volumes.yaml \n  398  kubectl create -f volumes.yaml \n  399  kubectl get pods\n  400  kubectl get pods\n  401  kubectl exec -ti vol -c busy -- ls -l /busy\n  402  kubectl exec -ti vol -c busy -- touch /busy/foobar\n  403  kubectl exec -ti vol -c busy -- ls -l /busy\n  404  kubectl exec -ti vol -c box -- ls -l /box\n  405  more volumes.yaml \n  406  clear\n  407  ls -l\n  408  more pvc.yaml \n  409  kubectl get pv\n  410  kubectl get pvc\n  411  kubectl get persistentvolume\n  412  kubectl create -f pvc.yaml \n  413  kubectl get pvc\n  414  kubectl get pv\n  415  ls -l\n  416  more hostpath.yaml \n  417  kubectl create -f hostpath.yaml \n  418  kubectl get pods\n  419  kubectl get pods\n  420  kubectl exec -ti hostpath -- ls -l /bitnami\n  421  kubectl exec -ti hostpath -- echo \"dynamic storage rocks\" > /bitnami/claim\n  422  kubectl exec -ti hostpath -- touch /bitnami/storage\n  423  kubectl exec -ti hostpath -- ls -l /bitnami\n  424  kubectl delete pods hostpath\n  425  kubectl get pods\n  426  kubectl get pods\n  427  kubectl get pods\n  428  kubectl get pods\n  429  kubectl get pods\n  430  kubectl get pods\n  431  kubectl get pods\n  432  kubectl get pvc\n  433  kubectl get pv\n  434  kubectl get pv pvc-3359d727-68c2-11e7-a55e-080027b13bd8 -o yaml | more\n  435  minikube ssh\n  436  kubectl get pods\n  437  ls -l\n  438  cd ..\n  439  ls -l\n  440  history\n  441  ls -l\n  442  cd wordpress/\n  443  ls -l\n  
444  more mysql.yaml \n  445  cd ..\n  446  l s-l\n  447  ls -l\n  448  cd volumes/\n  449  ls -l\n  450  vi config.js\n  451  kubectl create configmap test --from-file=config.js\n  452  kubectl get configmaps\n  453  kubectl get cm\n  454  kubectl get cm test -o yaml\n  455  clear\n  456  kubectl get pods\n  457  ls -l\n  458  vi foobar.yaml\n  459  which helm\n  460  helm init\n  461  kubectl get pods --all-namespaces\n  462  helm repo list\n  463  helm search redis\n  464  helm install stable/redis\n  465  helm ls\n  466  kubectl get pods\n  467  kubectl get svc\n  468  kubectl get pvc\n  469  kubectl get pv\n  470  kubectl get deployments\n  471  kubectl get secrets\n  472  clear\n  473  ipython\n  474  clear\n  475  cd..\n  476  ls -l\n  477  cd ..\n  478  ls -l\n  479  cd tpr\n  480  ls -l\n  481  clear\n  482  ls -l\n  483  kubectl get thridpartyresources\n  484  more bananas.yaml \n  485  kubectl create -f bananas.yaml \n  486  kubectl get thridpartyresources\n  487  kubectl get thridpartyresources\n  488  kubectl get thridpartyresource\n  489  kubectl get thirdpartyresource\n  490  minikube ssh\n  491  ls-l\n  492  ls -l\n  493  more kubie.yaml \n  494  more custom-example.yaml \n  495  minikube ssh\n  496  more bananas.yaml \n  497  vi foobar.yaml\n  498  kubectl create -f foobar.yaml \n  499  kubectl get kubecons\n  500  kubectl get kubecons ortwin -o yaml\n  501  clear\n  502  cd ..\n  503  cd ..\n  504  cd history/\n  505  ls -l\n  506  cat history > 14072017.txt\n  507  history > 14072017.txt\n"
  },
  {
    "path": "history/16022016/history-1.txt",
    "content": "  307  cd manifests/\n  308  ls -l\n  309  cd ingress-controller/\n  310  ls -l\n  311  more backend.yaml \n  312  kubectl create -f backend.yaml \n  313  clear\n  314  kubectl get pods\n  315  kubectl delete deployments game\n  316  kubectl delete deployments ghost\n  317  kubectl get pods\n  318  kubectl get pods\n  319  kubectl run game --image=runseb/2048\n  320  kubectl expose deployments game --port=80\n  321  kubectl delete svc game\n  322  kubectl expose deployments game --port=80\n  323  kubectl get svc game -o yaml\n  324  clear\n  325  kubectl get svc\n  326  kubectl get svc ghost -o json | jq -r .spec.type\n  327  kubectl get pods\n  328  kubectl get pods game-1597610132-qv0z2 | jq -r .spec.containers[0].image\n  329  kubectl get pods game-1597610132-qv0z2 | jq -r .spec.containers\n  330  kubectl get pods game-1597610132-qv0z2 | jq -r .spec\n  331  kubectl get pods game-1597610132-qv0z2 | jq -r .items[0]\n  332  kubectl get pods game-1597610132-qv0z2 | jq -r\n  333  kubectl get pods game-1597610132-qv0z2 -o json | jq -r\n  334  kubectl get pods game-1597610132-qv0z2 -o json | jq -r .spec\n  335  kubectl get pods game-1597610132-qv0z2 -o json | jq -r .spec.containers[0].image\n  336  clear\n  337  kubectl get pods\n  338  ls -l\n  339  cat game.yaml \n  340  kubectl get svc game -o yaml\n  341  kubectl create -f game.yaml \n  342  clear\n  343  kubectl get ingress\n  344  kubectl get pods\n  345  kubectl run mysql --image=mysql:5.5 --env=MYSQL_ROOT_PASSWORD=root \n  346  kubectl create secret generic mysql --from-literal=MYSQL_ROOT_PASSWORD=root\n  347  kubectl get secrets\n  348  kubectl get secrets mysql -o yaml\n  349  kubectl label secret mysql foo=bar\n  350  kubectl get secrets mysql -o yaml\n  351  clear\n  352  kubectl get secrets\n  353  kubectl get pods\n  354  kubectl exec -ti mysql-3779901834-c7nc8 -- mysql -uroot -p\n  355  cd ..\n  356  ls -l\n  357  more mysql.yaml \n  358  more mysql-secret.yaml \n  359  kubectl get 
secrets\n  360  kubectl create -f mysql-secret.yaml \n  361  kubectl get pods\n  362  clear\n  363  kubectl get pods\n  364  more mysql-secret.yaml \n  365  kubectl describe pods mysql\n  366  more mysql-secret.yaml \n  367  vi mysql-secret.yaml \n  368  kubectl create -f mysql-secret.yaml \n  369  kubectl delete pods mysql\n  370  clear\n  371  kubectl create -f mysql-secret.yaml \n  372  kubectl get pods\n  373  kubectl get secrets mysql -o yaml\n  374  kubectl delete secret mysql\n  375  kubectl create secret generic mysql --from-literal=password=root\n  376  kubectl get pods\n  377  kubectl delete pods mysql\n  378  kubectl create -f mysql-secret.yaml \n  379  kubectl get pods\n  380  clear\n  381  kubectl get pods\n  382  kubectl get pods mysql -o yaml\n  383  kubectl get pods mysql -o yaml | more\n  384  clear\n  385  kubectl get pods\n  386  l s-l\n  387  l s-l\n  388  ls -l\n  389  cd ingress-controller/\n  390  ls -l\n  391  more wordpress.yaml \n  392  cd ..\n  393  ls -l\n  394  clear\n  395  kubectl get pds\n  396  kubectl get pods\n  397  kubectl delete pods mysql\n  398  kubectl get deployments\n  399  kubectl get svc\n  400  kubectl expose deployments mysql --port=3306\n  401  kubectl run wordpress --image=wordpress\n  402  kubectl expose deployments wordpress --port=80\n  403  kubectl get pds\n  404  kubectl get pods\n  405  cd ingress-controller/\n  406  ls -l\n  407  more wordpress.yaml \n  408  kubectl create -f wordpress.yaml \n  409  kubectl get ingress.yaml \n  410  kubectl get ingress\n  411  kubectl delete ingress game\n  412  kubectl get ingress\n  413  clear\n  414  kubectl get ingress\n  415  kubectl get secret\n  416  kubectl get deployments\n  417  kubectl get pods\n  418  kubectl get svc\n  419  kubectl delete svc game, ghost\n  420  kubectl delete svc game ghost\n  421  kubectl get svc\n  422  kubectl get pods --watch\n  423  clear\n  424  kubectl get pods\n  425  kubectl get ingress\n  426  clear\n  427  ls -l\n  428  kubectl get 
pods\n  429  ls -l\n  430  more wordpress.yaml \n  431  kubectl get ingress\n  432  kubectl get svc\n  433  kubectl get deployments\n  434  kubectl get pods\n  435  kubectl logs -f wordpress-468622735-p57t0\n  436  kubectl get pods\n  437  kubectl delete deployments wordpress\n  438  kubectl run wordpress --image=wordpress --env=WORDPRESS_DB_PASSOWRD=root --env=MYSQL_DB_HOST=mysql\n  439  kubectl get pods\n  440  kubectl get pods\n  441  kubectl get pods\n  442  kubectl get pods\n  443  kubectl edit deployments wordpress\n  444  kubectl get pods\n  445  kubectl get rs\n  446  kubectl get pods\n  447  kubectl describe pods wordpress-2537937599-zg2bt \n  448  kubectl edit deployments wordpress\n  449  kubectl get rs\n  450  kubectl get pods\n  451  kubectl exec -ti mysql-3779901834-c7nc8 /bin/bash\n  452  clear\n  453  kubectl get pv\n  454  kubectl get persistentvolumes\n  455  kubectl get persistentvolumeclaim\n  456  kubectl get pvs\n  457  kubectl get pvc\n  458  cd ..\n  459  ls -l\n  460  more volumes.yaml \n  461  kubectl create -f volumes.yaml \n  462  kubectl get pods\n  463  kubectl exec -ti vol -c busy /bin/sh\n  464  kubectl exec -ti vol -c box -- ls -l /box\n  465  kubectl exec -ti vol -c box -- ls -l /box\n  466  kubectl exec -ti vol -c box /bin/sh\n  467  kubectl exec -ti vol -c busy /bin/sh\n  468  cler\n  469  clear\n  470  kubectl get pods\n  471  kubectl delete pods vol\n  472  kubectl get pods\n  473  which helm\n  474  helm init\n  475  kubectl get pods --all-namespaces\n  476  kubectl get pods --all-namespaces\n  477  helm ls\n  478  helm repo ls\n  479  helm repo list\n  480  helm install stable/minio\n  481  clear\n  482  kubectl get pods\n  483  kubectl get secrets\n  484  kubectl get svc\n  485  kubectl get deployments\n  486  kubectl get pvc\n  487  kubectl get pv\n  488  kubectl describe svc busted-ocelot-minio\n  489  kubectl edit svc busted-ocelot-minio\n  490  kubectl describe svc busted-ocelot-minio\n  491  clear\n  492  kubectl get 
pv\n  493  clear\n  494  ipython\n  495  ls -l\n  496  cd ..\n  497  ls -l\n  498  cd scripts/\n  499  ls -l\n  500  more create_pod.py \n  501  ./create_pod.py \n  502  kubectl get pods\n"
  },
  {
    "path": "history/16022016/history.txt",
    "content": "minikube status\n   62  kubectl get pods\n   63  kubectl get nodes\n   64  kubectl run game --image=runseb/2048\n   65  kubectl get deployments\n   66  kubectl get rs\n   67  kubectl get rs game-1597610132 -o yaml |more\n   68  cler\n   69  clear\n   70  kubectl get pods --show-labels\n   71  kubectl expose deployments game --port=80 --type=NodePort\n   72  kubectl get svc\n   73  kubectl describe svc game\n   74  minikube ssh\n   75  clear\n   76  minikube\n   77  minikube ip\n   78  minikube service game\n   79  clear\n   80  kubectl run ghost --image=ghost --record\n   81  kubectl get deployments\n   82  kubectl get deployments ghost -o yaml | more\n   83  clear\n   84  kubectl rollout history deployments ghost\n   85  kubectl get pods\n   86  kubectl expose deployment/ghost --port=2368 --type=NodePort\n   87  kubectl get svc\n   88  kubectl get pods\n   89  minikube service ghost\n   90  kubectl set image deployment/ghost ghost=ghost:09\n   91  kubectl get pods\n   92  kubectl get pods\n   93  kubectl rollout history deployment/ghost\n   94  kubectl rollout history deployment/ghost undo\n   95  kubectl rollout history deployments ghost undo\n   96  kubectl rollout history undo deployments ghost\n   97  clear\n   98  kubectl get pods\n   99  kubectl get pods --watch\n  100  kubectl rollout history deployments ghost\n  101  kubectl get rs\n  102  kubectl rollout undo deployment/ghost\n  103  kubectl get pods\n  104  kubectl scale deployment ghost --replicas=5\n  105  kubectl get pods\n  106  kubectl get pods\n  107  kubectl get pods\n  108  kubectl get pods\n  109  kubectl rollout history deployment/ghost\n  110  kubectl set image deployment/ghost ghost=ghost:0.9\n  111  kubectl get pods\n  112  kubectl get pods --watch\n  113  kubectl get pods\n  114  kubectl get pods --watch\n  115  clear\n  116  kubectl get rs\n  117  kubectl get rs\n  118  kubectl get pods\n  119  kubectl get deployment ghost -o yaml |more\n  120  clear\n  121  kubectl get 
pods\n  122  kubectl get pods ghost-3487275284-6tr23 -o yaml | more\n  123  clear\n  124  kubectl get rs\n  125  kubectl rollout history deployment/ghost\n  126  kubectl rollout deployment/ghost --to-revision=3\n  127  kubectl rollout --help\n  128  clear\n  129  kubectl rollout history --help\n  130  kubectl rollout history deployment/ghost\n  131  kubectl rollout history deployment/ghost --revision=3\n  132  clear\n  133  kubectl get rs --watch\n  134  kubectl get rs --watch\n  135  kubectl rollout history deployment/ghost\n  136  kubectl get rs --watch\n  137  kubectl rollout history deployment/ghost --revision=3\n  138  clear\n  139  kubectl get rs --watch\n  140  kubectl get pods\n  141  kubectl rollout history deployment/ghost\n  142  kubectl rollout history --help\n  143  kubectl rollout undo --help\n  144  kubectl rollout undo deployment/ghost --to-revision=3\n  145  clear\n  146  kubectl get rs --watch\n  147  kubectl get pods\n  148  kubectl get pods ghost-943298627-8p64g -o yaml |more\n  149  kubectl rollout --help\n  150  minikube service ghost\n  151  clear\n  152  kubectl edit deployments ghost\n  153  kubectl rollout history deployments ghost\n  154  kubectl get pods\n  155  kubectl get svc\n  156  kubectl edit svc ghost\n  157  kubectl get rs\n  158  kubectl completion --help\n  159  kubectl completion bash\n"
  },
  {
    "path": "history/16052017/history.txt",
    "content": "   12  which minikube\n   13  minikube version\n   14  minikube status\n   15  minikube start\n   16  kubectl get pods\n   17  kubectl delete deployments ghost\n   18  kubectl get pods --all-namespaces\n   19  minikube addons disable heapster\n   20  minikube addons list\n   21  minikube addons disable ingress\n   22  minikube addons list\n   23  minikube stop\n   24* minikube del\n   25  minikube start\n   26  clear\n   27  whcih gcloud\n   28  which gcloud\n   29  gcloud container clusters list\n   30  gcloud components update\n   31  clear\n   32  ls -l\n   33  clear\n   34  minikube status\n   35  kubectl get nodes\n   36  clear\n   37  clear\n   38  gcloud container clusters list\n   39  gcloud container clusters create foobar\n   40  kubectl config use-context minikube\n   41  kubect get nodes\n   42  kubectl get nodes\n   43  clear\n   44  kubectl get nodes\n   45  minikube ip\n   46  kubectl config view\n   47  clear\n   48  kubectl run ghost --image=ghost:0.9\n   49  kubectl expose deployments ghost --port=2368 --type=NodePort\n   50  kubectl get pods\n   51  kubectl get pods --watch\n   52  kubectl get pods\n   53  kubectl get service\n   54  minikube service ghost\n   55  clear\n   56  minikube ssh\n   57  kubectl get pods\n   58  kubectl logs ghost-3503942313-7rdcb\n   59  kubectl exec -ti ghost-3503942313-7rdcb -- /bin/bash\n   60  clear\n   61  minikube\n   62  minikube dashboard\n   63  clear\n   64  kubectl get replicasets\n   65  kubectl get pods\n   66  kubectl get replicasets\n   67  kubectl get service\n   68  clear\n   69  kubectl get replicasets\n   70  kubectl scale replicasets ghost --replicas=5\n   71  kubectl scale replicasets ghost-3503942313 --replicas=5\n   72  kubectl get replicasets\n   73  kubectl get replicasets\n   74  kubectl get replicasets\n   75  clear\n   76  kubectl scale deployments ghost --replicas=5\n   77  kubectl get replicasets\n   78  kubectl get replicasets\n   79  kubectl get pods\n   80  kubectl 
scale deployments ghost --replicas=2\n   81  kubectl get replicasets\n   82  kubectl get pods\n   83  clear\n   84  minikube ssh\n   85  clear\n   86  kubectl get pods\n   87  kubectl get po\n   88  kubectl get po --all-namespaces\n   89  kubectl get namespace\n   90  kubectl get ns\n   91  kubectl create namespace oreilly\n   92  kubectl get ns\n   93  cd ..\n   94  ls -l\n   95  vi foobar.yml\n   96  kubectl create -f foobar.yml \n   97  vi foobar.yml\n   98  kubectl create -f foobar.yml \n   99  clear\n  100  kubectl get pods\n  101  kubectl get pods\n  102  vi foobar.yml \n  103  kubectl create -f foobar.yml \n  104  kubectl get pods --all-namespaces\n  105  cat foobar.yml \n  106  kubectl get pods foobar -o yaml | more\n  107  kubectl get pods foobar -o json | more\n  108  kubectl get pods foobar -o json \n  109  kubectl get pods foobar -o json | jq -r \n  110  kubectl get pods\n  111  kubectl get svc\n  112  kubectl get svc ghost -o yaml | more\n  113  kubectl get svc deployments -o yaml | more\n  114  kubectl get deployments -o yaml | more\n  115  kubectl get deployments ghost -o yaml | more\n  116  q\n  117  clear\n  118  minikube ssh\n  119  clear\n  120  kubectl get --v=99 pods ghost\n  121  kubectl get --v=99 deployments ghost\n  122  kubectl delete --v=99 deployments ghost\n  123  kubectl get deployments\n  124  clear\n  125  kubectl get pods\n  126  vi game.yml\n  127  kubectl create -f game.yml \n  128  vi game.yml\n  129  kubectl create -f game.yml \n  130  kubectl get deployments\n  131  kubectl get pods\n  132  kubectl get pods --show-labesl\n  133  clear\n  134  kubectl get pods --show-labels\n  135  kubectl get pods -Lapp\n  136  kubectl get rs\n  137  kubectl get rs game-3045583940 -o yaml | more\n  138  kubectl get pods\n  139  vi game-svc.yml\n  140  kubectl create -f game-svc.yml\n  141  clear\n  142  kubectl get svc\n  143  kubectl get endpoints\n  144  kubectl edit svc game\n  145  kubectl get svc\n  146  minikube service game\n  147  
ls-l\n  148  ls -l\n  149  more game.yml \n  150  more game-svc.yml \n  151  kubectl scale deployments game --replicas=10\n  152  kubectl get pods\n  153  kubectl get pods\n  154  kubectl get pods\n  155  kubectl get pods --watch\n  156  kubectl get pods\n  157  kubectl get endpoints\n  158  kubectl get endpoints game -o yaml | more\n  159  vi game-svc.yml \n"
  },
  {
    "path": "history/21022017/history.txt",
    "content": "clear\n  304  which minikube\n  305  minikube version\n  306  minikube start\n  307  kubectl get nodes\n  308  clear\n  309  kubectl get nodes\n  310  minikube dashboard\n  311  kubectl get pods\n  312  kubectl get deployments\n  313  kubectl get replicasets\n  314  kubectl get pods\n  315  kubectl logs redis-3133791336-392v8\n  316  clear\n  317  kubectl exec -ti redis-3133791336-392v8 -- redis-cli\n  318  clear\n  319  minikube ssh\n  320  cler\n  321  clear\n  322  kubectl get pods\n  323  kubectl get pods redis-3133791336-392v8 -o yaml\n  324  clesr\n  325  clear\n  326  ls -l\n  327  vi foobar.yaml\n  328  kubectl create -f foobar.yaml\n  329  kubectl get pods\n  330  kubectl get pods\n  331  kubectl get pods\n  332  kubectl scale deployments redis --replicas=5\n  333  kubectl get pods\n  334  kubectl get pods\n  335  kubectl scale deployments redis --replicas=2\n  336  kubectl get pods\n  337  kubectl delete pods foobar\n  338  kubectl get pods\n  339  clear\n  340  kubectl get pods\n  341  kubectl get pods\n  342  kubectl get pods --watch\n  343  kubectl get pods\n  344  kubectl delete pods redis-3133791336-392v8\n  345  kubectl get pods\n  346  kubectl get pods\n  347  clear\n  348  minikube ssh\n  349  clear\n  350  kubectl get ns\n  351  kubectl get namespaces\n  352  kubectl get pods --all-namespaces\n  353  kubectl get pods\n  354  kubectl get pods --all-namespaces\n  355  kubectl get deployments --all-namespaces\n  356  kubectl get rc --all-namespaces\n  357  clear\n  358  kubectl get ns\n  359  kubectl create ns foobar\n  360  kubectl get ns\n  361  vi foobar.yaml \n  362  kubectl create -f foobar.yaml \n  363  kubectl get pods\n  364  kubectl get pods --all-namespaces\n  365  kubectl create -f foobar.yaml \n  366  vi foobar.yaml \n  367  kubectl create -f foobar.yaml \n  368  clear\n  369  kubectl get pods --all-namespaces\n  370  kubectl get nodes\n  371  kubectl get nodes minikube -o yaml\n  372  minikube ssh\n  373  clear\n  374  
ls -l\n  375  kubectl --v=99 get pods\n  376  kubectl --v=99 get pods --namespace=foobar\n  377  clear\n  378  kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/all-in-one/guestbook-all-in-one.yaml\n  379  kubectl get pods\n  380  kubectl get pods\n  381  kubectl get deployments\n  382  kubectl delete deployments redis\n  383  kubectl get deployments\n  384  clear\n  385  kubectl get pods\n  386  kubectl get svc\n  387  kubectl get services\n  388  kubectl edit svc frontend\n  389  kubectl get services\n  390  kubectl get pods\n  391  kubectl get pods --watch\n  392  kubectl get pods\n  393  kubectl get endpoints\n  394  kubectl get pods\n  395  minikube service frontend\n  396  clear\n  397  kubectl scale deployments redis-slave --replicas=5\n  398  kubectl get pods\n  399  kubectl exec -ti redis-master-343230949-816xl -- redis-cli info\n  400  clear\n  401  kubectl get pods\n  402  kubectl get pods --show-labels\n  403  kubectl get pods -l tier=backend\n  404  kubectl get pods -Ltier\n  405  kubectl get pods -l tier=frontend\n  406  clear\n  407  kubectl get pods\n  408  kubectl label pod foobar oreilly=rocks\n  409  kubectl get pods -l oreilly\n  410  kubectl get pods -Loreilly\n  411  clear\n  412  kubectl get rs\n  413  kubectl get rs frontend-88237173 -o yaml | more\n  414  kubectl get pods\n  415  kubectl label pod frontend-88237173-5l21n tier=broken\n  416  kubectl label pod frontend-88237173-5l21n tier=broken --overwrite\n  417  kubectl get pods\n  418  kubectl get pods -Ltier\n  419  kubectl get pods -l tier=broken\n  420  clear\n  421  kubectl get pods\n  422  kubectl delete pods frontend-88237173-5l21n\n  423  kubectl get pods\n  424  kubectl scale deployments redis-slave --replicas=2\n  425  clear\n  426  kubectl get pods\n  427  kubectl get svc\n  428  kubectl get svc frontend -o json\n  429  kubectl get svc frontend -o json | jq -r\n  430  kubectl get svc frontend -o json | jq -r .spec.clusterIP\n  
431  kubectl get endpoints\n  432  kubectl get pods -l app=guestbook -l tier=frontend\n  433  kubectl get pods -l app=guestbook -l tier=frontend -o json | jq -r .spec.podIP\n  434  kubectl get pods -l app=guestbook -l tier=frontend -o json\n  435  kubectl get pods -l app=guestbook -l tier=frontend -o json | jq -r .spec.status.podIP\n  436  clear\n  437  kubectl get pods -l app=guestbook -l tier=frontend -o json | jq -r .items.spec.status.podIP\n  438  kubectl get pods -l app=guestbook -l tier=frontend -o json\n  439  kubectl get pods -l app=guestbook -l tier=frontend -o json | grep podIP\n  440  kubectl get endpoints\n  441  kubectl get pods -l app=guestbook -l tier=frontend -o json | jq -r .items[0].status.podIP\n  442  clear\n  443  ls -l\n  444  kubectl get pods\n  445  kubectl get pods -l app=guestbook -l tier=frontend -o json | jq -r .items[].status.podIP\n  446  kubectl run busybox --image=busybox --command sleep 3600\n  447  kubectl get pods\n  448  kubectl get pods\n  449  kubectl get pods\n  450  clear\n  451  kubectl get pods\n  452  kubectl exec -ti busybox-1418042613-qn8m6 -- nslookup frontend\n  453  kubectl get svc\n  454  kubectl exec -ti busybox-1418042613-qn8m6 -- nslookup redis-master\n"
  },
  {
    "path": "history/22082017/22082017.txt",
    "content": "   21  minikube start\n   22  kubectl get pods\n   23  clear\n   24  clear\n   25  minikube status\n   26  kubectl get pods\n   27  kubectl run ghost --image=ghost\n   28  kubectl expose deployment ghost --port=2368 --type=NodePort\n   29  kubectl get pods\n   30  kubectl get pods -w\n   31  kubectl get pods\n   32  minikube service ghost\n   33  minikube service ghost\n   34  clear\n   35  which minikube\n   36  minikube\n   37  clear\n   38  minikube status\n   39  minikube ip\n   40  minikube\n   41  clear\n   42  minikube docker-env\n   43  eval $(minikube docker-env)\n   44  docker ps\n   45  coker ps\n   46  docker ps\n   47  clear\n   48  clear\n   49  clear\n   50  minikube dashboard\n   51  kubectl get nodes\n   52  kubectl get pods\n   53  vi pod.yaml\n   54  kubectl create -f pod.yaml \n   55  kubectl get pods\n   56  more pod.yaml \n   57  kubectl get pods foobar\n   58  kubectl get pods foobar -o yaml\n   59  more pod.yaml \n   60  kubectl get pods\n   61  kubectl get pods ghost-1255708890-1qc4n -o yaml\n   62  clear\n   63  kubectl get pods\n   64  kubectl logs redis-2913962463-jhwtt\n   65  kubectl exec -ti redis-2913962463-jhwtt -- /bin/sh\n   66  clear\n   67  kubectl get pods\n   68  minikube ssh\n   69  clear\n   70  kubectl get pods\n   71  kubectl -v=9 get pods\n   72  minikube ssh\n   73  kubectl get pods\n   74  kubectl get\n   75  clear\n   76  kubectl get pods\n   77  kubectl get pods --all-namespaces\n   78  kubectl get ns\n   79  kubectl create -f pod.yaml \n   80  kubectl get pods\n   81  kubectl create ns oreilly\n   82  kubectl get ns\n   83  kubectl create -f pod.yaml -n oreilly\n   84  kubectl get pods --all-namespaces\n   85  clear\n   86  ls -l\n   87  cd manifests/\n   88  ls -l\n   89  more quota.yaml \n   90  kubectl create -f quota.yaml \n   91  kubectl get resourcequota -n oreilly\n   92  kubectl get pods -n oreilly\n   93  cp ../pod.yaml bar.yaml\n   94  vi bar.yaml \n   95  kubectl create -f bar.yaml -n 
oreilly\n   96  kubectl edit resourcequota counts\n   97  kubectl edit resourcequota counts -n oreilly\n   98  kubectl create -f bar.yaml -n oreilly\n   99  kubectl get pods -n oreilly\n  100  kubectl create -f bar.yaml -n oreilly\n  101  kubectl edit pods bar\n  102  kubectl edit pods bar -n oreilly\n  103  clear\n  104  vi bar.yaml\n  105  ls-l\n  106  ls -l\n  107  kubectl get pods\n  108  kubectl scale deployment ghost --replicas=5\n  109  kubectl get pods\n  110  kubectl get pods\n  111  kubectl scale deployment ghost --replicas=2\n  112  kubectl get pods\n  113  clear\n  114  ls -l\n  115  cd rs/\n  116  ls -l\n  117  more rs-example.yml \n  118  kubectl create -f rs-example.yml \n  119  kubectl get rs\n  120  kubectl get pods\n  121  kubectl logs foo-178pj \n  122  kubectl logs -t foo-178pj \n  123  kubectl logs -f foo-178pj \n  124  minikube logs foo-178pj \n  125  clear\n  126  kubectl get pods\n  127  kubectl exec -ti foo-178pj -- /bin/sh\n  128  kubectl get pods foo-178pj -o yaml\n  129  kubectl get pods\n  130  more rs-example.yml \n  131  kubectl get pods --show-labels\n  132  kubectl get pods -Lred\n  133  kubectl get pods -l red=blue\n  134  kubectl get pods -l red=blue | grep foo | wc -l\n  135  kubectl label pods foo-178pj red=green --overwrite\n  136  kubectl get pods -Lred\n  137  cd ..\n  138  ls -l\n  139  ls -l\n  140  vi bar.yaml \n  141  kubectl create -f bar.yaml \n  142  kubectl create -f bar.yaml -n default\n  143  kubectl edit resourcequota counts -n oreilly\n  144  kubectl create -f bar.yaml \n  145  kubectl get pods --all-namespaces --show-labels\n  146  kubectl get pods\n  147  clear\n  148  clear\n  149  kubectl get pods\n  150  kubectl get pods -Lred\n  151  ls -l\n  152  cd ..\n  153  ls -l\n  154  cd manifests/\n  155  ls -l\n  156  more test.yaml \n  157  clear\n  158  ls -l\n  159  grep -r Service .\n  160  cp 1605207/game-svc.yml svc.yaml\n  161  vi svc.yaml \n  162  more svc.yaml \n  163  kubectl create -f svc.yaml \n  164  
kubectl get svc\n  165  minikube service nginx\n  166  kubectl get pods\n  167  kubectl get pods -Lred\n  168  kubectl get endpoints\n  169  kubectl edit rs foo\n  170  kubectl get pods\n  171  kubectl get pods -Lred\n  172  kubectl get endpoints\n"
  },
  {
    "path": "history/23052018/history.txt",
    "content": "\n  500  vi pod.yaml \n  503  more pod.yaml \n  504  kubectl create -f pod.yaml \n  505  kubectl get nodes --show-labels\n  506  kubectl label node minikube oreilly=rocks\n  507  clear\n  508  clear\n  509  kubectl get pods -w\n  510  clear\n  511  kubectl get pods\n  512  kubectl delete pods oreilly\n  513  kubectl delete pods oreilly-2\n  514  kubectl delete rs oreilly\n  515  clear\n  516  kubectl get pods\n  517  kubectl get pods --show-labels\n  518  more rs.yaml\n  519  vi rs.yaml \n  520  kubectl create -f rs.yaml \n  521  clear\n  522  kubectl get rs\n  523  kubectl get pods\n  524  kubectl get pods --show-labels\n  525  vi svc.yaml\n  526  kubectl create -f svc.yaml \n  527  kubectl get svc\n  528  curl 192.168.99.100:31446446\n  529  clear\n  530  kubectl get svc\n  531  kubectl run busy -it --image=busybox -- /bin/sh\n  532  clear\n  533  minikube stop\n  534  minikube delete\n  535  cd ~/Documents\n  536  ls -l\n  537  cd seb/\n  538  ls -l\n  539  cd ..\n  540  cd ..\n  541  ls -l\n  542  cd gitforks/\n  543  ls- l\n  544  ls -l\n  545  cd oreilly-kubernetes/\n  546  ls -l\n  547  clear\n  548  clear\n  549  clear\n  550  ls -l\n  551  minikube status\n  552  minikube delete\n  553  minikube start\n  554  kubectl get pods\n  555  clear\n  556  kubectl version\n  557  clear\n  558  which minikube\n  559  which gcloud\n  560  clear\n  561  minikube\n  562  minikube status\n  563  clear\n  564  minikube status\n  565  which kubectl \n  566  kubectl version\n  567  kubectl proxy\n  568  clear\n  569  tmux\n  570  clear\n  571  clear\n  572  ls -l\n  573  minikube status\n  574  minikube start\n  575  minikube dashboard\n  576  clear\n  577  kubectl get pods\n  578  kubectl get rs\n  579  kubectl get deployments\n  580  kubectl scale deployment redis --replicas 4\n  581  kubectl get deployments\n  582  kubectl get pods\n  583  kubectl set image redis redis=redis:4.5\n  584  kubectl set --help\n  585  kubectl set image --help\n  586  clear\n  
587  kubectl set image deployment/redis redis=redis:4.5\n  588  kubectl get pods\n  589  kubectl get pods -w\n  590  kubectl get pods -w\n  591  kubectl get pods\n  592  kubectl get pods\n  593  kubectl get pods\n  594  kubectl get pods\n  595  kubectl get pods\n  596  kubectl get pods\n  597  kubectl set image deployment/redis redis=redis:3.2\n  598  kubectl get pods -w\n  599  clear\n  600  kubectl get pods\n  601  kubectl get pods redis-54bb49b6f9-2r652 -o yaml\n  602  clear\n  603  kubectl get pods\n  604  kubectl get pods -o json | jq -r .items[]\n  605  kubectl get pods -o json | jq -r .items[].spec.containers[0].image\n  606  kubectl rollout history deployment redis\n  607  kubectl rollout undo deployment redis --to-revision 2\n  608  kubectl get pods -o json | jq -r .items[].spec.containers[0].image\n  609  kubectl get pods\n  610  kubectl rollout history deployment redis\n  611  kubectl rollout undo deployment redis --to-revision 1\n  612  kubectl get pods\n  613  kubectl get pods\n  614  kubectl get pods\n  615  kubectl get pods\n  616  kubectl get pods\n  617  kubectl get pods\n  618  kubectl get pods\n  619  kubectl get pods\n  620  kubectl get pods -o json | jq -r .items[].spec.containers[0].image\n  621  kubectl get pods\n  622  kubectl get deployments\n  623  kubectl get rs\n  624  kubectl rollout history deployment redis\n  625  kubectl rollout undo deployment redis --to-revision 3\n  626  kubectl get rs\n  627  kubectl get rs\n  628  kubectl get rs\n  629  kubectl get rs\n  630  kubectl get rs\n  631  kubectl get rs\n  632  kubectl rollout history deployment redis\n  633  kubectl get deployment redis\n  634  kubectl get deployment redis -o yaml\n  635  clear\n  636  vi deploy.yaml\n  637  cat deploy.yaml \n  638  kubectl create -f deploy.yaml \n  639  vi deploy.yaml\n  640  clear\n  641  cat deploy.yaml \n  642  kubectl create -f deploy.yaml \n  643  kubectl get deployment\n  644  kubectl get rs\n  645  kubectl get pods\n  646  cat deploy.yaml \n  
647  kubectl rollout history deployment redis\n  648  kubectl edit deployment redis\n  649  kubectl rollout history deployment redis\n  650  kubectl set image deployment/redis redis=redis:3.2\n  651  kubectl rollout history deployment redis\n  652  kubectl set image deployment/redis redis=redis:3.9\n  653  kubectl rollout history deployment redis\n  654  clear\n  655  kubectl get pods\n  656  kubectl delete deployment redis\n  657  kubectl get pods\n  658  kubectl run game --image=runseb/2048\n  659  kubectl get deployments\n  660  kubectl get rs\n  661  kubectl get pods\n  662  kubectl get pods\n  663  kubectl get svc\n  664  ls -l\n  665  more svc.yaml \n  666  kubectl get pods\n  667  kubectl get pods --show-labels\n  668  vi svc.yaml \n  669  kubectl create -f svc.yaml \n  670  kubectl get svc\n  671  kubectl expose deployments game --port 80 --type NodePort\n  672  kubectl delete -f svc.yaml \n  673  kubectl get svc\n  674  kubectl create -f svc.yaml \n  675  kubectl get svc\n  676  clear\n  677  minikube ssh\n  678  clear\n  679  ls -l\n  680  clear\n  681  kubectl get svc\n  682  minikube service game\n  683  minikube service ghost\n  684  clear\n  685  kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/all-in-one/guestbook-all-in-one.yaml\n  686  kubectl get deployments\n  687  kubectl get svc\n  688  kubectl edit svc frontend\n  689  kubectl get svc\n  690  kubectl get pods\n  691  kubectl get pods\n  692  kubectl get pods\n  693  kubectl get pods\n  694  kubectl get pods\n  695  minikube service frontend\n  696  more svc.yaml \n  697  clear\n  698  minikube addons list\n  699  kubectl get pods -n kube-system\n  700  kubectl get pods nginx-ingress-controller-5g4z8 -o yaml -n kube-system\n  701  kubectl get pods\n  702  kubectl get svc\n  703  kubectl edit svc frontend\n  704  kubectl get svc\n  705  ls -l\n  706  cd manifests/\n  707  ls -l\n  708  cd 05-ingress-controller/\n  709  ls -l\n  710  pwd\n  711  more 
frontend.yaml \n  712  vi frontend.yaml \n  713  vi ghost.yaml \n  714  vi game.yaml \n  715  kubectl create -f frontend.yaml \n  716  kubectl create -f ghost.yaml \n  717  kubectl create -f game.yaml \n  718  kubectl get ingress\n  719  kubectl exec -ti nginx-ingress-controller-5g4z8 -- /bin/sh -n kube-system\n  720  kubectl exec -ti nginx-ingress-controller-5g4z8 -n kube-system -- /bin/sh\n  721  clear\n  722  clear\n  723  kubectl run mysql --image=mysql:5.5 --env MYSQL_ROOT_PASSWORD=root\n  724  kubectl expose deployment mysql --port 3306\n  725  kubectl get pods\n  726  kubectl delete deployments frontend redis-master redis-slave\n  727  kubectl get pods\n  728  kubectl exec -ti mysql-55d65b64bb-qzhk5 -- mysql -uroot -p\n  729  kubectl run wordpress --image=wordpress --env WORDPRESS_DB_PASSWORD=root --env WORDPRESS_DB_HOST=mysql\n  730  kubectl expose deployments wordpress --port 80\n  731  kubectl get pods\n  732  kubectl get pods\n  733  ls -l\n  734  more wordpress.yaml \n  735  kubectl create -f wordpress.yaml \n  736  kubectl get ingress\n  737  kubectl get ingress\n  738  kubectl get ingress\n  739  kubectl get ingress\n  740  kubectl get ingress\n  741  cat wordpress.yaml \n  742  pwd\n  743  history |grep kubectl\n  744  kubectl get pods\n  745  kubectl exec -ti mysql-55d65b64bb-qzhk5 -- mysql -uroot -p\n  746  kubectl exec -ti mysql-55d65b64bb-qzhk5 -- mysql -uroot -p\n  747  kubectl run -ti busybox --image=busybox -- /bin/sh\n  748  cler\n  749  clear\n  750  kubectl get pods\n  751  kubectl get pods mysql-55d65b64bb-qzhk5 -o yaml\n  752  clear\n  753  kubectl get secrets\n  754  kubectl create secret generic foobar --from-literal=password=root\n  755  kubectl get secrets\n  756  kubectl get secrets foobar -o yaml\n  757  echo \"cm9vdA==\" | base64 -D\n  758  ls -l\n  759  cd ..\n  760  ls -l\n  761  cd wordpress/\n  762  ls -l\n  763  more mysql-secret.yaml \n  764  kubectl create -f mysql-secret.yaml \n  765  kubectl get pods\n  766  kubectl get 
pods\n  767  kubectl logs mysql-secret\n  768  kubectl describe pods mysql-secret\n  769  more mysql-secret.yaml \n  770  kubectl get secrets\n  771  kubectl edit pod mysql-secret\n  772  kubectl delete pods mysql-secret\n  773  vi mysql-secret.yaml \n  774  kubectl create -f mysql-secret.yaml \n  775  kubectl get pods\n  776  kubectl exec -ti mysql-secret -- mysql -uroot -p\n  777  clear\n  778  kubectl get pods\n  779  ls -l\n  780  cd ..\n  781  ls -l\n  782  cd 06-volumes/\n  783  ls -l\n  784  more configmap.yaml \n  785  ls -l\n  786  vi foobar.md\n  787  kubectl create configmap foobar --from-file=foobar.md\n  788  kubectl get configmap\n  789  kubectl get configmap foobar -o yaml\n  790  ls -l\n  791  more configmap.yaml \n  792  pwd\n  793  kubectl create -f configmap.yaml \n  794  kubectl get pods\n  795  kubectl exec -ti cm-test -- /bin/sh\n  796  ls -l\n  797  more volumes.yaml \n  798  kubectl create -f volumes.yaml \n  799  kubectl get pods\n  800  kubectl exec -ti vol -c busy -- ls -l /busy\n  801  kubectl exec -ti vol -c busy -- touch /busy/busy\n  802  kubectl exec -ti vol -c busy -- ls -l /busy\n  803  kubectl exec -ti vol -c box -- ls -l /box\n  804  ls -l\n  805  kubectl get pods\n  806  kubectl delete pods cm-test vol mysql-secret\n  807  kubectl get pods\n  808  kubectl get pods\n  809  kubectl get pods\n  810  kubectl get pods\n  811  kubectl get pods\n  812  kubectl get pods\n  813  kubectl get pods\n  814  kubectl get pods\n  815  kubectl get pods\n  816  kubectl get pods\n  817  kubectl get pods\n  818  kubectl get pods\n  819  clear\n  820  kubectl get pods\n  821  kubectl get pods\n  822  kubectl get pods\n  823  kubectl get pods\n  824  kubectl get pods\n  825  kubectl get pods\n  826  kubectl get pods\n  827  kubectl get pods\n  828  clear\n  829  kubectl get pods\n  830  kubectl delete pods mysql-55d65b64bb-qzhk5\n  831  kubectl get pods\n  832  kubectl get pods\n  833  ls -l\n  834  more pvc.yaml \n  835  kubectl create -f pvc.yaml 
\n  836  kubectl get pvc\n  837  kubectl get pv\n  838  kubectl get storageclass\n  839  kubectl get storageclass standard -o yaml\n  840  kubectl get pv\n  841  kubectl get pv pvc-9a79e0ae-5e95-11e8-9bed-080027dd6acf -o yaml\n  842  ls -l /tmp/hostpath-provisioner/pvc-9a79e0ae-5e95-11e8-9bed-080027dd6acf\n  843  clear\n  844  ls -l\n  845  kubectl get pvc\n  846  more mysql.yaml \n  847  kubectl create -f mysql.yaml \n  848  kubectl get pods\n  849  kubectl exec -ti data -- mysql -uroot -p\n  850  kubectl get pods\n  851  kubectl delete pods data\n  852  kubectl get pods\n  853  kubectl get pods\n  854  kubectl get pods\n  855  kubectl get pods\n  856  minikube ssh\n  857  kubectl get pods\n  858  kubectl create -f mysql.yaml \n  859  kubectl get pods\n  860  kubectl exec -ti data -- mysql -uroot -p\n  861  clear\n  862  kubectl get pods\n  863  kubectl delete deployments ghost game busybox\n  864  which helm\n  865  helm\n  866  helm init\n  867  kubectl get pods -n kube-system\n  868  kubectl get pods -n kube-system -w\n  869  helm ls\n  870  helm repo list\n  871  helm search minio\n  872  helm inspect stable/minio\n  873  helm install stable/minio\n  874  kubectl get pods\n  875  helm ls\n  876  helm delete plinking-lambkin\n  877  helm ls\n  878  kubectl get pods\n  879  helm create oreilly\n  880  cd oreilly/\n  881  ls -l\n  882  tree .\n  883  cd templates/\n  884  ls -l\n  885  more service.yaml \n  886  cat ../values.yaml \n  887  cd ..\n  890  cd ..\n  892  cd ..\n  894  cd 07-crd/\n  897  more fruit.yml \n  898  kubectl get fruits\n  899  kubectl create -f fruit.yml \n  900  kubectl get fruits\n  901  more mango.yaml \n  902  more database.yml \n  903  vi mango.yaml \n  904  cat db.yml \n  905  vi mango.yaml \n  906  cat mango.yaml \n  907  kubectl create -f mango.yaml \n  908  kubectl get fruits\n  909  kubectl get fruit\n  910  kubectl get fr\n  911  kubectl get fr mango -o yaml"
  },
  {
    "path": "history/7062018/history.txt",
    "content": "  589  kubectl get pods\n  590  kubectl logs redis-dff85b6f4-4cs7w\n  591  kubectl exec -ti redis-dff85b6f4-4cs7w -- /bin/sh\n  592  clear\n  593  vi pod.yaml\n  594  cat pod.yaml \n  595  kubectl create -f pod.yaml \n  596  kubectl get pods\n  597  kubectl exec -ti oreilly -- redis-cli\n  598  cler\n  599  clear\n  600  kubectl get pods\n  601  kubectl scale deployments redis --replicas 4\n  602  kubectl get pods\n  603  kubectl get pods\n  604  kubectl scale deployments redis --replicas 2\n  605  kubectl get pods\n  606  kubectl get pods\n  607  clear\n  608  kubectl get pods\n  609  kubectl delete pods redis-dff85b6f4-4cs7w\n  610  kubectl get pods\n  611  kubectl delete pods oreilly\n  612  kubectl get pods\n  613  kubectl get pods\n  614  kubectl get replicasets\n  615  clear\n  616  vi rs.yaml\n  617  cat rs.yaml \n  618  kubectl get pods\n  619  kubectl create -f rs.yaml \n  620  kubectl get pods\n  621  kubectl delete pods oreilly-pw2rw\n  622  kubectl get pods\n  623  kubectl get pods\n  624  kubectl edit rs oreilly\n  625  kubectl get pods\n  626  kubectl get pods\n  627  kubectl get pods --show-labels\n  628  kubectl get pods -l app=oreilly\n  629  clear\n  630  more pod.yaml \n  631  more rs.yaml \n  632  clear\n  633  kubectl get\n  634  clear\n  635  curl localhost:8001\n  636  curl localhost:8001/api/v1\n  637  kubectl get pods -v=9\n  638  clear\n  639  curl localhost:8001/api/v1/namespaces/default/pods\n  640  curl localhost:8001/api/v1/namespaces/default/pods | jq -r\n  641  curl localhost:8001/api/v1/namespaces/default/pods | jq -r .items\n  642  curl localhost:8001/api/v1/namespaces/default/pods | jq -r .items[].metadata.name\n  643  curl -XDELETE localhost:8001/api/v1/namespaces/default/pods/oreilly-d6rdc\n  644  kubectl get pods\n  645  clear\n  646  clear\n  647  kubectl get pods\n  648  kubectl create -f pod.yaml \n  649  kubectl get pods\n  650  vi pod.yaml \n  651  kubectl create -f pod.yaml \n  652  kubectl get pods\n  653 
 kubectl create ns oreilly\n  654  vi pod.yaml \n  655  kubectl create -f pod.yaml \n  656  clear\n  657  kubectl get pods\n  658  kubectl get pods --namespace oreilly\n  659  kubectl get pods\n  660  kubectl get pods --namespace oreilly -v=9\n  661  kubectl get pods -v=9\n  662  clear\n  663  kubectl get ns\n  664  kubectl get namespace\n  665  kubectl create ns foobar\n  666  kubectl get namespace\n  667  kubectl get pods --all-namespaces\n  668  clear\n  669  kubectl create resourcequota test --hard=pods=6\n  670  kubectl get resourcequota\n  671  kubectl get pods\n  672  vi pod.yaml \n  673  kubectl create -f pod.yaml \n  674  kubectl create -f pod.yaml -n oreilly\n  675  clear\n  676  kubectl get pods\n  677  kubectl logs oreilly\n  678  kubectl describe pods oreilly\n  679  kubectl get pods -n oreilly\n  680  kubectl logs oreilly -n oreilly\n  681  kubectl describe oreilly -n oreilly\n  682  kubectl describe pods oreilly -n oreilly\n  683  kubectl logs oreilly -n oreilly\n  684  kubectl get pods -n oreilly\n  685  clear\n  686  kubectl get pods\n  687  kubectl delete rs oreilly redis\n  688  kubectl delete pods oreilly\n  689  kubectl get pods\n  690  clear\n  691  vi pod.yaml \n  692  kubectl create -f pod.yaml \n  693  kubectl get pods\n  694  vi svc.yaml\n  695  cat svc.yaml \n  696  kubectl create -f svc.yaml \n  697  kubectl get endpoints\n  698  kubectl get svc\n  699  kubectl get service\n  700  kubectl get endpoints\n  701  kubectl get pods\n  702  kubectl get pods --show-labels\n  703  kubectl label pods game app=game\n  704  kubectl get pods --show-labels\n  705  kubectl get endpoints\n  706  kubectl get svc\n  707  minikube service game\n  708  clear\n  709  kubectl get svc\n  710  kubectl run -it busybox --image=busybox -- /bin/sh\n  711  clest\n  712  clear\n  713  minikube stop\n  714  lear\n  715  clear\n  716  ls -l\n  717  clear\n  718  minikube delete\n  719  minikube start\n  720  cler\n  721  clear\n  722  kubectl get nodes\n  723  clear\n 
 724  clear\n  725  minikube status\n  726  kubectl get nodes\n  727  kubectl get nodes -v=9\n  728  clear\n  729  kubectl get pods\n  730  kubectl get rs\n  731  kubectl get svc\n  732  clear\n  733  minikube dashboard\n  734  kubectl get pods --all-namespaces\n  735  kubectl run game --image=runseb/2048\n  736  kubectl get pods\n  737  kubectl get rs\n  738  kubectl expose deployments game --port 80 --type NodePort\n  739  kubectl get svc\n  740  minikube service game\n  741  clear\n  742  kubectl run ghost --image=ghost\n  743  kubectl expose deployment ghost --port 2368 --type NodePort\n  744  kubectl get pods\n  745  kubectl get pods --show-labels\n  746  kubectl get pods --show-labels\n  747  kubectl get svc\n  748  minikube service ghost\n  749  clear\n  750  more pod.yaml \n  751  more rs.yaml \n  752  more svc.yaml \n  753  kubectl run ghost --image=ghost --help\n  754  clear\n  755  more rs.yaml \n  756  clear\n  757  kubectl get pods\n  758  kubectl get pods -o yaml game-755c6b9b8c-rm68v --export\n  759  clear\n  760  kubectl run --generator=v1/pod game --image=ghost\n  761  kubectl run --generator=run-pod/v1 game --image=ghost\n  762  kubectl run --generator=run-pod/v1 game --image=ghost --dry-run\n  763  kubectl run --generator=run-pod/v1 game --image=ghost --dry-run -o yaml\n  764  kubectl run ghost --image=ghost \n  765  kubectl expose deployment ghost --port 2368 --type NodePort\n  766  cler\n  767  clear\n  768  kubectl create ns oreilly\n  769  kubectl run ghost --image=ghost -n oreilly\n  770  kubectl get pods --all-namespaces\n  771  kubectl delete ns oreilly\n  772  clear\n  773  kubectl get pods\n  774  kubectl delete pods game\n  775  kubectl get pods\n  776  kubectl set image deployment game game=runseb/4096\n  777  kubectl get pods\n  778  kubectl scale deployment game --replicas 4\n  779  kubectl get pods\n  780  kubectl get pods\n  781  kubectl set image deployment game game=nginx\n  782  kubectl get pods -w\n  783  kubectl get pods\n  
784  clear\n  785  kubectl get pods\n  786  kubectl get pods\n  787  kubectl set image deployment game game=runseb/4096\n  788  kubectl get pods\n  789  kubectl get pods\n  790  kubectl get pods\n  791  kubectl get pods -o json |jq -r .items[]\n  792  kubectl get pods -o json |jq -r .items[].spec\n  793  kubectl get pods -o json |jq -r .items[].spec.containers[0]\n  794  kubectl get pods -o json |jq -r .items[].spec.containers[0].image\n  795  kubectl get pods\n  796  clear\n  797  kubectl get deployments\n  798  kubectl get rs\n  799  kubectl get rs game-5fb9959fbc -o yaml\n  800  kubectl get rs\n  801  kubectl get rs game-755c6b9b8c -o yaml\n  802  kubectl get rs\n  803  kubectl get rs game-d4bfc6874 -o yaml\n  804  kubectl get rs\n  805  kubectl rollout history deployment game\n  806  kubectl rollout undo deployment game --to-revision 1\n  807  kubectl get rs\n  808  kubectl get rs\n  809  kubectl get rs -w\n  810  kubectl get pods\n  811  kubectl get pods -o json |jq -r .items[].spec.containers[0].image\n  812  kubectl rollout history deployment game\n  813  kubectl rollout undo deployment game --to-revision 3\n  814  kubectl get rs -w\n  815  kubectl get pods\n  816  kubectl rollout history deployment game\n  817  kubectl rollout undo deployment game --to-revision 4\n  818  kubectl get pods\n  819  kubectl get rs\n  820  clear\n  821  kubectl get deployment\n  822  kubectl get deployment game -o yaml\n  823  clear\n  824  kubectl create -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/all-in-one/guestbook-all-in-one.yaml\n  825  kubectl get deployments\n  826  kubectl get svc\n  827  kubectl get pods\n  828  kubectl get pods\n  829  kubectl rollout history deployment game\n  830  kubectl rollout undo deployment game --to-revision 5\n  831  kubectl get pods\n  832  kubectl scale deployment game --replicas 1\n  833  kubectl get pods\n  834  kubectl exec -ti redis-master-55db5f7567-vhnzb -- redis-cli\n  835  kubectl scale deployment 
redis-slave --replicas 5\n  836  kubectl exec -ti redis-master-55db5f7567-vhnzb -- redis-cli info\n  837  kubectl getpods\n  838  kubectl get pods\n  839  kubectl get svc\n  840  kubectl edit svc frontend\n  841  kubectl get svc\n  842  minikube service frontend\n  843  kubectl edit svc frontend\n  844  clear\n  845  kubectl get deployments\n  846  kubectl get deployments game -o yaml\n  847  clear\n  848  minikube addons list\n  849  kubectl get pods -n kube-system\n  850  pwd\n  851  ls -l\n  852  cd manifests\n  853  ls -l\n  854  cd 05-ingress-controller/\n  855  ls -l\n  856  kubectl get svc\n  857  kubectl edit svc frontend\n  858  kubectl edit svc game\n  859  kubectl edit svc ghost\n  860  kubectl get svc\n  861  ls -l\n  862  more game.yaml \n  863  more ghost.yaml \n  864  more frontend.yaml \n  865  kubectl create -f game.yaml -f ghost.yaml -f frontend.yaml \n  866  kubectl get ingress\n  867  nslookup frontend.192.168.99.100.nip.io\n  868  nslookup game.192.168.99.100.nip.io\n  869  kubectl get ingress\n  870  clear\n  871  kubectl get ingress\n  872  kubectl get pods -n kube-system\n  873  kubectl edit ingress frontend\n  874  ls -l\n  875  vi frontend.yaml \n  876  kubectl replace -f frontend.yaml \n  877  kubectl get pods\n  878  kubectl get ingress\n  879  clear\n  880  kubectl get pods -n kube-system\n  881  kubectl get pods nginx-ingress-controller-rnncf -o yaml -n kube-system\n  882  kubectl exec -ti nginx-ingress-controller-rnncf -- /bin/sh -n kube-system\n  883  kubectl exec -ti nginx-ingress-controller-rnncf -n kube-system -- /bin/sh\n  884  clear\n  885  kubectl run mysql --image=mysql:5.5 --env MYSQL_ROOT_PASSWORD=root\n  886  kubectl expose deployment mysql --port 3306\n  887  kubectl run wordpress --image=wordpress --env WORDPRESS_DB_HOST=mysql --env WORDPRESS_DB_PASSWORD=root\n  888  kubectl expose deployment wordpress --port 80\n  889  kubectl get pods\n  890  mkdir pkg\n  891  cd pkg\n  892  ls -l\n  893  kubectl get deployment 
wordpress -o yaml --export > wordpress.yaml\n  894  vi wordpress.yaml \n  895  clear\n  896  cd ..\n  897  kubectl get pods\n  898  ls -l\n  899  more wordpress.yaml \n  900  kubectl create -f wordpress.yaml \n  901  kubectl get ingress\n  902  kubectl get ingress\n  903  kubectl get ingress\n  904  kubectl get ingress\n  905  clear\n  906  kubectl get deployments\n  907  kubectl get pods\n  908  kubectl exec -ti mysql-55d65b64bb-fhqbd -- mysql -uroot -p\n  909  clear\n  910  kubectl get pods\n  911  kubectl get pods mysql-55d65b64bb-fhqbd -o yaml\n  912  clear\n  913  kubectl create secret generic mysql --from-literal=password=root\n  914  kubectl get secrets\n  915  kubectl get secrets mysql -o yaml\n  916  ls -l\n  917  kubectl create configmap foobar --from-file=ghost.yaml \n  918  kubectl get configmap\n  919  kubectl get cm\n  920  kubectl get cm foobar -o yaml\n  921  clear\n  922  kubectl get secret\n  923  kubectl get cm\n  924  kubectl get secret mysql -o yaml\n  925  echo \"cm9vdA==\" | base64 -D\n  926  ls -l\n  927  cd ..\n  928  ls -l\n  929  cd wordpress/\n  930  ls -l\n  931  pwd\n  932  more mysql-secret.yaml \n  933  kubectl get secrets\n  934  vi mysql-secret.yaml \n  935  kubectl create -f mysql-secret.yaml \n  936  kubectl get pods\n  937  kubectl exec -ti mysql-secret -- mysql -uroot -p\n  938  more mysql\n  939  more mysql-secret.yaml \n  940  ls -l\n  941  cd //\n  942  pwd\n  943  cd ~/gitforks/oreilly-kubernetes/\n  944  ls -l\n  945  cd manifests/\n  946  ls -l\n  947  cd configmaps/\n  948  ls -l\n  949  pwd\n  950  more configmap.yaml \n  951  ls -l\n  952  more pod.yaml \n  953  clear\n  954  kubectl get pods\n  955  kubectl delete pods mysql-55d65b64bb-fhqbd\n  956  kubectl get pods\n  957  kubectl get pods\n  958  cd ..\n  959  ls -l\n  960  clear\n  961  cd 06-volumes/\n  962  ls -l\n  963  more volumes.yaml \n  964  kubectl create -f volumes.yaml \n  965  kubectl get pods\n  966  kubectl get pods\n  967  kubectl delete deployments 
redis-master redis-slave frontend \n  968  kubectl delete pods mysql-secret\n  969  clear\n  970  kubectl get pods\n  971  kubectl exec -ti vol -c box -- ls -l /box\n  972  kubectl exec -ti vol -c busy -- ls -l /busy\n  973  kubectl exec -ti vol -c busy -- touch /box/foobar\n  974  kubectl exec -ti vol -c busy -- ls -l /busy\n  975  kubectl exec -ti vol -c busy -- /bin/sh\n  976  kubectl exec -ti vol -c busy -- ls -l /busy\n  977  kubectl exec -ti vol -c box -- ls -l /box\n  978  kubectl exec -ti vol -c box -- cat /box/foobar\n  979  more volumes.yaml \n  980  ls -l\n  981  clear\n  982  kubectl get pv\n  983  kubectl get pvc\n  984  ls -l\n  985  more pvc.yaml \n  986  kubectl create -f pvc.yaml \n  987  kubectl get pvc\n  988  kubectl get pv\n  989  kubectl get storageclass\n  990  kubectl get storageclass standard -o yaml\n  991  kubectl get pv\n  992  kubectl get pv pvc-5d651b5c-6a5f-11e8-825a-08002750803a -o yaml\n  993  ls -l\n  994  more mysql.yaml \n  995  kubectl get pvc\n  996  clear\n  997  kubectl get pvc\n  998  kubectl get pv\n  999  kubectl create -f mysql.yaml \n 1000  kubectl get pods\n 1001  kubectl exec -ti data -- mysql -uroot -p\n 1002  kubectl get pods\n 1003  kubectl delete pods data\n 1004  kubectl get pods\n 1005  kubectl get pv\n 1006  kubectl get pods\n 1007  kubectl create -f mysql.yaml \n 1008  kubectl get pods\n 1009  kubectl get pods\n 1010  kubectl exec -ti data -- mysql -uroot -p\n 1011  clear\n 1012  python\n 1013  ipyton\n 1014  ipython\n 1015  y\n 1016  clear\n 1017  cd ..\n 1018  ls -l\n 1019  cd 07-crd/\n 1020  ls -l\n 1021  clear\n 1022  l s-l\n 1023  ls -l\n 1024  more fruit.yml \n 1025  kubectl get fruits\n 1026  kubectl create -f fruit.yml \n 1027  kubectl get fruits\n 1028  more mango.yaml \n 1029  kubectl create -f mango.yaml \n 1030  kubectl get fruits\n 1031  kubectl get fruits mango -o yaml\n 1032  kubectl get fruits -v=9\n 1033  clear\n 1034  kubectl get fruits\n 1035  kubectl get fruit\n 1036  pwd\n 1037  ls -l\n 
1038  clear\n 1039  which helm\n 1040  helm repo list\n 1041  helm search redis\n 1042  helm install stable/redis\n 1043  helm init\n 1044  kubectl get pods --all-namespaces\n 1045  kubectl get pods --all-namespaces\n 1046  kubectl get pods --all-namespaces\n 1047  kubectl get pods --all-namespaces\n 1048  clear\n 1049  helm install stable/redis\n 1050  kubectl get pods\n 1051  helm ls\n 1052  ls -l\n 1053  cd ..\n 1054  ls-l\n 1055  ls -\n 1056  ls -l\n 1057  helm create oreilly\n 1058  cd oreilly/\n 1059  tree\n 1060  cat templates/service.yaml \n 1061  more values.yaml \n"
  },
  {
    "path": "kusto/base/kustomization.yaml",
    "content": "resources:\n- pod.yaml\n"
  },
  {
    "path": "kusto/base/pod.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: kusto\nspec:\n  containers:\n  - name: test\n    image: nginx\n"
  },
  {
    "path": "kusto/overlays/dev/kustomization.yaml",
    "content": "resources:\n- ../../base\ncommonLabels:\n  stage: dev\n"
  },
  {
    "path": "kusto/overlays/prod/kustomization.yaml",
    "content": "resources:\n- ../../base\ncommonLabels:\n  stage: prod\n"
  },
  {
    "path": "manifests/01-pod/README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# Pod chapter\nFirst lab is just a basic pod running a redis cache image.\nFile : redis.yaml\n\nSecond lab adds a namespace \"oreilly\" and a ResourceQuota\n> kubectl create ns oreilly\nFile : rq.yaml\n"
  },
  {
    "path": "manifests/01-pod/busybox.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox\n  namespace: default\nspec:\n  containers:\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    imagePullPolicy: IfNotPresent\n    name: busybox\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/01-pod/foobar.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: foobar\n  namespace: default\nspec:\n  containers:\n    - image: ghost\n      name: ghost\n"
  },
  {
    "path": "manifests/01-pod/lifecycle.yaml",
    "content": "kind:                   Deployment\napiVersion:             apps/v1beta1\nmetadata:\n  name:                 loap\nspec:\n  replicas:             1\n  template:\n    metadata:\n      labels:\n        app:            loap\n    spec:\n      initContainers:\n      - name:           init\n        image:          busybox\n        command:       ['sh', '-c', 'echo $(date +%s): INIT >> /loap/timing']\n        volumeMounts:\n        - mountPath:    /loap\n          name:         timing\n      containers:\n      - name:           main\n        image:          busybox\n        command:       ['sh', '-c', 'echo $(date +%s): START >> /loap/timing;\nsleep 10; echo $(date +%s): END >> /loap/timing;']\n        volumeMounts:\n        - mountPath:    /loap\n          name:         timing\n        livenessProbe:\n          exec:\n            command:   ['sh', '-c', 'echo $(date +%s): LIVENESS >> /loap/timing']\n        readinessProbe:\n          exec:\n            command:   ['sh', '-c', 'echo $(date +%s): READINESS >> /loap/timing']\n        lifecycle:\n          postStart:\n            exec:\n              command:   ['sh', '-c', 'echo $(date +%s): POST-START >> /loap/timing']\n          preStop:\n            exec:\n              command:  ['sh', '-c', 'echo $(date +%s): PRE-HOOK >> /loap/timing']\n      volumes:\n      - name:           timing\n        hostPath:\n          path:         /tmp/loap"
  },
  {
    "path": "manifests/01-pod/multi.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: multi\n  namespace: oreilly\nspec:\n  containers:\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    imagePullPolicy: IfNotPresent\n    name: busybox\n  - image: redis\n    name: redis\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/01-pod/redis.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: redis\nspec:\n  containers:\n  - image: redis:3.2\n    imagePullPolicy: IfNotPresent\n    name: mysql\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/02-quota/README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# Quota chapter\nFirst example file is quota.yaml, to create a simple ResourceQuota\n\nSecond example, rq.yaml, is more complex and will create a namespace, a ResourceQuota and a pod\n\n"
  },
  {
    "path": "manifests/02-quota/quota.yaml",
    "content": "apiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: counts\n  namespace: oreilly\nspec:\n  hard:\n    pods: \"1\"\n"
  },
  {
    "path": "manifests/02-quota/rq.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: oreilly\n---\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: counts\n  namespace: oreilly\nspec:\n  hard:\n    pods: \"1\"\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: redis\n  namespace: oreilly\nspec:\n  containers:\n  - image: redis:3.2\n    imagePullPolicy: IfNotPresent\n    name: mysql\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/02-quota/rq.yaml.fmn",
    "content": "apiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: object-counts\nspec:\n  hard:\n    pods: 1\n"
  },
  {
    "path": "manifests/03-rs/README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# ReplicaSet \nTwo files presented as an example.\nredis-rc.yaml starts a RS with a redis image, with 2 replicas\nrs-example.yaml starts a RS with an nginx image, with 3 replicas\n\nBoth are using a label to identify the pods, either an app label, or a \"color\" label.\n"
  },
  {
    "path": "manifests/03-rs/redis-rc.yaml",
    "content": "apiVersion: v1\nkind: ReplicationController\nmetadata:\n  name: redis\n  namespace: default\nspec:\n  replicas: 2\n  selector:\n    app: redis\n  template:\n    metadata:\n      name: redis\n      labels:\n        app: redis\n    spec:\n      containers:\n      - image: redis:3.2\n        name: redis\n\n"
  },
  {
    "path": "manifests/03-rs/rs-example.yml",
    "content": "apiVersion: extensions/v1beta1\nkind: ReplicaSet\nmetadata:\n  name: foo\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      red: blue\n  template:\n    metadata:\n      name: foo\n      labels:\n        red: blue\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n"
  },
  {
    "path": "manifests/03-rs/rs.yaml",
    "content": "apiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n  name: lodh\nspec:\n  replicas: 5\n  selector:\n    matchLabels:\n      bank: lodh\n  template:\n    metadata:\n      name: pod\n      labels:\n        bank: lodh\n    spec:\n      containers:\n      - name: one\n        image: redis\n"
  },
  {
    "path": "manifests/04-services/README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# Services chapter\nsvc.yaml creates a simple Service that exposes port 80 of a pod that matches the selector \"red: blue\"\n"
  },
  {
    "path": "manifests/04-services/headless.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: myexternaldb\n  namespace: default\nspec:\n  ports:\n    - protocol: TCP\n      port: 3306\n      targetPort: 3306\n---\napiVersion: v1\nkind: Endpoints\nmetadata:\n  name: myexternaldb\nsubsets:\n  - addresses:\n    ips:\n    - 1.2.3.4\n    ports:\n    - port: 3306\n\n"
  },
  {
    "path": "manifests/04-services/svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  namespace: default\nspec:\n  selector:\n    red: blue\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n  type: NodePort\n"
  },
  {
    "path": "manifests/05-ingress-controller/README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# Ingress Controller chapter\nThis chapter includes the creation of a simple Ingress controler attached to an nginx pod\nghost.yaml is just an example of a controller creation\ningress.yaml will create the full deployment\n\nwordpress.yaml and game.yaml will create each an ingress controller, servicing port 80 for services named game and ingress\nfrontend.yaml creates also an ingress controller for port 80 of an nginx service\n\nbackend.yaml is a more complex example, which will create two Replication controllers, http-backend and nginx-ingress-controller, along with the necessary services."
  },
  {
    "path": "manifests/05-ingress-controller/backend.yaml",
    "content": "# https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/default-backend.yaml\napiVersion: v1\nkind: ReplicationController\nmetadata:\n  name: default-http-backend\nspec:\n  replicas: 1\n  selector:\n    app: default-http-backend\n  template:\n    metadata:\n      labels:\n        app: default-http-backend\n    spec:\n      terminationGracePeriodSeconds: 60      \n      containers:\n      - name: default-http-backend\n        # Any image is permissable as long as:\n        # 1. It serves a 404 page at /\n        # 2. It serves 200 on a /healthz endpoint\n        image: gcr.io/google_containers/defaultbackend:1.0\n        livenessProbe:\n          httpGet:\n            path: /healthz\n            port: 8080\n            scheme: HTTP\n          initialDelaySeconds: 30\n          timeoutSeconds: 5\n        ports:\n        - containerPort: 8080\n        resources:\n          limits:\n            cpu: 10m\n            memory: 20Mi\n          requests:\n            cpu: 10m\n            memory: 20Mi\n---\n# create a service for the default backend\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: default-http-backend\n  name: default-http-backend\nspec:\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 8080\n  selector:\n    app: default-http-backend\n  sessionAffinity: None\n  type: ClusterIP\n---\n# Replication controller for the load balancer\napiVersion: v1\nkind: ReplicationController\nmetadata:\n  name: nginx-ingress-controller\n  labels:\n    k8s-app: nginx-ingress-lb\nspec:\n  replicas: 1\n  selector:\n    k8s-app: nginx-ingress-lb\n  template:\n    metadata:\n      labels:\n        k8s-app: nginx-ingress-lb\n        name: nginx-ingress-lb\n    spec:\n      terminationGracePeriodSeconds: 60      \n      containers:\n      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.2\n        name: nginx-ingress-lb\n        imagePullPolicy: Always\n        livenessProbe:\n          httpGet:\n  
          path: /healthz\n            port: 10249\n            scheme: HTTP\n          initialDelaySeconds: 30\n          timeoutSeconds: 5\n        # use downward API\n        env:\n          - name: POD_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.name\n          - name: POD_NAMESPACE\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.namespace\n        ports:\n        - containerPort: 80\n          hostPort: 80\n        - containerPort: 443\n          hostPort: 443\n        args:\n        - /nginx-ingress-controller\n        - --default-backend-service=default/default-http-backend\n"
  },
  {
    "path": "manifests/05-ingress-controller/frontend.yaml",
    "content": "apiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: frontend\nspec:\n  rules:\n  - host: frontend.192.168.99.100.nip.io\n    http:\n      paths:\n      - backend:\n          serviceName: frontend\n          servicePort: 80\n"
  },
  {
    "path": "manifests/05-ingress-controller/game.yaml",
    "content": "apiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: game\nspec:\n  rules:\n  - host: game.192.168.99.100.nip.io\n    http:\n      paths:\n      - backend:\n          serviceName: game\n          servicePort: 80\n"
  },
  {
    "path": "manifests/05-ingress-controller/ghost.yaml",
    "content": "apiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: ghost\nspec:\n  rules:\n  - host: ghost.192.168.99.100.nip.io\n    http:\n      paths:\n      - backend:\n          serviceName: ghost\n          servicePort: 2368\n"
  },
  {
    "path": "manifests/05-ingress-controller/ingress.yaml",
    "content": "\n# ghost app\napiVersion: v1\nkind: Pod\nmetadata:\n  name: nginx\n  labels:\n    run: nginx\nspec:\n  containers:\n  - image: nginx\n    name: nginx\n    ports:\n    - containerPort: 80\n      protocol: TCP\n---\n# ghost service #1\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n  selector:\n    run: nginx\n---\n# Create the ingress resource\n# https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/ingress.yaml\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: nginx\nspec:\n  rules:\n  - host: nginx.192.168.99.100.nip.io\n    http:\n      paths:\n      - backend:\n          serviceName: nginx\n          servicePort: 80\n"
  },
  {
    "path": "manifests/05-ingress-controller/wordpress.yaml",
    "content": "apiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: wordpress\nspec:\n  rules:\n  - host: wordpress.192.168.99.100.nip.io\n    http:\n      paths:\n      - backend:\n          serviceName: wordpress\n          servicePort: 80\n"
  },
  {
    "path": "manifests/06-volumes/README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# Volumes Controller chapter\nThe first Volumes exercice is to create a shared volume between two pods, and experiment with emptyDir & mountPath\nFile : volumes.yaml\n\nSecond step is to work with Persistent volumes & claims : pcv.yaml\n\nOther files provided for discussion"
  },
  {
    "path": "manifests/06-volumes/cm-vol.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: oreilly\n  labels:\n    app: vol\nspec:\n  containers:\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    volumeMounts:\n    - mountPath: /oreilly\n      name: test\n    imagePullPolicy: IfNotPresent\n    name: busybox\n  restartPolicy: Always\n  volumes:\n  - name: test\n    configMap:\n      name: foobar\n"
  },
  {
    "path": "manifests/06-volumes/configmap.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: cm-test\n  labels:\n    app: vol\nspec:\n  containers:\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    volumeMounts:\n    - mountPath: /velocity\n      name: test\n    imagePullPolicy: IfNotPresent\n    name: busybox\n  restartPolicy: Always\n  volumes:\n  - name: test\n    configMap:\n      name: foobar\n"
  },
  {
    "path": "manifests/06-volumes/foobar.md",
    "content": "# this is a file\n\nthis is an example\n"
  },
  {
    "path": "manifests/06-volumes/hostpath.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: hostpath\nspec:\n  containers:\n  - image: busybox\n    name: busybox\n    command:\n    - sleep\n    - \"3600\"\n    volumeMounts:\n    - mountPath: /bitnami\n      name: foobar\n  volumes:\n  - name: foobar\n    persistentVolumeClaim:\n      claimName: myclaim\n"
  },
  {
    "path": "manifests/06-volumes/mysql.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: data\nspec:\n  containers:\n  - image: mysql:5.5\n    name: db\n    volumeMounts:\n    - mountPath: /var/lib/mysql\n      name: barfoo\n    env:\n      - name: MYSQL_ROOT_PASSWORD\n        value: root\n  volumes:\n  - name: barfoo\n    persistentVolumeClaim:\n      claimName: foobar\n"
  },
  {
    "path": "manifests/06-volumes/oreilly/.helmignore",
    "content": "# Patterns to ignore when building packages.\n# This supports shell glob matching, relative path matching, and\n# negation (prefixed with !). Only one pattern per line.\n.DS_Store\n# Common VCS dirs\n.git/\n.gitignore\n.bzr/\n.bzrignore\n.hg/\n.hgignore\n.svn/\n# Common backup files\n*.swp\n*.bak\n*.tmp\n*~\n# Various IDEs\n.project\n.idea/\n*.tmproj\n"
  },
  {
    "path": "manifests/06-volumes/oreilly/Chart.yaml",
    "content": "apiVersion: v1\nappVersion: \"1.0\"\ndescription: A Helm chart for Kubernetes\nname: oreilly\nversion: 0.1.0\n"
  },
  {
    "path": "manifests/06-volumes/oreilly/templates/NOTES.txt",
    "content": "1. Get the application URL by running these commands:\n{{- if .Values.ingress.enabled }}\n{{- range .Values.ingress.hosts }}\n  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}\n{{- end }}\n{{- else if contains \"NodePort\" .Values.service.type }}\n  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath=\"{.spec.ports[0].nodePort}\" services {{ template \"oreilly.fullname\" . }})\n  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath=\"{.items[0].status.addresses[0].address}\")\n  echo http://$NODE_IP:$NODE_PORT\n{{- else if contains \"LoadBalancer\" .Values.service.type }}\n     NOTE: It may take a few minutes for the LoadBalancer IP to be available.\n           You can watch the status of by running 'kubectl get svc -w {{ template \"oreilly.fullname\" . }}'\n  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template \"oreilly.fullname\" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')\n  echo http://$SERVICE_IP:{{ .Values.service.port }}\n{{- else if contains \"ClusterIP\" .Values.service.type }}\n  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l \"app={{ template \"oreilly.name\" . }},release={{ .Release.Name }}\" -o jsonpath=\"{.items[0].metadata.name}\")\n  echo \"Visit http://127.0.0.1:8080 to use your application\"\n  kubectl port-forward $POD_NAME 8080:80\n{{- end }}\n"
  },
  {
    "path": "manifests/06-volumes/oreilly/templates/_helpers.tpl",
    "content": "{{/* vim: set filetype=mustache: */}}\n{{/*\nExpand the name of the chart.\n*/}}\n{{- define \"oreilly.name\" -}}\n{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix \"-\" -}}\n{{- end -}}\n\n{{/*\nCreate a default fully qualified app name.\nWe truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).\nIf release name contains chart name it will be used as a full name.\n*/}}\n{{- define \"oreilly.fullname\" -}}\n{{- if .Values.fullnameOverride -}}\n{{- .Values.fullnameOverride | trunc 63 | trimSuffix \"-\" -}}\n{{- else -}}\n{{- $name := default .Chart.Name .Values.nameOverride -}}\n{{- if contains $name .Release.Name -}}\n{{- .Release.Name | trunc 63 | trimSuffix \"-\" -}}\n{{- else -}}\n{{- printf \"%s-%s\" .Release.Name $name | trunc 63 | trimSuffix \"-\" -}}\n{{- end -}}\n{{- end -}}\n{{- end -}}\n\n{{/*\nCreate chart name and version as used by the chart label.\n*/}}\n{{- define \"oreilly.chart\" -}}\n{{- printf \"%s-%s\" .Chart.Name .Chart.Version | replace \"+\" \"_\" | trunc 63 | trimSuffix \"-\" -}}\n{{- end -}}\n"
  },
  {
    "path": "manifests/06-volumes/oreilly/templates/deployment.yaml",
    "content": "apiVersion: apps/v1beta2\nkind: Deployment\nmetadata:\n  name: {{ template \"oreilly.fullname\" . }}\n  labels:\n    app: {{ template \"oreilly.name\" . }}\n    chart: {{ template \"oreilly.chart\" . }}\n    release: {{ .Release.Name }}\n    heritage: {{ .Release.Service }}\nspec:\n  replicas: {{ .Values.replicaCount }}\n  selector:\n    matchLabels:\n      app: {{ template \"oreilly.name\" . }}\n      release: {{ .Release.Name }}\n  template:\n    metadata:\n      labels:\n        app: {{ template \"oreilly.name\" . }}\n        release: {{ .Release.Name }}\n    spec:\n      containers:\n        - name: {{ .Chart.Name }}\n          image: \"{{ .Values.image.repository }}:{{ .Values.image.tag }}\"\n          imagePullPolicy: {{ .Values.image.pullPolicy }}\n          ports:\n            - name: http\n              containerPort: 80\n              protocol: TCP\n          livenessProbe:\n            httpGet:\n              path: /\n              port: http\n          readinessProbe:\n            httpGet:\n              path: /\n              port: http\n          resources:\n{{ toYaml .Values.resources | indent 12 }}\n    {{- with .Values.nodeSelector }}\n      nodeSelector:\n{{ toYaml . | indent 8 }}\n    {{- end }}\n    {{- with .Values.affinity }}\n      affinity:\n{{ toYaml . | indent 8 }}\n    {{- end }}\n    {{- with .Values.tolerations }}\n      tolerations:\n{{ toYaml . | indent 8 }}\n    {{- end }}\n"
  },
  {
    "path": "manifests/06-volumes/oreilly/templates/ingress.yaml",
    "content": "{{- if .Values.ingress.enabled -}}\n{{- $fullName := include \"oreilly.fullname\" . -}}\n{{- $servicePort := .Values.service.port -}}\n{{- $ingressPath := .Values.ingress.path -}}\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: {{ $fullName }}\n  labels:\n    app: {{ template \"oreilly.name\" . }}\n    chart: {{ template \"oreilly.chart\" . }}\n    release: {{ .Release.Name }}\n    heritage: {{ .Release.Service }}\n{{- with .Values.ingress.annotations }}\n  annotations:\n{{ toYaml . | indent 4 }}\n{{- end }}\nspec:\n{{- if .Values.ingress.tls }}\n  tls:\n  {{- range .Values.ingress.tls }}\n    - hosts:\n      {{- range .hosts }}\n        - {{ . }}\n      {{- end }}\n      secretName: {{ .secretName }}\n  {{- end }}\n{{- end }}\n  rules:\n  {{- range .Values.ingress.hosts }}\n    - host: {{ . }}\n      http:\n        paths:\n          - path: {{ $ingressPath }}\n            backend:\n              serviceName: {{ $fullName }}\n              servicePort: http\n  {{- end }}\n{{- end }}\n"
  },
  {
    "path": "manifests/06-volumes/oreilly/templates/service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: {{ template \"oreilly.fullname\" . }}\n  labels:\n    app: {{ template \"oreilly.name\" . }}\n    chart: {{ template \"oreilly.chart\" . }}\n    release: {{ .Release.Name }}\n    heritage: {{ .Release.Service }}\nspec:\n  type: {{ .Values.service.type }}\n  ports:\n    - port: {{ .Values.service.port }}\n      targetPort: http\n      protocol: TCP\n      name: http\n  selector:\n    app: {{ template \"oreilly.name\" . }}\n    release: {{ .Release.Name }}\n"
  },
  {
    "path": "manifests/06-volumes/oreilly/values.yaml",
    "content": "# Default values for oreilly.\n# This is a YAML-formatted file.\n# Declare variables to be passed into your templates.\n\nreplicaCount: 1\n\nimage:\n  repository: nginx\n  tag: stable\n  pullPolicy: IfNotPresent\n\nservice:\n  type: ClusterIP\n  port: 80\n\ningress:\n  enabled: false\n  annotations: {}\n    # kubernetes.io/ingress.class: nginx\n    # kubernetes.io/tls-acme: \"true\"\n  path: /\n  hosts:\n    - chart-example.local\n  tls: []\n  #  - secretName: chart-example-tls\n  #    hosts:\n  #      - chart-example.local\n\nresources: {}\n  # We usually recommend not to specify default resources and to leave this as a conscious\n  # choice for the user. This also increases chances charts run on environments with little\n  # resources, such as Minikube. If you do want to specify resources, uncomment the following\n  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.\n  # limits:\n  #  cpu: 100m\n  #  memory: 128Mi\n  # requests:\n  #  cpu: 100m\n  #  memory: 128Mi\n\nnodeSelector: {}\n\ntolerations: []\n\naffinity: {}\n"
  },
  {
    "path": "manifests/06-volumes/pv.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: pvfoo\nspec:\n  capacity:\n    storage: 1Gi\n  accessModes:\n    - ReadWriteOnce\n  hostPath:\n    path: \"/tmp/foo0001\"\n"
  },
  {
    "path": "manifests/06-volumes/pvc.yaml",
    "content": "kind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: foobar\nspec:\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 1Gi\n"
  },
  {
    "path": "manifests/06-volumes/volumes.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: vol\n  labels:\n    app: vol\nspec:\n  containers:\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    volumeMounts:\n    - mountPath: /busy\n      name: test\n    imagePullPolicy: IfNotPresent\n    name: busy\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    volumeMounts:\n    - mountPath: /box\n      name: test\n    imagePullPolicy: IfNotPresent\n    name: box\n  restartPolicy: Always\n  volumes:\n  - name: test\n    emptyDir: {}\n"
  },
  {
    "path": "manifests/07-crd/README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# Custom Resources Definition chapter\nSeveral examples are provided.\ndatabase.yaml is the one from the syllabus, but as you may create any type of CRD, the other manifests are here to broaden your mind :-)"
  },
  {
    "path": "manifests/07-crd/bd.yml",
    "content": "apiVersion: foo.bar/v1 \nkind: DataBase\nmetadata:\n  name: crazy\ndata:\n  oracle: mysql\n"
  },
  {
    "path": "manifests/07-crd/database.yml",
    "content": "apiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n  name: databases.foo.bar\nspec:\n  group: foo.bar\n  version: v1\n  scope: Namespaced\n  names:\n    plural: databases\n    singular: database\n    kind: DataBase\n    shortNames:\n    - db\n"
  },
  {
    "path": "manifests/07-crd/db.yml",
    "content": "apiVersion: foo.bar/v1 \nkind: DataBase\nmetadata:\n  name: my-new-db \nspec:\n  type: mysql\n"
  },
  {
    "path": "manifests/08-security/README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# Security and RBAC chapter\nThis example states how you can restrict what a container is allowed to do within a k8s cluster : test.yaml"
  },
  {
    "path": "manifests/08-security/nginx.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: nginxsec\nspec:\n  containers:\n  - image: nginx\n    name: nginx\n    securityContext:\n      runAsNonRoot: true\n"
  },
  {
    "path": "manifests/08-security/test.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: redis\nspec:\n  containers:\n  - image: bitnami/redis\n    imagePullPolicy: IfNotPresent\n    env:\n    - name: ALLOW_EMPTY_PASSWORD\n      value: \"yes\"\n    name: redis\n    securityContext:\n      runAsNonRoot: true\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/README.md",
    "content": "![oreilly-logo](./images/oreilly.png) ![k8s](./images/k8s.png)\n\n# Manifests\nThis dir hosts all the K8s manifests used during the training.\n\nSome of the chapter are well defined and numbered.\nThe rest are for specific examples and demonstration, and may not relate to a specific chapter in the training.\n"
  },
  {
    "path": "manifests/canary/README.md",
    "content": "# Canary Demo\n\nThis demonstrates using 2 deployments to test new code alongside operational code. Each deployment can be scaled to change the proportion of data sent to each.\n\nThe static files are placed in sub-directories such that configMap can give them the correct filenames by default. Mounting these is demonstrated in `configmap.sh`.\n\n```\n./configmap.sh\nkubectl create -f blue-deploy.yaml\nkubectl create -f red-deploy.yaml\nkubectl create -f redblue-svc.yaml\nkubectl create -f redblue-ingress.yaml\n```\n\nRunning this demo includes the following steps:\n* Create the configMaps.\n* Create the redblue service.\n* Create the red deployment which will be captured by the redblue service.\n* Scale the red deployment (if desired).\n* Create the blue deployment which will also be captured by the redblue service due to its label, and will be roundrobinned alongside the red deployment.\n* Scale the blue deployment (if desired).\n* Scale down the red deployment (to 0) and delete it.\n* Continue with the new deployment.\n\nN.B: This approach was taken as editing the bluered deployment to update labels left a floating rs and pods. This approach managed the pods more 'nicely'\n"
  },
  {
    "path": "manifests/canary/blue-deploy.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    run: test\n  name: blue\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      run: test\n  template:\n    metadata:\n      labels:\n        run: test\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        volumeMounts:\n        - mountPath: /usr/share/nginx/html\n          name: blue\n      volumes:\n      - name: blue\n        configMap:\n          name: blue\n"
  },
  {
    "path": "manifests/canary/blue-files/index.html",
    "content": "<html>\n<body bgcolor=\"#0000FF\">\n</body>\n</html>\n"
  },
  {
    "path": "manifests/canary/configmap.sh",
    "content": "#!/bin/sh\nkubectl create configmap red --from-file=red-files/index.html\nkubectl create configmap blue --from-file=blue-files/index.html\n"
  },
  {
    "path": "manifests/canary/red-deploy.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    run: test\n  name: red\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      run: test\n  template:\n    metadata:\n      labels:\n        run: test\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        volumeMounts:\n        - mountPath: /usr/share/nginx/html\n          name: red\n      volumes:\n      - name: red\n        configMap:\n          name: red\n"
  },
  {
    "path": "manifests/canary/red-files/index.html",
    "content": "<html>\n<body bgcolor=\"#FF0000\">\n</body>\n</html>\n"
  },
  {
    "path": "manifests/canary/redblue-ingress.yaml",
    "content": "apiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: redblue\nspec:\n  rules:\n  - host: redblue.192.168.99.100.nip.io\n    http:\n      paths:\n      - backend:\n          serviceName: redblue\n          servicePort: 80\n"
  },
  {
    "path": "manifests/canary/redblue-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: redblue\nspec:\n  selector:\n    run: test\n  ports:\n  - port: 80\n    protocol: TCP\n"
  },
  {
    "path": "manifests/configmaps/README.md",
    "content": "#### A small demo of how a configmap gets updated inside a running pod.\n\nSteps:\n \n1- Create the configmap:\n\nkubectl create -f configmap.yaml\n\n2- Create the pod\n\nkubectl create -f pod.yaml\n\n3- Get the logs from the pod:\n\nkubectl logs -f busybox\n\n4- on another terminal run the update-configmap.sh script\n\n./update-configmap.sh\n\n5- check the logs to see how the confimap data changes inside the running pod"
  },
  {
    "path": "manifests/configmaps/configmap.yaml",
    "content": "apiVersion: v1\ndata:\n  config.yaml: |\n    version: 4\n    host: www.example.com\n    ports:\n    - 80\n    - 9090\nkind: ConfigMap\nmetadata:\n  name: config-file\n"
  },
  {
    "path": "manifests/configmaps/foobar.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox\n  namespace: default\nspec:\n  containers:\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    name: busybox\n    volumeMounts:\n      - name: test\n        mountPath: /tmp/test\n  volumes:\n    - name: test\n      configMap:\n        name: foobar\n"
  },
  {
    "path": "manifests/configmaps/pod.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox\n  namespace: default\nspec:\n  containers:\n  - image: busybox\n    command:\n      - watch\n      - \"cat /etc/config/config.yaml\"\n    imagePullPolicy: IfNotPresent\n    name: busybox\n    volumeMounts:\n      - name: config-volume\n        mountPath: /etc/config\n  volumes:\n    - name: config-volume\n      configMap:\n        name: config-file\n"
  },
  {
    "path": "manifests/configmaps/update-configmap.sh",
    "content": "#!/bin/bash\nsed -i 's/\\-\\s9090/- 8888/' configmap.yaml\nkubectl apply -f configmap.yaml"
  },
  {
    "path": "manifests/init-container/init.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: init-demo2\n  labels:\n    topic: initdemo2\nspec:\n  containers:\n  - name: python\n    image: python:2.7-alpine\n    workingDir: /tmp/init\n    command:\n      - python\n      - \"-m\"\n      - SimpleHTTPServer\n    volumeMounts:\n    - name: path\n      mountPath: /tmp/init\n  initContainers:\n  - name: busybox\n    image: busybox\n    command:\n    - wget\n    - \"-O\"\n    - \"/tmp/init/index.html\"\n    - http://google.com \n    volumeMounts:\n    - name: path\n      mountPath: /tmp/init\n  volumes:\n  - name: path\n    emptyDir: {}\n"
  },
  {
    "path": "manifests/logging/allinone.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: logging\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: efk\n  namespace: logging\n---\nkind: PersistentVolume\napiVersion: v1\nmetadata:\n  name: prom001\n  labels:\n    type: local\n  namespace: logging\nspec:\n  capacity:\n    storage: 1Gi\n  accessModes:\n    - ReadWriteOnce\n  persistentVolumeReclaimPolicy: Recycle\n  hostPath:\n    path: \"/mnt/sda1/data/data00\"\n---\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: promclaim1\n  namespace: logging\nspec:\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 800M\n---\napiVersion: v1\nkind: Service\nmetadata:\n  annotations:\n    prometheus.io/scrape: 'true'\n  labels:\n    component: efk\n    name: prometheus\n  name: prometheus\n  namespace: logging\nspec:\n  selector:\n    component: efk\n    app: prometheus\n  type: NodePort\n  ports:\n  - name: prometheus\n    protocol: TCP\n    port: 9090\n    targetPort: 9090\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: prometheus\n  namespace: logging\ndata:\n  prometheus.yml: |-\n    # A scrape configuration for running Prometheus on a Kubernetes cluster.\n    # This uses separate scrape configs for cluster components (i.e. 
API server, node)\n    # and services to allow each to use different authentication configs.\n    #\n    # Kubernetes labels will be added as Prometheus labels on metrics via the\n    # `labelmap` relabeling action.\n    # Scrape config for cluster components.\n    scrape_configs:\n    - job_name: 'kubernetes-cluster'\n      scheme: https\n      tls_config:\n        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\n        insecure_skip_verify: true\n      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token\n      kubernetes_sd_configs:\n      - api_servers:\n        - 'https://kubernetes.default.svc'\n        in_cluster: true\n        role: apiserver\n    - job_name: 'kubernetes-nodes'\n      scheme: https\n      tls_config:\n        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\n        insecure_skip_verify: true\n      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token\n      kubernetes_sd_configs:\n      - api_servers:\n        - 'https://kubernetes.default.svc'\n        in_cluster: true\n        role: node\n      relabel_configs:\n      - action: labelmap\n        regex: __meta_kubernetes_node_label_(.+)\n    - job_name: 'kubernetes-service-endpoints'\n      kubernetes_sd_configs:\n      - api_servers:\n        - 'https://kubernetes.default.svc'\n        in_cluster: true\n        role: endpoint\n      relabel_configs:\n      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]\n        action: keep\n        regex: true\n      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]\n        action: replace\n        target_label: __scheme__\n        regex: (https?)\n      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]\n        action: replace\n        target_label: __metrics_path__\n        regex: (.+)\n      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]\n        action: replace\n   
     target_label: __address__\n        regex: (.+)(?::\\d+);(\\d+)\n        replacement: $1:$2\n      - action: labelmap\n        regex: __meta_kubernetes_endpoint_label_(.+)\n      - source_labels: [__meta_kubernetes_service_namespace]\n        action: replace\n        target_label: kubernetes_namespace\n      - source_labels: [__meta_kubernetes_service_name]\n        action: replace\n        target_label: kubernetes_name\n    - job_name: 'kubernetes-service-probes'\n      metrics_path: /probe\n      params:\n        module: [http_2xx]\n      kubernetes_sd_configs:\n      - api_servers:\n        - 'https://kubernetes.default.svc'\n        in_cluster: true\n        role: service\n      relabel_configs:\n      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]\n        action: keep\n        regex: true\n      - source_labels: [__address__]\n        regex: (.*)(:80)?\n        target_label: __param_target\n        replacement: ${1}\n      - source_labels: [__param_target]\n        regex: (.*)\n        target_label: instance\n        replacement: ${1}\n      - source_labels: []\n        regex: .*\n        target_label: __address__\n        replacement: blackbox:9115  # Blackbox exporter.\n      - action: labelmap\n        regex: __meta_kubernetes_service_label_(.+)\n      - source_labels: [__meta_kubernetes_service_namespace]\n        target_label: kubernetes_namespace\n      - source_labels: [__meta_kubernetes_service_name]\n        target_label: kubernetes_name\n---\napiVersion: extensions/v1beta1\nkind: Deployment\nmetadata:\n  name: prometheus\n  namespace: logging\n  labels:\n    component: efk\n    app: prometheus\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      component: efk\n      app: prometheus\n  template:\n    metadata:\n      name: prometheus\n      labels:\n        component: efk\n        app: prometheus\n    spec:\n      serviceAccount: efk\n      containers:\n      - name: prometheus\n        image: 
prom/prometheus:latest\n        args:\n          - '-config.file=/etc/prometheus/prometheus.yml'\n        ports:\n        - name: web\n          containerPort: 9090\n        livenessProbe:\n          httpGet:\n            path: /metrics\n            port: 9090\n          initialDelaySeconds: 15\n          timeoutSeconds: 1\n        volumeMounts:\n        - name: config-volume\n          mountPath: /etc/prometheus\n        - name: prompd\n          mountPath: \"/prometheus/data\"\n      volumes:\n      - name: config-volume\n        configMap:\n          name: prometheus\n      - name: prompd\n        persistentVolumeClaim:\n          claimName: promclaim1\n---\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    component: efk\n    name: grafana\n  name: grafana\n  namespace: logging\nspec:\n  selector:\n    component: efk\n    app: grafana\n  type: NodePort\n  ports:\n  - name: grafana\n    protocol: TCP\n    port: 3000\n    targetPort: 3000\n---\napiVersion: extensions/v1beta1\nkind: Deployment\nmetadata:\n  name: grafana\n  namespace: logging\n  labels:\n    component: efk\n    app: grafana\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      component: efk\n      app: grafana\n  template:\n    metadata:\n      name: grafana\n      labels:\n        component: efk\n        app: grafana\n    spec:\n      serviceAccount: efk\n      containers:\n      - name: grafana\n        image: grafana/grafana\n        ports:\n        - name: web\n          containerPort: 3000\n        volumeMounts:\n        - name: config-volume\n          mountPath: /var/lib/grafana/dashboards\n        - name: ini-volume\n          mountPath: /etc/grafana\n      volumes:\n      - name: config-volume\n        configMap:\n          name: grafana\n      - name: ini-volume\n        configMap:\n          name: ini"
  },
  {
    "path": "manifests/logging/configs.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ini\n  namespace: logging\ndata:\n  grafana.ini: |-\n    ##################### Grafana Configuration Example #####################\n    #\n    # Everything has defaults so you only need to uncomment things you want to\n    # change\n\n    # possible values : production, development\n    ; app_mode = production"
  },
  {
    "path": "manifests/logging/dashboards.json",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: grafana\n  namespace: logging\ndata:\n  dashboards.json: |-\n"
  },
  {
    "path": "manifests/logging/dashboards.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: grafana\n  namespace: logging\ndata:\n  dashboards.json: |-\n    {\n      \"__inputs\": [\n        {\n          \"name\": \"DS_PROMETHEUS\",\n          \"label\": \"prometheus\",\n          \"description\": \"\",\n          \"type\": \"datasource\",\n          \"pluginId\": \"prometheus\",\n          \"pluginName\": \"Prometheus\"\n        }\n      ],\n      \"__requires\": [\n        {\n          \"type\": \"panel\",\n          \"id\": \"singlestat\",\n          \"name\": \"Singlestat\",\n          \"version\": \"\"\n        },\n        {\n          \"type\": \"panel\",\n          \"id\": \"graph\",\n          \"name\": \"Graph\",\n          \"version\": \"\"\n        },\n        {\n          \"type\": \"grafana\",\n          \"id\": \"grafana\",\n          \"name\": \"Grafana\",\n          \"version\": \"3.1.1\"\n        },\n        {\n          \"type\": \"datasource\",\n          \"id\": \"prometheus\",\n          \"name\": \"Prometheus\",\n          \"version\": \"1.0.0\"\n        }\n      ],\n      \"id\": null,\n      \"title\": \"Kubernetes\",\n      \"tags\": [],\n      \"style\": \"dark\",\n      \"timezone\": \"browser\",\n      \"editable\": true,\n      \"hideControls\": false,\n      \"sharedCrosshair\": false,\n      \"rows\": [\n        {\n          \"title\": \"New row\",\n          \"height\": \"250px\",\n          \"editable\": true,\n          \"collapse\": false,\n          \"panels\": [\n            {\n              \"title\": \"Number of Pods\",\n              \"error\": false,\n              \"span\": 12,\n              \"editable\": true,\n              \"type\": \"singlestat\",\n              \"isNew\": true,\n              \"id\": 2,\n              \"targets\": [\n                {\n                  \"refId\": \"A\",\n                  \"expr\": \"count(count(container_start_time_seconds{io_kubernetes_pod_name!=\\\"\\\"}) by (io_kubernetes_pod_name))\",\n                 
 \"intervalFactor\": 2,\n                  \"step\": 240\n                }\n              ],\n              \"links\": [],\n              \"datasource\": \"${DS_PROMETHEUS}\",\n              \"maxDataPoints\": 100,\n              \"interval\": null,\n              \"cacheTimeout\": null,\n              \"format\": \"none\",\n              \"prefix\": \"\",\n              \"postfix\": \"\",\n              \"nullText\": null,\n              \"valueMaps\": [\n                {\n                  \"value\": \"null\",\n                  \"op\": \"=\",\n                  \"text\": \"N/A\"\n                }\n              ],\n              \"mappingTypes\": [\n                {\n                  \"name\": \"value to text\",\n                  \"value\": 1\n                },\n                {\n                  \"name\": \"range to text\",\n                  \"value\": 2\n                }\n              ],\n              \"rangeMaps\": [\n                {\n                  \"from\": \"null\",\n                  \"to\": \"null\",\n                  \"text\": \"N/A\"\n                }\n              ],\n              \"mappingType\": 1,\n              \"nullPointMode\": \"connected\",\n              \"valueName\": \"avg\",\n              \"prefixFontSize\": \"50%\",\n              \"valueFontSize\": \"80%\",\n              \"postfixFontSize\": \"50%\",\n              \"thresholds\": \"20,25\",\n              \"colorBackground\": false,\n              \"colorValue\": false,\n              \"colors\": [\n                \"rgba(50, 172, 45, 0.97)\",\n                \"rgba(237, 129, 40, 0.89)\",\n                \"rgba(245, 54, 54, 0.9)\"\n              ],\n              \"sparkline\": {\n                \"show\": false,\n                \"full\": false,\n                \"lineColor\": \"rgb(31, 120, 193)\",\n                \"fillColor\": \"rgba(31, 118, 189, 0.18)\"\n              },\n              \"gauge\": {\n                \"show\": true,\n                
\"minValue\": 0,\n                \"maxValue\": 30,\n                \"thresholdMarkers\": true,\n                \"thresholdLabels\": false\n              }\n            }\n          ]\n        },\n        {\n          \"collapse\": false,\n          \"editable\": true,\n          \"height\": \"250px\",\n          \"panels\": [\n            {\n              \"aliasColors\": {},\n              \"bars\": false,\n              \"datasource\": \"${DS_PROMETHEUS}\",\n              \"editable\": true,\n              \"error\": false,\n              \"fill\": 1,\n              \"grid\": {\n                \"threshold1\": null,\n                \"threshold1Color\": \"rgba(216, 200, 27, 0.27)\",\n                \"threshold2\": null,\n                \"threshold2Color\": \"rgba(234, 112, 112, 0.22)\"\n              },\n              \"id\": 1,\n              \"isNew\": true,\n              \"legend\": {\n                \"avg\": false,\n                \"current\": false,\n                \"max\": false,\n                \"min\": false,\n                \"show\": true,\n                \"total\": false,\n                \"values\": false\n              },\n              \"lines\": true,\n              \"linewidth\": 2,\n              \"links\": [],\n              \"nullPointMode\": \"connected\",\n              \"percentage\": false,\n              \"pointradius\": 5,\n              \"points\": false,\n              \"renderer\": \"flot\",\n              \"seriesOverrides\": [],\n              \"span\": 12,\n              \"stack\": false,\n              \"steppedLine\": false,\n              \"targets\": [\n                {\n                  \"expr\": \"count(count(container_start_time_seconds{io_kubernetes_pod_name!=\\\"\\\"}) by (io_kubernetes_pod_name))\",\n                  \"intervalFactor\": 2,\n                  \"legendFormat\": \"\",\n                  \"refId\": \"A\",\n                  \"step\": 20\n                }\n              ],\n              
\"timeFrom\": null,\n              \"timeShift\": null,\n              \"title\": \"Pods\",\n              \"tooltip\": {\n                \"msResolution\": true,\n                \"shared\": true,\n                \"sort\": 0,\n                \"value_type\": \"cumulative\"\n              },\n              \"type\": \"graph\",\n              \"xaxis\": {\n                \"show\": true\n              },\n              \"yaxes\": [\n                {\n                  \"format\": \"short\",\n                  \"label\": null,\n                  \"logBase\": 1,\n                  \"max\": null,\n                  \"min\": null,\n                  \"show\": true\n                },\n                {\n                  \"format\": \"short\",\n                  \"label\": null,\n                  \"logBase\": 1,\n                  \"max\": null,\n                  \"min\": null,\n                  \"show\": true\n                }\n              ]\n            }\n          ],\n          \"title\": \"Row\"\n        }\n      ],\n      \"time\": {\n        \"from\": \"now-3h\",\n        \"to\": \"now\"\n      },\n      \"timepicker\": {\n        \"refresh_intervals\": [\n          \"5s\",\n          \"10s\",\n          \"30s\",\n          \"1m\",\n          \"5m\",\n          \"15m\",\n          \"30m\",\n          \"1h\",\n          \"2h\",\n          \"1d\"\n        ],\n        \"time_options\": [\n          \"5m\",\n          \"15m\",\n          \"1h\",\n          \"6h\",\n          \"12h\",\n          \"24h\",\n          \"2d\",\n          \"7d\",\n          \"30d\"\n        ]\n      },\n      \"templating\": {\n        \"list\": []\n      },\n      \"annotations\": {\n        \"list\": []\n      },\n      \"refresh\": \"5s\",\n      \"schemaVersion\": 12,\n      \"version\": 5,\n      \"links\": [],\n      \"gnetId\": null\n    }"
  },
  {
    "path": "manifests/logging/grafana.ini",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ini\n  namespace: logging\ndata:\n  grafana.ini: |-\n    ##################### Grafana Configuration Example #####################\n    #\n    # Everything has defaults so you only need to uncomment things you want to\n    # change\n\n    # possible values : production, development\n    ; app_mode = production\n\n    # instance name, defaults to HOSTNAME environment variable value or hostname if HOSTNAME var is empty\n    ; instance_name = ${HOSTNAME}\n\n    #################################### Paths ####################################\n    [paths]\n    # Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)\n    #\n    ;data = /var/lib/grafana\n    #\n    # Directory where grafana can store logs\n    #\n    ;logs = /var/log/grafana\n    #\n    # Directory where grafana will automatically scan and look for plugins\n    #\n    ;plugins = /var/lib/grafana/plugins\n\n    #\n    #################################### Server ####################################[server]\n    # Protocol (http or https)\n    ;protocol = http\n\n    # The ip address to bind to, empty will bind to all interfaces\n    ;http_addr =\n\n    # The http port  to use\n    ;http_port = 3000\n\n    # The public facing domain name used to access grafana from a browser\n    ;domain = localhost\n\n    # Redirect to correct domain if host header does not match domain\n    # Prevents DNS rebinding attacks\n    ;enforce_domain = false\n\n    # The full public facing url\n    ;root_url = %(protocol)s://%(domain)s:%(http_port)s/\n\n    # Log web requests\n    ;router_logging = false\n\n    # the path relative working path\n    ;static_root_path = public\n\n    # enable gzip\n    ;enable_gzip = false\n\n    # https certs & key file\n    ;cert_file =\n    ;cert_key =\n\n    #################################### Database ####################################\n    [database]\n    # Either \"mysql\", \"postgres\" 
or \"sqlite3\", it's your choice\n    ;type = sqlite3\n    ;host = 127.0.0.1:3306\n    ;name = grafana\n    ;user = root \n    ;password =\n\n    # For \"postgres\" only, either \"disable\", \"require\" or \"verify-full\"\n    ;ssl_mode = disable\n\n    # For \"sqlite3\" only, path relative to data_path setting\n    ;path = grafana.db\n\n    #################################### Session ####################################\n    [session]\n    # Either \"memory\", \"file\", \"redis\", \"mysql\", \"postgres\", default is \"file\"\n    ;provider = file\n\n    # Provider config options\n    # memory: not have any config yet\n    # file: session dir path, is relative to grafana data_path\n    # redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=grafana`\n    # mysql: go-sql-driver/mysql dsn config string, e.g. `user:password@tcp(127.0.0.1:3306)/database_name`\n    # postgres: user=a password=b host=localhost port=5432 dbname=c sslmode=disable\n    ;provider_config = sessions\n\n    # Session cookie name\n    ;cookie_name = grafana_sess\n\n    # If you use session in https only, default is false\n    ;cookie_secure = false\n\n    # Session life time, default is 86400\n    ;session_life_time = 86400\n\n    #################################### Analytics ####################################\n    [analytics]\n    # Server reporting, sends usage counters to stats.grafana.org every 24 hours.\n    # No ip addresses are being tracked, only simple counters to track\n    # running instances, dashboard and error counts. 
It is very helpful to us.\n    # Change this option to false to disable reporting.\n    ;reporting_enabled = true\n\n    # Set to false to disable all checks to https://grafana.net\n    # for new versions (grafana itself and plugins), check is used\n    # in some UI views to notify that grafana or plugin update exists\n    # This option does not cause any auto updates, nor send any information\n    # only a GET request to http://grafana.net to get latest versions\n    check_for_updates = true\n\n    # Google Analytics universal tracking code, only enabled if you specify an id here\n    ;google_analytics_ua_id =\n\n    #################################### Security ####################################\n    [security]\n    # default admin user, created on startup\n    ;admin_user = admin\n\n    # default admin password, can be changed before first start of grafana, or in profile settings\n    ;admin_password = admin\n\n    # used for signing\n    ;secret_key = SW2YcwTIb9zpOOhoPsMm\n\n    # Auto-login remember days\n    ;login_remember_days = 7\n    ;cookie_username = grafana_user\n    ;cookie_remember_name = grafana_remember\n\n    # disable gravatar profile images\n    ;disable_gravatar = false\n\n    # data source proxy whitelist (ip_or_domain:port separated by spaces)\n    ;data_source_proxy_whitelist =\n\n    [snapshots]\n    # snapshot sharing options\n    ;external_enabled = true\n    ;external_snapshot_url = https://snapshots-origin.raintank.io\n    ;external_snapshot_name = Publish to snapshot.raintank.io\n\n    #################################### Users ####################################\n    [users]\n    # disable user signup / registration\n    ;allow_sign_up = true\n\n    # Allow non admin users to create organizations\n    ;allow_org_create = true\n\n    # Set to true to automatically assign new users to the default organization (id 1)\n    ;auto_assign_org = true\n\n    # Default role new users will be automatically assigned (if disabled above is set 
to true)\n    ;auto_assign_org_role = Viewer\n\n    # Background text for the user field on the login page\n    ;login_hint = email or username\n\n    # Default UI theme (\"dark\" or \"light\")\n    ;default_theme = dark\n\n    #################################### Anonymous Auth ##########################\n    [auth.anonymous]\n    # enable anonymous access\n    ;enabled = false\n\n    # specify organization name that should be used for unauthenticated users\n    ;org_name = Main Org.\n\n    # specify role for unauthenticated users\n    ;org_role = Viewer\n\n    #################################### Github Auth ##########################\n    [auth.github]\n    ;enabled = false\n    ;allow_sign_up = false\n    ;client_id = some_id\n    ;client_secret = some_secret\n    ;scopes = user:email,read:org\n    ;auth_url = https://github.com/login/oauth/authorize\n    ;token_url = https://github.com/login/oauth/access_token\n    ;api_url = https://api.github.com/user\n    ;team_ids =\n    ;allowed_organizations =\n\n    #################################### Google Auth ##########################\n    [auth.google]\n    ;enabled = false\n    ;allow_sign_up = false\n    ;client_id = some_client_id\n    ;client_secret = some_client_secret\n    ;scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email\n    ;auth_url = https://accounts.google.com/o/oauth2/auth\n    ;token_url = https://accounts.google.com/o/oauth2/token\n    ;api_url = https://www.googleapis.com/oauth2/v1/userinfo\n    ;allowed_domains =\n\n    #################################### Auth Proxy ##########################\n    [auth.proxy]\n    ;enabled = false\n    ;header_name = X-WEBAUTH-USER\n    ;header_property = username\n    ;auto_sign_up = true\n\n    #################################### Basic Auth ##########################\n    [auth.basic]\n    ;enabled = true\n\n    #################################### Auth LDAP ##########################\n    [auth.ldap]\n 
   ;enabled = false\n    ;config_file = /etc/grafana/ldap.toml\n\n    #################################### SMTP / Emailing ##########################\n    [smtp]\n    ;enabled = false\n    ;host = localhost:25\n    ;user =\n    ;password =\n    ;cert_file =\n    ;key_file =\n    ;skip_verify = false\n    ;from_address = admin@grafana.localhost\n\n    [emails]\n    ;welcome_email_on_sign_up = false\n\n    #################################### Logging ##########################\n    [log]\n    # Either \"console\", \"file\", \"syslog\". Default is console and  file\n    # Use space to separate multiple modes, e.g. \"console file\"\n    ;mode = console, file\n\n    # Either \"trace\", \"debug\", \"info\", \"warn\", \"error\", \"critical\", default is \"info\"\n    ;level = info\n\n    # For \"console\" mode only\n    [log.console]\n    ;level =\n\n    # log line format, valid options are text, console and json\n    ;format = console\n\n    # For \"file\" mode only\n    [log.file]\n    ;level =\n\n    # log line format, valid options are text, console and json\n    ;format = text\n\n    # This enables automated log rotate(switch of following options), default is true\n    ;log_rotate = true\n\n    # Max line number of single file, default is 1000000\n    ;max_lines = 1000000\n\n    # Max size shift of single file, default is 28 means 1 << 28, 256MB\n    ;max_size_shift = 28\n\n    # Segment log daily, default is true\n    ;daily_rotate = true\n\n    # Expired days of log file(delete after max days), default is 7\n    ;max_days = 7\n\n    [log.syslog]\n    ;level =\n\n    # log line format, valid options are text, console and json\n    ;format = text\n\n    # Syslog network type and address. This can be udp, tcp, or unix. If left blank, the default unix endpoints will be used.\n    ;network =\n    ;address =\n\n    # Syslog facility. user, daemon and local0 through local7 are valid.\n    ;facility =\n\n    # Syslog tag. 
By default, the process' argv[0] is used.\n    ;tag =\n\n    #################################### AMQP Event Publisher ##########################\n    [event_publisher]\n    ;enabled = false\n    ;rabbitmq_url = amqp://localhost/\n    ;exchange = grafana_events\n\n    #################################### Dashboard JSON files ##########################\n    [dashboards.json]\n    ;enabled = true\n    ;path = /var/lib/grafana/dashboards\n\n    #################################### Internal Grafana Metrics ##########################\n    # Metrics available at HTTP API Url /api/metrics\n    [metrics]\n    # Disable / Enable internal metrics\n    ;enabled           = true\n\n    # Publish interval\n    ;interval_seconds  = 10\n\n    # Send internal metrics to Graphite\n    ; [metrics.graphite]\n    ; address = localhost:2003\n    ; prefix = prod.grafana.%(instance_name)s.\n\n    #################################### Grafana.net ##########################\n    # Url used to import dashboards directly from Grafana.net\n    [grafana_net]\n    url = https://grafana.net"
  },
  {
    "path": "manifests/logging/grafana.json",
    "content": "{\n  \"__inputs\": [\n    {\n      \"name\": \"DS_PROMETHEUS\",\n      \"label\": \"prometheus\",\n      \"description\": \"\",\n      \"type\": \"datasource\",\n      \"pluginId\": \"prometheus\",\n      \"pluginName\": \"Prometheus\"\n    }\n  ],\n  \"__requires\": [\n    {\n      \"type\": \"panel\",\n      \"id\": \"graph\",\n      \"name\": \"Graph\",\n      \"version\": \"\"\n    },\n    {\n      \"type\": \"grafana\",\n      \"id\": \"grafana\",\n      \"name\": \"Grafana\",\n      \"version\": \"3.1.1\"\n    },\n    {\n      \"type\": \"datasource\",\n      \"id\": \"prometheus\",\n      \"name\": \"Prometheus\",\n      \"version\": \"1.0.0\"\n    }\n  ],\n  \"id\": null,\n  \"title\": \"Kubernetes\",\n  \"tags\": [],\n  \"style\": \"dark\",\n  \"timezone\": \"browser\",\n  \"editable\": true,\n  \"hideControls\": false,\n  \"sharedCrosshair\": false,\n  \"rows\": [\n    {\n      \"collapse\": false,\n      \"editable\": true,\n      \"height\": \"250px\",\n      \"panels\": [\n        {\n          \"aliasColors\": {},\n          \"bars\": false,\n          \"datasource\": \"${DS_PROMETHEUS}\",\n          \"editable\": true,\n          \"error\": false,\n          \"fill\": 1,\n          \"grid\": {\n            \"threshold1\": null,\n            \"threshold1Color\": \"rgba(216, 200, 27, 0.27)\",\n            \"threshold2\": null,\n            \"threshold2Color\": \"rgba(234, 112, 112, 0.22)\"\n          },\n          \"id\": 1,\n          \"isNew\": true,\n          \"legend\": {\n            \"avg\": false,\n            \"current\": false,\n            \"max\": false,\n            \"min\": false,\n            \"show\": true,\n            \"total\": false,\n            \"values\": false\n          },\n          \"lines\": true,\n          \"linewidth\": 2,\n          \"links\": [],\n          \"nullPointMode\": \"connected\",\n          \"percentage\": false,\n          \"pointradius\": 5,\n          \"points\": false,\n          
\"renderer\": \"flot\",\n          \"seriesOverrides\": [],\n          \"span\": 12,\n          \"stack\": false,\n          \"steppedLine\": false,\n          \"targets\": [\n            {\n              \"expr\": \"count(count(container_start_time_seconds{io_kubernetes_pod_name!=\\\"\\\"}) by (io_kubernetes_pod_name))\",\n              \"intervalFactor\": 2,\n              \"legendFormat\": \"\",\n              \"refId\": \"A\",\n              \"step\": 20\n            }\n          ],\n          \"timeFrom\": null,\n          \"timeShift\": null,\n          \"title\": \"Pods\",\n          \"tooltip\": {\n            \"msResolution\": true,\n            \"shared\": true,\n            \"sort\": 0,\n            \"value_type\": \"cumulative\"\n          },\n          \"type\": \"graph\",\n          \"xaxis\": {\n            \"show\": true\n          },\n          \"yaxes\": [\n            {\n              \"format\": \"short\",\n              \"label\": null,\n              \"logBase\": 1,\n              \"max\": null,\n              \"min\": null,\n              \"show\": true\n            },\n            {\n              \"format\": \"short\",\n              \"label\": null,\n              \"logBase\": 1,\n              \"max\": null,\n              \"min\": null,\n              \"show\": true\n            }\n          ]\n        }\n      ],\n      \"title\": \"Row\"\n    }\n  ],\n  \"time\": {\n    \"from\": \"now-3h\",\n    \"to\": \"now\"\n  },\n  \"timepicker\": {\n    \"refresh_intervals\": [\n      \"5s\",\n      \"10s\",\n      \"30s\",\n      \"1m\",\n      \"5m\",\n      \"15m\",\n      \"30m\",\n      \"1h\",\n      \"2h\",\n      \"1d\"\n    ],\n    \"time_options\": [\n      \"5m\",\n      \"15m\",\n      \"1h\",\n      \"6h\",\n      \"12h\",\n      \"24h\",\n      \"2d\",\n      \"7d\",\n      \"30d\"\n    ]\n  },\n  \"templating\": {\n    \"list\": []\n  },\n  \"annotations\": {\n    \"list\": []\n  },\n  \"refresh\": \"5s\",\n  \"schemaVersion\": 
12,\n  \"version\": 3,\n  \"links\": [],\n  \"gnetId\": null\n}\n"
  },
  {
    "path": "manifests/logging/grafana2.json",
    "content": "{\n  \"__inputs\": [\n    {\n      \"name\": \"DS_PROMETHEUS\",\n      \"label\": \"prometheus\",\n      \"description\": \"\",\n      \"type\": \"datasource\",\n      \"pluginId\": \"prometheus\",\n      \"pluginName\": \"Prometheus\"\n    }\n  ],\n  \"__requires\": [\n    {\n      \"type\": \"panel\",\n      \"id\": \"singlestat\",\n      \"name\": \"Singlestat\",\n      \"version\": \"\"\n    },\n    {\n      \"type\": \"panel\",\n      \"id\": \"graph\",\n      \"name\": \"Graph\",\n      \"version\": \"\"\n    },\n    {\n      \"type\": \"grafana\",\n      \"id\": \"grafana\",\n      \"name\": \"Grafana\",\n      \"version\": \"3.1.1\"\n    },\n    {\n      \"type\": \"datasource\",\n      \"id\": \"prometheus\",\n      \"name\": \"Prometheus\",\n      \"version\": \"1.0.0\"\n    }\n  ],\n  \"id\": null,\n  \"title\": \"Kubernetes\",\n  \"tags\": [],\n  \"style\": \"dark\",\n  \"timezone\": \"browser\",\n  \"editable\": true,\n  \"hideControls\": false,\n  \"sharedCrosshair\": false,\n  \"rows\": [\n    {\n      \"title\": \"New row\",\n      \"height\": \"250px\",\n      \"editable\": true,\n      \"collapse\": false,\n      \"panels\": [\n        {\n          \"title\": \"Number of Pods\",\n          \"error\": false,\n          \"span\": 12,\n          \"editable\": true,\n          \"type\": \"singlestat\",\n          \"isNew\": true,\n          \"id\": 2,\n          \"targets\": [\n            {\n              \"refId\": \"A\",\n              \"expr\": \"count(count(container_start_time_seconds{io_kubernetes_pod_name!=\\\"\\\"}) by (io_kubernetes_pod_name))\",\n              \"intervalFactor\": 2,\n              \"step\": 240\n            }\n          ],\n          \"links\": [],\n          \"datasource\": \"${DS_PROMETHEUS}\",\n          \"maxDataPoints\": 100,\n          \"interval\": null,\n          \"cacheTimeout\": null,\n          \"format\": \"none\",\n          \"prefix\": \"\",\n          \"postfix\": \"\",\n          
\"nullText\": null,\n          \"valueMaps\": [\n            {\n              \"value\": \"null\",\n              \"op\": \"=\",\n              \"text\": \"N/A\"\n            }\n          ],\n          \"mappingTypes\": [\n            {\n              \"name\": \"value to text\",\n              \"value\": 1\n            },\n            {\n              \"name\": \"range to text\",\n              \"value\": 2\n            }\n          ],\n          \"rangeMaps\": [\n            {\n              \"from\": \"null\",\n              \"to\": \"null\",\n              \"text\": \"N/A\"\n            }\n          ],\n          \"mappingType\": 1,\n          \"nullPointMode\": \"connected\",\n          \"valueName\": \"avg\",\n          \"prefixFontSize\": \"50%\",\n          \"valueFontSize\": \"80%\",\n          \"postfixFontSize\": \"50%\",\n          \"thresholds\": \"20,25\",\n          \"colorBackground\": false,\n          \"colorValue\": false,\n          \"colors\": [\n            \"rgba(50, 172, 45, 0.97)\",\n            \"rgba(237, 129, 40, 0.89)\",\n            \"rgba(245, 54, 54, 0.9)\"\n          ],\n          \"sparkline\": {\n            \"show\": false,\n            \"full\": false,\n            \"lineColor\": \"rgb(31, 120, 193)\",\n            \"fillColor\": \"rgba(31, 118, 189, 0.18)\"\n          },\n          \"gauge\": {\n            \"show\": true,\n            \"minValue\": 0,\n            \"maxValue\": 30,\n            \"thresholdMarkers\": true,\n            \"thresholdLabels\": false\n          }\n        }\n      ]\n    },\n    {\n      \"collapse\": false,\n      \"editable\": true,\n      \"height\": \"250px\",\n      \"panels\": [\n        {\n          \"aliasColors\": {},\n          \"bars\": false,\n          \"datasource\": \"${DS_PROMETHEUS}\",\n          \"editable\": true,\n          \"error\": false,\n          \"fill\": 1,\n          \"grid\": {\n            \"threshold1\": null,\n            \"threshold1Color\": \"rgba(216, 200, 27, 
0.27)\",\n            \"threshold2\": null,\n            \"threshold2Color\": \"rgba(234, 112, 112, 0.22)\"\n          },\n          \"id\": 1,\n          \"isNew\": true,\n          \"legend\": {\n            \"avg\": false,\n            \"current\": false,\n            \"max\": false,\n            \"min\": false,\n            \"show\": true,\n            \"total\": false,\n            \"values\": false\n          },\n          \"lines\": true,\n          \"linewidth\": 2,\n          \"links\": [],\n          \"nullPointMode\": \"connected\",\n          \"percentage\": false,\n          \"pointradius\": 5,\n          \"points\": false,\n          \"renderer\": \"flot\",\n          \"seriesOverrides\": [],\n          \"span\": 12,\n          \"stack\": false,\n          \"steppedLine\": false,\n          \"targets\": [\n            {\n              \"expr\": \"count(count(container_start_time_seconds{io_kubernetes_pod_name!=\\\"\\\"}) by (io_kubernetes_pod_name))\",\n              \"intervalFactor\": 2,\n              \"legendFormat\": \"\",\n              \"refId\": \"A\",\n              \"step\": 20\n            }\n          ],\n          \"timeFrom\": null,\n          \"timeShift\": null,\n          \"title\": \"Pods\",\n          \"tooltip\": {\n            \"msResolution\": true,\n            \"shared\": true,\n            \"sort\": 0,\n            \"value_type\": \"cumulative\"\n          },\n          \"type\": \"graph\",\n          \"xaxis\": {\n            \"show\": true\n          },\n          \"yaxes\": [\n            {\n              \"format\": \"short\",\n              \"label\": null,\n              \"logBase\": 1,\n              \"max\": null,\n              \"min\": null,\n              \"show\": true\n            },\n            {\n              \"format\": \"short\",\n              \"label\": null,\n              \"logBase\": 1,\n              \"max\": null,\n              \"min\": null,\n              \"show\": true\n            }\n          
]\n        }\n      ],\n      \"title\": \"Row\"\n    }\n  ],\n  \"time\": {\n    \"from\": \"now-3h\",\n    \"to\": \"now\"\n  },\n  \"timepicker\": {\n    \"refresh_intervals\": [\n      \"5s\",\n      \"10s\",\n      \"30s\",\n      \"1m\",\n      \"5m\",\n      \"15m\",\n      \"30m\",\n      \"1h\",\n      \"2h\",\n      \"1d\"\n    ],\n    \"time_options\": [\n      \"5m\",\n      \"15m\",\n      \"1h\",\n      \"6h\",\n      \"12h\",\n      \"24h\",\n      \"2d\",\n      \"7d\",\n      \"30d\"\n    ]\n  },\n  \"templating\": {\n    \"list\": []\n  },\n  \"annotations\": {\n    \"list\": []\n  },\n  \"refresh\": \"5s\",\n  \"schemaVersion\": 12,\n  \"version\": 5,\n  \"links\": [],\n  \"gnetId\": null\n}\n"
  },
  {
    "path": "manifests/nodeselector/pod-to-arch-amd64.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox-to-amd64\n  namespace: default\nspec:\n  containers:\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    imagePullPolicy: IfNotPresent\n    name: busybox\n  nodeSelector:\n    beta.kubernetes.io/arch: amd64\n"
  },
  {
    "path": "manifests/old/1605207/configmap.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: bleiman\n  labels:\n    app: vol\nspec:\n  containers:\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    volumeMounts:\n    - mountPath: /kubecon\n      name: test\n    imagePullPolicy: IfNotPresent\n    name: busybox\n  restartPolicy: Always\n  volumes:\n  - name: test\n    configMap:\n      name: foobar\n"
  },
  {
    "path": "manifests/old/1605207/foobar.yml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: foobar\n  namespace: oreilly\nspec:\n  containers:\n  - image: nginx\n    name: nginx\n"
  },
  {
    "path": "manifests/old/1605207/game-svc.yml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: game\n  namespace: default\nspec:\n  selector:\n    app: game\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n  type: NodePort\n"
  },
  {
    "path": "manifests/old/1605207/game.yml",
    "content": "apiVersion: extensions/v1beta1\nkind: Deployment\nmetadata:\n  name: game\n  namespace: default\nspec:\n  replicas: 2\n  template:\n    metadata:\n      name: game\n      namespace: default\n      labels:\n        app: game\n    spec:\n      containers:\n        - image: runseb/2048\n          name: game\n"
  },
  {
    "path": "manifests/old/1605207/hostpath.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: hostpath\nspec:\n  containers:\n  - image: busybox\n    name: busybox\n    command:\n    - sleep\n    - \"3600\"\n    volumeMounts:\n    - mountPath: /oreilly\n      name: hostpath\n  volumes:\n  - name: hostpath\n    hostPath:\n      path: /data\n"
  },
  {
    "path": "manifests/old/1605207/mysql.yml",
    "content": "kind: Pod\napiVersion: v1\nmetadata:\n  name: mysql-pvc\nspec:\n  volumes:\n    - name: data\n      persistentVolumeClaim:\n        claimName: myclaim\n  containers:\n    - name: mysql-pvc\n      image: \"mysql:5.5\"\n      env:\n      - name: MYSQL_ROOT_PASSWORD\n        value: root\n      volumeMounts:\n      - mountPath: \"/var/lib/mysql\"\n        name: data\n"
  },
  {
    "path": "manifests/old/1605207/nb.yml",
    "content": "apiVersion: cool.io/v1\nkind: NoteBook\nmetadata:\n  name: crazy\n  labels:\n    kubernetes: rocks\n"
  },
  {
    "path": "manifests/old/1605207/notebooks.yml",
    "content": "apiVersion: extensions/v1beta1\nkind: ThirdPartyResource\nmetadata:\n  name: note-book.cool.io\ndescription: \"A notebook\"\nversions:\n- name: v1\n"
  },
  {
    "path": "manifests/old/1605207/pvc.yaml",
    "content": "kind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: myclaim\nspec:\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 1Gi\n"
  },
  {
    "path": "manifests/old/1605207/volumes.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: vol\n  labels:\n    app: vol\nspec:\n  containers:\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    volumeMounts:\n    - mountPath: /busy\n      name: test\n    imagePullPolicy: IfNotPresent\n    name: busy\n  - image: busybox\n    command:\n      - sleep\n      - \"3600\"\n    volumeMounts:\n    - mountPath: /box\n      name: test\n    imagePullPolicy: IfNotPresent\n    name: box\n  restartPolicy: Always\n  volumes:\n  - name: test\n    emptyDir: {}\n"
  },
  {
    "path": "manifests/scheduling/README.md",
    "content": "\n\n```\ncurl -H \"Content-Type:application/json\" -X POST --data @binding.json http://localhost:8080/api/v1/namespaces/default/pods/foobar-sched/binding/\n```\n"
  },
  {
    "path": "manifests/scheduling/binding.json",
    "content": "{\n  \"apiVersion\": \"v1\",\n  \"kind\": \"Binding\",\n  \"metadata\": {\n    \"name\": \"foobar\"\n  },\n  \"target\": {\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Node\",\n    \"name\": \"minikube\"\n  }\n}\n"
  },
  {
    "path": "manifests/scheduling/foobar.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: toto\nspec:\n  schedulerName: foobar\n  containers:\n  - name: redis\n    image: redis\n"
  },
  {
    "path": "manifests/scheduling/redis-sched.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: foobar-sched\nspec:\n  schedulerName: foobar\n  containers:\n  - name: redis\n    image: redis\n"
  },
  {
    "path": "manifests/scheduling/redis-selector.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: foobar-node\nspec:\n  containers:\n  - name: redis\n    image: redis\n  nodeSelector:\n    foo: bar\n"
  },
  {
    "path": "manifests/scheduling/redis.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: foobar\nspec:\n  containers:\n  - name: redis\n    image: redis\n"
  },
  {
    "path": "manifests/scheduling/scheduler.py",
    "content": "#!/usr/bin/env python\n\nimport time\nimport random\nimport json\n\nfrom kubernetes import client, config, watch\n\nconfig.load_kube_config()\nv1=client.CoreV1Api()\n\nscheduler_name = \"foobar\"\n\ndef nodes_available():\n    ready_nodes = []\n    for n in v1.list_node().items:\n            for status in n.status.conditions:\n                if status.status == \"True\" and status.type == \"Ready\":\n                    ready_nodes.append(n.metadata.name)\n    return ready_nodes\n\ndef scheduler(name, node, namespace=\"default\"):\n    body=client.V1Binding()\n        \n    target=client.V1ObjectReference()\n    target.kind=\"Node\"\n    target.apiVersion=\"v1\"\n    target.name= node\n    \n    meta=client.V1ObjectMeta()\n    meta.name=name\n    \n    body.target=target\n    body.metadata=meta\n    \n    return v1.create_namespaced_binding(namespace, body)\n\ndef main():\n    w = watch.Watch()\n    for event in w.stream(v1.list_namespaced_pod, \"default\"):\n        if event['object'].status.phase == \"Pending\" and event['object'].spec.scheduler_name == scheduler_name:\n            try:\n                res = scheduler(event['object'].metadata.name, random.choice(nodes_available()))\n            except client.rest.ApiException as e:\n                print json.loads(e.body)['message']\n                    \nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "manifests/security/openssl-generate-certs.sh",
    "content": "#!/usr/bin/env bash\n\nminicube_dir=.minikube\nclient_cert_dir=k8s_client_crts\nclient_username=employee\n\n\n\nif openssl_bin=$(which openssl) ; then\n    # Test minicube certs\n    for i in crt key ; do\n        if ! [ -f $HOME/$minicube_dir/ca.$i ] ; then\n            echo \"Unable to find ca.$i\"\n            exit 1\n        else\n            echo \"OK: Found ca.$i\"\n        fi \n    done\n\n    # Create cert directory\n    if mkdir -p $HOME/$client_cert_dir ; then\n\n        set -e\n        # Generate certs\n        $openssl_bin genrsa -out $HOME/$client_cert_dir/$client_username.key 2048 \n        $openssl_bin req -new -key $HOME/$client_cert_dir/$client_username.key -out $HOME/$client_cert_dir/$client_username.csr -subj \"/CN=$client_username/O=bitnami\"  \n        $openssl_bin x509 -req -in $HOME/$client_cert_dir/$client_username.csr -CA $HOME/$minicube_dir/ca.crt -CAkey $HOME/$minicube_dir/ca.key -CAcreateserial -out $HOME/$client_cert_dir/$client_username.crt -days 500 \n\n        echo -e \"\\nCreated in $HOME/$client_cert_dir\"  \n        ls -1 $HOME/$client_cert_dir/*\n        exit 0\n    else\n        echo \"Unable to create $HOME/$client_cert_dir\"\n        exit 1\n    fi   \nelse\n    echo \"Unable to find openssl binary in PATH\"\n    exit 1\nfi\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n#!/usr/bin/env bash\n\nif openssl_bin=$(which openssl) ; then\n    if [ -z $SUDO_USER ] ; then\n        username=$USER\n    else\n        username=$SUDO\n    \nelse\n    echo \"Sorry, unable to find openssl binary in path\"\n\n    mkdir 
-p $HOME/k8s_client_certificates\n    \nelse\n    echo \"Sorry, unable to find openssl binary in path\"\nfi\n\n\n\n\n\n openssl genrsa -out employee.key 2048\n openssl req -new -key employee.key -out employee.csr -subj\n openssl req -new -key employee.key -out employee.csr -sub \"/CN=employee/O=bitnami\"\n openssl req -new -key employee.key -out employee.csr -subj \"/CN=employee/O=bitnami\"\n openssl x509 -req -in employee.csr -CA CA_LOCATION/ca.crt -CAkey CA_LOCATION/ca.key -CAcreateserial -out employee.crt -days 500\n openssl x509 -req -in employee.csr -CA /home/wire/.minikube/ca.crt -CAkey /home/wire/.minikube/ca.key -CAcreateserial -out employee.crt -days 500\n  601  openssl rsa -check -in rv-osiris.key \n  602  openssl rsa -check -in rv-osiris.crt\n 1967  openssl genrsa -out employee.key 2048\n 1968  openssl req -new -key employee.key -out employee.csr -subj\n 1969  openssl req -new -key employee.key -out employee.csr -sub \"/CN=employee/O=bitnami\"\n 1970  openssl req -new -key employee.key -out employee.csr -subj \"/CN=employee/O=bitnami\"\n 1971  openssl x509 -req -in employee.csr -CA CA_LOCATION/ca.crt -CAkey CA_LOCATION/ca.key -CAcreateserial -out employee.crt -days 500\n 1972  openssl x509 -req -in employee.csr -CA /home/wire/.minikube/ca.crt -CAkey /home/wire/.minikube/ca.key -CAcreateserial -out employee.crt -days 500\n 1998  history | grep openssl\n 1999  history | grep openssl > napsat-script\n 2013  mv napsat-script openssl-generate-certs.sh\n 2014  vim openssl-generate-certs.sh \n 2015  test openssl\n 2017  test openssl1\n 2019  vim openssl-generate-certs.sh \n 2021  vim openssl-generate-certs.sh \n 2023  history | grep openssl \n 2024  history | grep openssl  >> openssl-generate-certs.sh \n"
  },
  {
    "path": "manifests/security/pawn.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: pawn\nspec:\n  containers:\n  - image: busybox\n    command:\n    - sleep\n    - \"3600\"\n    name: pawn\n    securityContext:\n      privileged: true\n  hostNetwork: true\n  hostPID: true\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/wordpress/march13/mysql-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    run: mysql\n  name: mysql\n  namespace: oreilly\nspec:\n  ports:\n  - port: 3306\n    protocol: TCP\n    targetPort: 3306\n  selector:\n    run: mysql\n  type: ClusterIP\n"
  },
  {
    "path": "manifests/wordpress/march13/mysql.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    run: mysql\n  name: mysql\n  namespace: oreilly\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      run: mysql\n  strategy:\n    rollingUpdate:\n      maxSurge: 1\n      maxUnavailable: 1\n    type: RollingUpdate\n  template:\n    metadata:\n      labels:\n        run: mysql\n    spec:\n      containers:\n      - name: mysql\n        image: mysql:5.5\n        imagePullPolicy: IfNotPresent\n        env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: root\n      restartPolicy: Always\n"
  },
  {
    "path": "manifests/wordpress/march13/quota.yaml",
    "content": "apiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: wordpress\n  namespace: oreilly\nspec:\n  hard:\n    pods: \"2\"\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress/.helmignore",
    "content": "# Patterns to ignore when building packages.\n# This supports shell glob matching, relative path matching, and\n# negation (prefixed with !). Only one pattern per line.\n.DS_Store\n# Common VCS dirs\n.git/\n.gitignore\n.bzr/\n.bzrignore\n.hg/\n.hgignore\n.svn/\n# Common backup files\n*.swp\n*.bak\n*.tmp\n*~\n# Various IDEs\n.project\n.idea/\n*.tmproj\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress/Chart.yaml",
    "content": "apiVersion: v1\nappVersion: \"1.0\"\ndescription: A wordpress chart for fun\nname: wordpress\nversion: 0.9.0\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress/templates/mysql-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    run: mysql\n  name: mysql\n  namespace: oreilly\nspec:\n  ports:\n  - port: 3306\n    protocol: TCP\n    targetPort: 3306\n  selector:\n    run: mysql\n  type: ClusterIP\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress/templates/mysql.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    run: mysql\n  name: mysql\n  namespace: oreilly\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      run: mysql\n  strategy:\n    rollingUpdate:\n      maxSurge: 1\n      maxUnavailable: 1\n    type: RollingUpdate\n  template:\n    metadata:\n      labels:\n        run: mysql\n    spec:\n      containers:\n      - name: mysql\n        image: mysql:5.5\n        imagePullPolicy: IfNotPresent\n        env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: root\n      restartPolicy: Always\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress/templates/quota.yaml",
    "content": "apiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: wordpress\n  namespace: oreilly\nspec:\n  hard:\n    pods: \"2\"\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress/templates/wordpress-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    run: wordpress\n  name: wordpress\n  namespace: oreilly\nspec:\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n  selector:\n    run: wordpress\n  type: NodePort\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress/templates/wordpress.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    run: wordpress\n  name: wordpress\n  namespace: oreilly\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      run: wordpress\n  strategy:\n    rollingUpdate:\n      maxSurge: 1\n      maxUnavailable: 1\n    type: RollingUpdate\n  template:\n    metadata:\n      labels:\n        run: wordpress\n    spec:\n      containers:\n      - name: wordpress\n        image: wordpress\n        imagePullPolicy: Always\n        env:\n        - name: WORDPRESS_DB_HOST\n          value: mysql\n        - name: WORDPRESS_DB_PASSWORD\n          value: root\n      restartPolicy: Always\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress/values.yaml",
    "content": "# this is a wordpress chart\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress-ns.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: oreilly\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    run: wordpress\n  name: wordpress\n  namespace: oreilly\nspec:\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n  selector:\n    run: wordpress\n  type: NodePort\n"
  },
  {
    "path": "manifests/wordpress/march13/wordpress.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    run: wordpress\n  name: wordpress\n  namespace: oreilly\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      run: wordpress\n  strategy:\n    rollingUpdate:\n      maxSurge: 1\n      maxUnavailable: 1\n    type: RollingUpdate\n  template:\n    metadata:\n      labels:\n        run: wordpress\n    spec:\n      containers:\n      - name: wordpress\n        image: wordpress\n        imagePullPolicy: Always\n        env:\n        - name: WORDPRESS_DB_HOST\n          value: mysql\n        - name: WORDPRESS_DB_PASSWORD\n          value: root\n      restartPolicy: Always\n"
  },
  {
    "path": "manifests/wordpress/mysql-secret.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: mysql-secret\nspec:\n  containers:\n  - image: mysql:5.5\n    env:\n    - name: MYSQL_ROOT_PASSWORD\n      valueFrom:\n        secretKeyRef:\n          name: mysql\n          key: password\n    imagePullPolicy: IfNotPresent\n    name: mysql\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/wordpress/mysql.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: mysql\nspec:\n  containers:\n  - image: mysql:5.5\n    env:\n    - name: MYSQL_ROOT_PASSWORD\n      value: root\n    imagePullPolicy: IfNotPresent\n    name: mysql\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/wordpress/secret.json",
    "content": "{\n    \"kind\": \"Secret\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"name\": \"mysql\",\n        \"creationTimestamp\": null\n    },\n    \"data\": {\n        \"password\": \"cm9vdA==\"\n    }\n}\n"
  },
  {
    "path": "manifests/wordpress/wordpress/mysql-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: mysql\n  name: mysql\n  namespace: wordpress\nspec:\n  ports:\n    - port: 3306\n  type: ClusterIP\n  selector:\n    app: mysql\n"
  },
  {
    "path": "manifests/wordpress/wordpress/mysql.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\n  namespace: wordpress\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  template:\n    metadata:\n      labels:\n        app: mysql\n    spec:\n      containers:\n      - name: mysql\n        image: mysql:5.5\n        ports:\n        - containerPort: 3306\n        env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: root\n"
  },
  {
    "path": "manifests/wordpress/wordpress/wp-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: wordpress\n  name: wordpress\n  namespace: wordpress\nspec:\n  ports:\n    - port: 80\n  type: ClusterIP\n  selector:\n    app: wordpress\n"
  },
  {
    "path": "manifests/wordpress/wordpress/wp.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: wordpress\n  namespace: wordpress\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: wordpress\n  template:\n    metadata:\n      labels:\n        app: wordpress\n    spec:\n      containers:\n      - name: wordpress\n        image: wordpress\n        ports:\n        - containerPort: 80\n        env:\n        - name: WORDPRESS_DB_PASSWORD\n          value: root\n"
  },
  {
    "path": "manifests/wordpress/wordpress-secret.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: wp\n  labels:\n    app: wp\nspec:\n  containers:\n  - image: wordpress\n    env:\n    - name: WORDPRESS_DB_PASSWORD\n      valueFrom:\n        secretKeyRef:\n          name: mysql\n          key: password\n    - name: WORDPRESS_DB_HOST\n      value: mysql\n    imagePullPolicy: IfNotPresent\n    name: wordpress\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/wordpress/wordpress.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: wp\n  labels:\n    app: wp\nspec:\n  containers:\n  - image: wordpress\n    env:\n    - name: WORDPRESS_DB_PASSWORD\n      value: root\n    - name: WORDPRESS_DB_HOST\n      value: 127.0.0.1\n    imagePullPolicy: IfNotPresent\n    name: wordpress\n  - image: mysql:5.5\n    env:\n    - name: MYSQL_ROOT_PASSWORD\n      value: root\n    imagePullPolicy: IfNotPresent\n    name: mysql\n  restartPolicy: Always\n"
  },
  {
    "path": "manifests/wordpress/wp-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: wp\n  name: wp\nspec:\n  ports:\n    - port: 80\n  type: NodePort\n  selector:\n    app: wp\n"
  },
  {
    "path": "manifests/wordpress/wp.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: wordpress\n---\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: counts\n  namespace: wordpress\nspec:\n  hard:\n    pods: \"4\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: mysql\n  name: mysql\n  namespace: wordpress\nspec:\n  ports:\n    - port: 3306\n  type: ClusterIP\n  selector:\n    app: mysql\n---\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: wordpress\n  name: wordpress\n  namespace: wordpress\nspec:\n  ports:\n    - port: 80\n  type: ClusterIP\n  selector:\n    app: wordpress\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: wordpress\n  namespace: wordpress\nspec:\n  rules:\n    - host: wordpress.192.168.99.100.nip.io\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: wordpress\n                port:\n                  number: 80\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\n  namespace: wordpress\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  template:\n    metadata:\n      name: mysql\n      labels:\n        app: mysql\n    spec:\n      containers:\n      - name: mysql\n        image: mysql:5.5\n        ports:\n        - containerPort: 3306\n        env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: root\n        - name: MYSQL_DATABASE\n          value: wordpress\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: wordpress\n  namespace: wordpress\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: wordpress\n  template:\n    metadata:\n      name: wordpress\n      labels:\n        app: wordpress\n    spec:\n      containers:\n      - name: wordpress\n        image: wordpress\n        ports:\n        - containerPort: 80\n        env:\n        - name: WORDPRESS_DB_USER\n          value: root\n        - name: WORDPRESS_DB_PASSWORD\n      
    value: root\n        - name: WORDPRESS_DB_HOST\n          value: mysql\n        - name: WORDPRESS_DB_NAME\n          value: wordpress\n"
  },
  {
    "path": "monitoring/grafana-statefulset.yaml",
    "content": "apiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n  name: grafana\n  namespace: monitoring\n  labels:\n    name: grafana\nspec:\n  replicas: 1\n  serviceName: grafana\n  selector:\n    matchLabels:\n      name: grafana\n  template:\n    metadata:\n      labels:\n        name: grafana\n    spec:\n      containers:\n      - image: grafana/grafana:4.5.2\n        name: grafana\n        imagePullPolicy: IfNotPresent\n        resources:\n          # keep request = limit to keep this container in guaranteed class\n          limits:\n            cpu: 200m\n            memory: 100Mi\n          requests:\n            cpu: 100m\n            memory: 100Mi\n        env:\n          - name: GF_AUTH_BASIC_ENABLED\n            value: \"true\"\n          - name: GF_AUTH_ANONYMOUS_ENABLED\n            value: \"true\"\n          - name: GF_AUTH_ANONYMOUS_ORG_ROLE\n            value: Viewer\n          - name: GF_LOG_LEVEL\n            value: warn\n          - name: GF_LOG_MODE\n            value: console\n          - name: GF_METRICS_ENABLED\n            value: \"true\"\n          - name: GF_SERVER_ROOT_URL\n            value: \"%(protocol)s://%(domain)s:%(http_port)s/api/v1/proxy/namespaces/monitoring/services/grafana:3000/\"\n        readinessProbe:\n          httpGet:\n            path: /api/org\n            port: 3000\n          # initialDelaySeconds: 30\n          # timeoutSeconds: 1\n        volumeMounts:\n        - name: grafana-data\n          mountPath: /var/lib/grafana\n#      volumes:\n#      - name: grafana-data\n#        hostPath:\n#          path: /srv/var/lib/grafana\n  volumeClaimTemplates:\n  - apiVersion: v1\n    kind: PersistentVolumeClaim\n    metadata:\n      name: grafana-data\n      namespace: monitoring\n    spec:\n      accessModes:\n      - ReadWriteOnce\n      resources:\n        requests:\n          storage: 1Gi\n"
  },
  {
    "path": "monitoring/grafana-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: grafana\n  namespace: monitoring\n  annotations:\n    prometheus.io/scrape: 'true'\n  labels:\n    name: grafana\nspec:\n  type: NodePort\n  ports:\n    - port: 3000\n      protocol: TCP\n      name: webui\n      nodePort: 30100\n  selector:\n    name: grafana\n"
  },
  {
    "path": "monitoring/monitoring-namespace.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: monitoring\n"
  },
  {
    "path": "monitoring/node-exporter-daemonset.yaml",
    "content": "---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  labels:\n    name: node-exporter\n  name: node-exporter\n  namespace: monitoring\nspec:\n  selector:\n    matchLabels:\n      name: node-exporter\n  template:\n    metadata:\n      labels:\n        name: node-exporter\n    spec:\n      containers:\n      - args:\n        - --collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($|/)\n        - --collector.procfs=/host/proc\n        - --collector.sysfs=/host/sys\n        image: prom/node-exporter:v0.14.0\n        livenessProbe:\n          httpGet:\n            path: /\n            port: scrape\n        name: node-exporter\n        ports:\n        - containerPort: 9100\n          name: scrape\n        readinessProbe:\n          httpGet:\n            path: /\n            port: scrape\n          successThreshold: 2\n        volumeMounts:\n        - mountPath: /host/proc\n          name: procfs\n          readOnly: true\n        - mountPath: /rootfs\n          name: root\n          readOnly: true\n        - mountPath: /host/sys\n          name: sysfs\n          readOnly: true\n      hostNetwork: true\n      hostPID: true\n      tolerations:\n      - effect: NoSchedule\n        key: node-role.kubernetes.io/master\n      volumes:\n      - hostPath:\n          path: /proc\n        name: procfs\n      - hostPath:\n          path: /\n        name: root\n      - hostPath:\n          path: /sys\n        name: sysfs\n"
  },
  {
    "path": "monitoring/node-exporter-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    name: node-exporter\n  annotations:\n    prometheus.io/scrape: 'true'\n  name: node-exporter\n  namespace: monitoring\nspec:\n  type: ClusterIP\n  clusterIP: None\n  ports:\n  - name: http-metrics\n    port: 9100\n    protocol: TCP\n  selector:\n    name: node-exporter\n"
  },
  {
    "path": "monitoring/prometheus-config.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: prometheus-config\n  namespace: monitoring\ndata:\n  prometheus.yml: |\n    global:\n      scrape_interval: 10s\n      scrape_timeout: 10s\n      evaluation_interval: 10s\n    rule_files:\n    - \"/etc/prometheus-config/*.rules\"\n    #\n    # Adapted from\n    #   https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml\n    #\n    # A scrape configuration for running Prometheus on a Kubernetes cluster.\n    # This uses separate scrape configs for cluster components (i.e. API server, node)\n    # and services to allow each to use different authentication configs.\n    #\n    # Kubernetes labels will be added as Prometheus labels on metrics via the\n    # `labelmap` relabeling action.\n    #\n    # If you are using Kubernetes 1.7.2 or earlier, please take note of the comments\n    # for the kubernetes-cadvisor job; you will need to edit or remove this job.\n    \n    # Scrape config for API servers.\n    #\n    # Kubernetes exposes API servers as endpoints to the default/kubernetes\n    # service so this uses `endpoints` role and uses relabelling to only keep\n    # the endpoints associated with the default/kubernetes service using the\n    # default named port `https`. This works for single API server deployments as\n    # well as HA API server deployments.\n    scrape_configs:\n    - job_name: 'kubernetes-apiservers'\n    \n      kubernetes_sd_configs:\n      - role: endpoints\n    \n      # Default to scraping over https. If required, just disable this or change to\n      # `http`.\n      scheme: https\n    \n      # This TLS & bearer token file config is used to connect to the actual scrape\n      # endpoints for cluster components. This is separate to discovery auth\n      # configuration because discovery & scraping are two separate concerns in\n      # Prometheus. 
The discovery auth config is automatic if Prometheus runs inside\n      # the cluster. Otherwise, more config options have to be provided within the\n      # <kubernetes_sd_config>.\n      tls_config:\n        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\n        # If your node certificates are self-signed or use a different CA to the\n        # master CA, then disable certificate verification below. Note that\n        # certificate verification is an integral part of a secure infrastructure\n        # so this should only be disabled in a controlled environment. You can\n        # disable certificate verification by uncommenting the line below.\n        #\n        insecure_skip_verify: true\n      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token\n    \n      # Keep only the default/kubernetes service endpoints for the https port. This\n      # will add targets for each API server which Kubernetes adds an endpoint to\n      # the default/kubernetes service.\n      relabel_configs:\n      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]\n        action: keep\n        regex: default;kubernetes;https\n    \n    # Scrape config for nodes (kubelet).\n    #\n    - job_name: \"kubernetes-nodes\"\n      kubernetes_sd_configs:\n      - role: node\n      relabel_configs:\n      - source_labels:\n        - __address__\n        regex: '(.*):10250'\n        replacement: '${1}:10255'\n        target_label: __address__\n    \n    # Scrape config for Kubelet cAdvisor.\n    #\n    # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics\n    # (those whose names begin with 'container_') have been removed from the\n    # Kubelet metrics endpoint.  
This job scrapes the cAdvisor endpoint to\n    # retrieve those metrics.\n    - job_name: \"kubernetes-cadvisor\"\n      kubernetes_sd_configs:\n      - role: node\n      relabel_configs:\n      - action: labelmap\n        regex: __meta_kubernetes_node_label_(.+)\n      - source_labels:\n        - __meta_kubernetes_node_name\n        regex: (.+)\n        replacement: /metrics/cadvisor\n        target_label: __metrics_path__\n      - source_labels:\n        - __address__\n        regex: '(.*):10250'\n        replacement: '${1}:10255'\n        target_label: __address__\n    \n    # Scrape config for service endpoints.\n    #\n    # The relabeling allows the actual service scrape endpoint to be configured\n    # via the following annotations:\n    #\n    # * `prometheus.io/scrape`: Only scrape services that have a value of `true`\n    # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need\n    # to set this to `https` & most likely set the `tls_config` of the scrape config.\n    # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.\n    # * `prometheus.io/port`: If the metrics are exposed on a different port to the\n    # service then set this appropriately.\n    - job_name: 'kubernetes-service-endpoints'\n    \n      kubernetes_sd_configs:\n      - role: endpoints\n    \n      relabel_configs:\n      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]\n        action: keep\n        regex: true\n      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]\n        action: replace\n        target_label: __scheme__\n        regex: (https?)\n      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]\n        action: replace\n        target_label: __metrics_path__\n        regex: (.+)\n      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]\n        action: replace\n        target_label: __address__\n        regex: 
([^:]+)(?::\\d+)?;(\\d+)\n        replacement: $1:$2\n      - action: labelmap\n        regex: __meta_kubernetes_service_label_(.+)\n      - source_labels: [__meta_kubernetes_namespace]\n        action: replace\n        target_label: kubernetes_namespace\n      - source_labels: [__meta_kubernetes_service_name]\n        action: replace\n        target_label: kubernetes_name\n    \n    # Example scrape config for probing services via the Blackbox Exporter.\n    #\n    # The relabeling allows the actual service scrape endpoint to be configured\n    # via the following annotations:\n    #\n    # * `prometheus.io/probe`: Only probe services that have a value of `true`\n    - job_name: 'kubernetes-services'\n    \n      metrics_path: /probe\n      params:\n        module: [http_2xx]\n    \n      kubernetes_sd_configs:\n      - role: service\n    \n      relabel_configs:\n      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]\n        action: keep\n        regex: true\n      - source_labels: [__address__]\n        target_label: __param_target\n      - target_label: __address__\n        replacement: blackbox-exporter.example.com:9115\n      - source_labels: [__param_target]\n        target_label: instance\n      - action: labelmap\n        regex: __meta_kubernetes_service_label_(.+)\n      - source_labels: [__meta_kubernetes_namespace]\n        target_label: kubernetes_namespace\n      - source_labels: [__meta_kubernetes_service_name]\n        target_label: kubernetes_name\n    \n    # Example scrape config for probing ingresses via the Blackbox Exporter.\n    #\n    # The relabeling allows the actual ingress scrape endpoint to be configured\n    # via the following annotations:\n    #\n    # * `prometheus.io/probe`: Only probe services that have a value of `true`\n    # - job_name: 'kubernetes-ingresses'\n    # \n    #   metrics_path: /probe\n    #   params:\n    #     module: [http_2xx]\n    # \n    #   kubernetes_sd_configs:\n    #     - role: 
ingress\n    # \n    #   relabel_configs:\n    #     - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]\n    #       action: keep\n    #       regex: true\n    #     - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]\n    #       regex: (.+);(.+);(.+)\n    #       replacement: ${1}://${2}${3}\n    #       target_label: __param_target\n    #     - target_label: __address__\n    #       replacement: blackbox-exporter.example.com:9115\n    #     - source_labels: [__param_target]\n    #       target_label: instance\n    #     - action: labelmap\n    #       regex: __meta_kubernetes_ingress_label_(.+)\n    #     - source_labels: [__meta_kubernetes_namespace]\n    #       target_label: kubernetes_namespace\n    #     - source_labels: [__meta_kubernetes_ingress_name]\n    #       target_label: kubernetes_name\n    \n    # Example scrape config for pods\n    #\n    # The relabeling allows the actual pod scrape endpoint to be configured via the\n    # following annotations:\n    #\n    # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`\n    # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.\n    # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the\n    # pod's declared ports (default is a port-free target if none are declared).\n    - job_name: 'kubernetes-pods'\n    \n      kubernetes_sd_configs:\n      - role: pod\n    \n      relabel_configs:\n      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]\n        action: keep\n        regex: true\n      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]\n        action: replace\n        target_label: __metrics_path__\n        regex: (.+)\n      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]\n        action: replace\n        regex: ([^:]+)(?::\\d+)?;(\\d+)\n        replacement: $1:$2\n        target_label: 
__address__\n      - action: labelmap\n        regex: __meta_kubernetes_pod_label_(.+)\n      - source_labels: [__meta_kubernetes_namespace]\n        action: replace\n        target_label: kubernetes_namespace\n      - source_labels: [__meta_kubernetes_pod_name]\n        action: replace\n        target_label: kubernetes_pod_name\n"
  },
  {
    "path": "monitoring/prometheus-rbac.yaml",
    "content": "---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: prometheus\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: prometheus\nsubjects:\n- kind: ServiceAccount\n  name: prometheus-k8s\n  namespace: monitoring\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: prometheus\nrules:\n- apiGroups: [\"\"]\n  resources:\n  - nodes\n  - services\n  - endpoints\n  - pods\n  verbs: [\"get\", \"list\", \"watch\"]\n- apiGroups: [\"\"]\n  resources:\n  - configmaps\n  verbs: [\"get\"]\n- nonResourceURLs: [\"/metrics\"]\n  verbs: [\"get\"]\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: prometheus-k8s\n  namespace: monitoring\n"
  },
  {
    "path": "monitoring/prometheus-statefulset.yaml",
    "content": "apiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n  labels:\n    name: prometheus\n  name: prometheus\n  namespace: monitoring\nspec:\n  replicas: 1\n  serviceName: prometheus\n  selector:\n    matchLabels:\n      name: prometheus\n  template:\n    metadata:\n      labels:\n        name: prometheus\n      annotations:\n        prometheus.io/scrape: \"true\"\n        prometheus.io/port: \"9090\"\n    spec:\n      serviceAccountName: prometheus-k8s\n      containers:\n      - args:\n        - --config.file=/etc/prometheus-config/prometheus.yml\n        - --storage.local.path=/prometheus\n        - --storage.local.retention=720h\n        - --web.console.libraries=/etc/prometheus/console_libraries\n        - --web.console.templates=/etc/prometheus/consoles\n        - --web.external-url=http://127.0.0.1:8001/api/v1/proxy/namespaces/monitoring/services/prometheus:9090/\n        - --web.route-prefix=/\n        image: prom/prometheus:v1.7.2\n        imagePullPolicy: IfNotPresent\n        name: prometheus\n        ports:\n        - containerPort: 9090\n          protocol: TCP\n        resources:\n          limits:\n            cpu: 500m\n            memory: 2500Mi\n          requests:\n            cpu: 500m\n            memory: 1500Mi\n        volumeMounts:\n        - mountPath: /etc/prometheus-config\n          name: config\n          readOnly: true\n        - mountPath: /prometheus\n          name: prometheus-data\n      - args:\n        - --volume-dir=/etc/config\n        - --webhook-url=http://localhost:9090/-/reload\n        image: jimmidyson/configmap-reload:v0.1\n        imagePullPolicy: IfNotPresent\n        name: configmap-reload\n        volumeMounts:\n        - mountPath: /etc/config\n          name: config\n          readOnly: true\n      restartPolicy: Always\n      securityContext: {}\n      terminationGracePeriodSeconds: 30\n      volumes:\n      - configMap:\n          defaultMode: 420\n          name: prometheus-config\n        name: config\n  volumeClaimTemplates:\n  - apiVersion: v1\n    kind: PersistentVolumeClaim\n    metadata:\n      name: prometheus-data\n      namespace: monitoring\n    spec:\n      accessModes:\n      - ReadWriteOnce\n      resources:\n        requests:\n          storage: 8Gi\n"
  },
  {
    "path": "monitoring/prometheus-svc.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: prometheus\n  namespace: monitoring\n  labels:\n    name: prometheus\n  annotations:\n    prometheus.io/scrape: 'true'\nspec:\n  type: NodePort\n  ports:\n    - port: 9090\n      protocol: TCP\n      name: webui\n      nodePort: 30090\n  selector:\n    name: prometheus\n"
  },
  {
    "path": "scripts/create_binding.py",
    "content": "#!/usr/bin/env python\n\nimport time\nimport random\nimport json\n\nfrom kubernetes import client, config, watch\n\nconfig.load_kube_config()\nv1=client.CoreV1Api()\n\nscheduler_name = \"foobar\"\n\ndef nodes_available():\n    ready_nodes = []\n    for n in v1.list_node().items:\n            for status in n.status.conditions:\n                if status.status == \"True\" and status.type == \"Ready\":\n                    ready_nodes.append(n.metadata.name)\n    return ready_nodes\n\ndef scheduler(name, node, namespace=\"default\"):\n    body=client.V1Binding()\n        \n    target=client.V1ObjectReference()\n    target.kind=\"Node\"\n    target.apiVersion=\"v1\"\n    target.name= node\n    \n    meta=client.V1ObjectMeta()\n    meta.name=name\n    \n    body.target=target\n    body.metadata=meta\n    \n    return v1.create_namespaced_binding_binding(name, namespace, body)\n\ndef main():\n    w = watch.Watch()\n    for event in w.stream(v1.list_namespaced_pod, \"default\"):\n        if event['object'].status.phase == \"Pending\" and event['object'].spec.scheduler_name == scheduler_name:\n            try:\n                res = scheduler(event['object'].metadata.name, random.choice(nodes_available()))\n            except client.rest.ApiException as e:\n                print json.loads(e.body)['message']\n                    \nif __name__ == '__main__':\n    main()"
  },
  {
    "path": "scripts/create_cronjob.py",
    "content": "import jinja2\nimport kubernetes\n\nsvc =\"\"\"\napiVersion: batch/v2alpha1\nkind: CronJob\nmetadata:\n  name: {{ name }}\nspec:\n  schedule: \"*/1 * * * *\"\n  jobTemplate:\n    spec:\n      template:\n        spec:\n          containers:\n          - name: kubecfg\n            image: kubecfg\n            imagePullPolicy: Never\n            workingDir: /tmp/cookie/opencompose-jsonnet\n            env:\n            - name: TOKEN\n              valueFrom:\n                secretKeyRef:\n                  name: {{ token_name }}\n                  key: token\n            command: [\"kubecfg\"]\n            args:\n            - --certificate-authority\n            - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\n            - --token\n            - $(TOKEN)\n            - --server\n            - https://kubernetes:443\n            - update\n            - {{ app }}\n            volumeMounts:\n            - name: cookie\n              mountPath: /tmp/cookie\n          restartPolicy: OnFailure\n          volumes:\n          - name: cookie\n            gitRepo:\n              repository: {{ repo }}\n\"\"\"\n\ndef render(name, app, repo):\n    template = jinja2.Template(svc)\n    app = template.render(name=name, app=app, repo=repo, token_name=get_sa_secret())\n    return app.decode()\n    \n    \ndef get_sa_secret():\n    kubernetes.config.load_kube_config()\n    v1 = kubernetes.client.CoreV1Api()\n    for sa in v1.list_namespaced_service_account(namespace=\"default\").items:\n        if sa.metadata.name == \"default\":\n            return sa.secrets[0].name\n"
  },
  {
    "path": "scripts/create_pod.py",
    "content": "#!/usr/bin/env python\n\nfrom kubernetes import client, config\n\nconfig.load_kube_config()\n\nv1=client.CoreV1Api()\n\npod = client.V1Pod()\npod.metadata = client.V1ObjectMeta(name=\"busybox\")\n\ncontainer = client.V1Container()\ncontainer.image = \"busybox\"\ncontainer.args = [\"sleep\", \"3600\"]\ncontainer.name = \"busybox\"\n\nspec = client.V1PodSpec()\nspec.containers = [container]\npod.spec = spec\n\nv1.create_namespaced_pod(namespace=\"default\",body=pod)\n"
  },
  {
    "path": "scripts/k3d.sh",
    "content": "#!/bin/sh\nset -e -x\n\napt-get --yes --quiet update\napt-get --yes --quiet install apt-transport-https ca-certificates curl software-properties-common\n\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -\n\nadd-apt-repository \\\n   \"deb [arch=amd64] https://download.docker.com/linux/ubuntu \\\n   $(lsb_release -cs) \\\n   stable\"\n\napt-get update\n\napt-get --yes --quiet install docker-ce\n\nusermod -aG docker ubuntu \n\ncurl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash\n\nk3d create\n\ncurl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -\ncat <<EOF >/etc/apt/sources.list.d/kubernetes.list\ndeb http://apt.kubernetes.io/ kubernetes-xenial main\nEOF\napt-get update\napt-get install --yes kubectl\n\nwget https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz\ntar xzvf helm-v2.14.1-linux-amd64.tar.gz\nmv ./linux-amd64/helm /usr/local/bin\n\nexport ISTIO_VERSION=1.1.3\ncurl -L https://git.io/getLatestIstio | sh -\ncd istio-1.1.3/\nfor i in install/kubernetes/helm/istio-init/files/crd*yaml; do KUBECONFIG=$(k3d get-kubeconfig) kubectl apply -f $i; done\n\ncat <<EOF | KUBECONFIG=$(k3d get-kubeconfig) kubectl apply -f -\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: istio-system\n  labels:\n    istio-injection: disabled\nEOF\n\nhelm template --namespace=istio-system \\\n--set prometheus.enabled=false \\\n--set mixer.enabled=false \\\n--set mixer.policy.enabled=false \\\n--set mixer.telemetry.enabled=false \\\n`# Pilot doesn't need a sidecar.` \\\n--set pilot.sidecar=false \\\n--set pilot.resources.requests.memory=128Mi \\\n`# Disable galley (and things requiring galley).` \\\n--set galley.enabled=false \\\n--set global.useMCP=false \\\n`# Disable security / policy.` \\\n--set security.enabled=false \\\n--set global.disablePolicyChecks=true \\\n`# Disable sidecar injection.` \\\n--set sidecarInjectorWebhook.enabled=false \\\n--set global.proxy.autoInject=disabled 
\\\n--set global.omitSidecarInjectorConfigMap=true \\\n`# Set gateway pods to 1 to sidestep eventual consistency / readiness problems.` \\\n--set gateways.istio-ingressgateway.autoscaleMin=1 \\\n--set gateways.istio-ingressgateway.autoscaleMax=1 \\\n`# Set pilot trace sampling to 100%` \\\n--set pilot.traceSampling=100 \\\ninstall/kubernetes/helm/istio \\\n> ./istio-lean.yaml\n\nKUBECONFIG=$(k3d get-kubeconfig) kubectl apply -f istio-lean.yaml\n\n"
  },
  {
    "path": "scripts/k8s.sh",
    "content": "#!/bin/sh\nset -e -x\n\napt-get --yes --quiet update\napt-get --yes --quiet install apt-transport-https ca-certificates curl software-properties-common\n\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -\n\nadd-apt-repository \\\n   \"deb [arch=amd64] https://download.docker.com/linux/ubuntu \\\n   $(lsb_release -cs) \\\n   stable\"\n\napt-get update\n\napt-get --yes --quiet install docker-ce\n\nusermod -aG docker ubuntu \n\ncurl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -\ncat <<EOF >/etc/apt/sources.list.d/kubernetes.list\ndeb http://apt.kubernetes.io/ kubernetes-xenial main\nEOF\napt-get update\napt-get install --yes kubelet kubeadm kubectl\napt-mark hold kubelet kubeadm kubectl\n\nkubeadm init\n\n# for the ubuntu user\nmkdir -p /home/ubuntu/.kube\ncp -i /etc/kubernetes/admin.conf /home/ubuntu/.kube/config\nchown 1000:1000 /home/ubuntu/.kube/config\n\n# for the root user\nmkdir -p $HOME/.kube\ncp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nchown $(id -u):$(id -g) $HOME/.kube/config\n\nkubectl taint nodes --all node-role.kubernetes.io/master-\n\nsysctl net.bridge.bridge-nf-call-iptables=1\nkubectl apply -f \"https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\\n')\"\n"
  },
  {
    "path": "scripts/kk8s.sh",
    "content": "#!/bin/sh\nset -e -x\n\napt-get --yes --quiet update\napt-get --yes --quiet install apt-transport-https ca-certificates curl software-properties-common\n\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -\n\nadd-apt-repository \\\n   \"deb [arch=amd64] https://download.docker.com/linux/ubuntu \\\n   $(lsb_release -cs) \\\n   stable\"\n\napt-get update\n\napt-get --yes --quiet install docker-ce\n\nusermod -aG docker ubuntu \n\ncurl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -\ncat <<EOF >/etc/apt/sources.list.d/kubernetes.list\ndeb http://apt.kubernetes.io/ kubernetes-xenial main\nEOF\napt-get update\napt-get install --yes kubelet kubeadm kubectl\napt-mark hold kubelet kubeadm kubectl\n\nkubeadm init\n\n# for the ubuntu user\nmkdir -p /home/ubuntu/.kube\ncp -i /etc/kubernetes/admin.conf /home/ubuntu/.kube/config\nchown 1000:1000 /home/ubuntu/.kube/config\n\n# for the root user\nmkdir -p $HOME/.kube\ncp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nchown $(id -u):$(id -g) $HOME/.kube/config\n\nkubectl taint nodes --all node-role.kubernetes.io/master-\n\nsysctl net.bridge.bridge-nf-call-iptables=1\nkubectl apply -f \"https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\\n')\"\n\nwget https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz\ntar xzvf helm-v2.14.1-linux-amd64.tar.gz\nmv ./linux-amd64/helm /usr/local/bin\n\nexport ISTIO_VERSION=1.1.3\ncurl -L https://git.io/getLatestIstio | sh -\ncd istio-1.1.3/\nfor i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done\n\ncat <<EOF | kubectl apply -f -\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: istio-system\n  labels:\n    istio-injection: disabled\nEOF\n\nhelm template --namespace=istio-system \\\n--set prometheus.enabled=false \\\n--set mixer.enabled=false \\\n--set mixer.policy.enabled=false \\\n--set mixer.telemetry.enabled=false \\\n`# Pilot doesn't need a sidecar.` \\\n--set 
pilot.sidecar=false \\\n--set pilot.resources.requests.memory=128Mi \\\n`# Disable galley (and things requiring galley).` \\\n--set galley.enabled=false \\\n--set global.useMCP=false \\\n`# Disable security / policy.` \\\n--set security.enabled=false \\\n--set global.disablePolicyChecks=true \\\n`# Disable sidecar injection.` \\\n--set sidecarInjectorWebhook.enabled=false \\\n--set global.proxy.autoInject=disabled \\\n--set global.omitSidecarInjectorConfigMap=true \\\n`# Set gateway pods to 1 to sidestep eventual consistency / readiness problems.` \\\n--set gateways.istio-ingressgateway.autoscaleMin=1 \\\n--set gateways.istio-ingressgateway.autoscaleMax=1 \\\n`# Set pilot trace sampling to 100%` \\\n--set pilot.traceSampling=100 \\\ninstall/kubernetes/helm/istio \\\n> ./istio-lean.yaml\n\nkubectl apply -f istio-lean.yaml\n"
  },
  {
    "path": "scripts/kopf/README.md",
    "content": "# Kopf minimal example\n\nCreate the CRD\n\n```\nkubectl apply -f crd.yaml\n```\n\nStart the operator:\n\n```bash\nkopf run example.py --verbose\n```\n\nCreate the custome object\n\n```bash\nkubectl apply -f obj.yaml\nkubectl get book\n```\n"
  },
  {
    "path": "scripts/kopf/crd.yaml",
    "content": "apiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition\nmetadata:\n  name: books.oreilly.com\nspec:\n  scope: Namespaced\n  group: oreilly.com\n  versions:\n    - name: v1alpha1\n      served: true\n      storage: true\n  names:\n    kind: Book\n    plural: books\n    singular: book\n    shortNames:\n      - bk\n  additionalPrinterColumns:\n    - name: Title\n      type: string\n      priority: 0\n      JSONPath: .spec.title\n      description: Title of the Book\n    - name: Abstract\n      type: string\n      priority: 0\n      JSONPath: .spec.abstract\n      description: Abstract of the Book\n    - name: Message\n      type: string\n      priority: 0\n      JSONPath: .status.message\n      description: As returned from the handler (sometimes).\n"
  },
  {
    "path": "scripts/kopf/example.py",
    "content": "import kopf\n\n\n@kopf.on.create('oreilly.com', 'v1alpha1', 'book')\ndef create_fn(spec, **kwargs):\n    print(f\"And here we are! Creating: {spec}\")\n    return {'message': 'hello world'}  # will be the new status\n\n#@kopf.on.update('oreilly.com', 'v1alpha1', 'book')\n#def update_fn(old, new, diff, **kwargs):\n#    print('UPDATED')\n#    print(f\"The following object got updated: {spec}\")\n#    return {'message': 'updated'}\n\n#@kopf.on.delete('oreilly.com', 'v1alpha1', 'book')\n#def delete_fn(metadata, **kwargs):\n"
  },
  {
    "path": "scripts/kopf/obj.yaml",
    "content": "apiVersion: oreilly.com/v1alpha1\nkind: Book\nmetadata:\n  name: Moby\n  labels:\n    safari: book\nspec:\n  title: mobydick  \n  abstract: a whale and a captain\n"
  },
  {
    "path": "scripts/kubeadminit.sh",
    "content": "kubeadm init\n\n# for the ubuntu user\nmkdir -p /home/ubuntu/.kube\ncp -i /etc/kubernetes/admin.conf /home/ubuntu/.kube/config\nchown 1000:1000 /home/ubuntu/.kube/config\n\n# for the root user\nmkdir -p $HOME/.kube\ncp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nchown $(id -u):$(id -g) $HOME/.kube/config\n\nkubectl taint nodes --all node-role.kubernetes.io/master-\n\nsysctl net.bridge.bridge-nf-call-iptables=1\nkubectl apply -f \"https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\\n')\"\n"
  },
  {
    "path": "template/jinja-test.py",
    "content": "#!/usr/bin/env python\n\nimport jinja2\nimport kubernetes\n\nsvc =\"\"\"\napiVersion: batch/v2alpha1\nkind: CronJob\nmetadata:\n  name: {{ name }}\nspec:\n  schedule: \"*/1 * * * *\"\n  jobTemplate:\n    spec:\n      template:\n        spec:\n          containers:\n          - name: kubecfg\n            image: kubecfg\n\"\"\"\n\ndef render(name):\n    template = jinja2.Template(svc)\n    app = template.render(name=name)\n    return app\n\nprint(render(\"foobar\"))\n"
  }
]