[
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "README.md",
    "content": "# 一、K8S攻略\n- [Kubernetes架构介绍](docs/Kubernetes架构介绍.md)\n- [Kubernetes集群环境准备](docs/Kubernetes集群环境准备.md)\n- [Docker安装](docs/docker-install.md)\n- [CA证书制作](docs/ca.md)\n- [ETCD集群部署](docs/etcd-install.md)\n- [Master节点部署](docs/master.md)\n- [Node节点部署](docs/node.md)\n- [Flannel部署](docs/flannel.md)\n- [应用创建](docs/app.md)\n- [问题汇总](docs/k8s-error-resolution.md)\n- [常用手册](docs/operational.md)\n- [Envoy 的架构与基本术语](docs/Envoy的架构与基本术语.md)\n- [K8S学习手册](docs/Kubernetes学习笔记.md)\n- [K8S重启pod](docs/k8s%E9%87%8D%E5%90%AFpod.md)\n- [K8S清理](docs/delete.md)\n- [外部访问K8s中Pod的几种方式](docs/外部访问K8s中Pod的几种方式.md)\n- [应用测试](docs/app2.md)\n- [PVC](docs/k8s_pv_local.md)\n- [dashboard操作](docs/dashboard_op.md)\n\n\n# 使用手册\n<table border=\"0\">\n    <tr>\n        <td><strong>手动部署</strong></td>\n        <td><a href=\"docs/Kubernetes集群环境准备.md\">1.Kubernetes集群环境准备</a></td>\n        <td><a href=\"docs/docker-install.md\">2.Docker安装</a></td>\n        <td><a href=\"docs/ca.md\">3.CA证书制作</a></td>\n        <td><a href=\"docs/etcd-install.md\">4.ETCD集群部署</a></td>\n        <td><a href=\"docs/master.md\">5.Master节点部署</a></td>\n        <td><a href=\"docs/node.md\">6.Node节点部署</a></td>\n        <td><a href=\"docs/flannel.md\">7.Flannel部署</a></td>\n        <td><a href=\"docs/app.md\">8.应用创建</a></td>\n    </tr>\n    <tr>\n        <td><strong>必备插件</strong></td>\n        <td><a href=\"docs/coredns.md\">1.CoreDNS部署</a></td>\n        <td><a href=\"docs/dashboard.md\">2.Dashboard部署</a></td>\n        <td><a href=\"docs/heapster.md\">3.Heapster部署</a></td>\n        <td><a href=\"docs/ingress.md\">4.Ingress部署</a></td>\n        <td><a href=\"https://github.com/unixhot/devops-x\">5.CI/CD</a></td>\n        <td><a href=\"docs/helm.md\">6.Helm部署</a></td>\n        <td><a href=\"docs/helm.md\">6.Helm部署</a></td>\n    </tr>\n</table>\n\n# 二、k8s资源清理\n```\n1、# svc清理\n$ kubectl delete svc $(kubectl get svc -n mos-namespace|grep -v NAME|awk '{print $1}') -n mos-namespace\nservice \"mysql-production\" deleted\nservice \"nginx-test\" deleted\nservice \"redis-cluster\" deleted\nservice \"redis-production\" deleted\n\n2、# deployment清理\n$ kubectl delete deployment $(kubectl get deployment -n mos-namespace|grep -v NAME|awk '{print $1}') -n mos-namespace\ndeployment.extensions \"centos7-app\" deleted\n\n3、# configmap清理\n$ kubectl delete cm $(kubectl get cm -n mos-namespace|grep -v NAME|awk '{print $1}') -n mos-namespace\n```\n\n\nhttps://www.xiaodianer.net/index.php/kubernetes/istio/41-istio-https-demo\n\nhttps://mp.weixin.qq.com/s/jnVn6_cyRUILBQ0cBhBNyQ  Kubernetes v1.18.2 二进制高可用部署\n"
  },
  {
    "path": "apps/README.md",
    "content": "\n"
  },
  {
    "path": "apps/nginx/README.md",
    "content": "\n"
  },
  {
    "path": "apps/ops/README.md",
    "content": "\n"
  },
  {
    "path": "apps/wordpress/README.md",
    "content": "\n"
  },
  {
    "path": "apps/wordpress/基于PV_PVC部署Wordpress 示例.md",
    "content": "# 一、PV（PersistentVolume）\n\nPersistentVolume (PV) 是外部存储系统中的一块存储空间，由管理员创建和维护。与 Volume 一样，PV 具有持久性，生命周期独立于 Pod。\n\n1、PV和PVC是一一对应关系，当有PV被某个PVC所占用时，会显示banding，其它PVC不能再使用绑定过的PV。\n\n2、PVC一旦绑定PV，就相当于是一个存储卷，此时PVC可以被多个Pod所使用。（PVC支不支持被多个Pod访问，取决于访问模型accessMode的定义）。\n\n3、PVC若没有找到合适的PV时，则会处于pending状态。\n\n4、PV的reclaim policy选项：\n\n    默认是Retain保留，保留生成的数据。\n    可以改为recycle回收，删除生成的数据，回收pv\n    delete，删除，pvc解除绑定后，pv也就自动删除。\n\n# 二、PVC\n\nPersistentVolumeClaim (PVC) 是对 PV 的申请 (Claim)。PVC 通常由普通用户创建和维护。需要为 Pod 分配存储资源时，用户可以创建一个 PVC，指明存储资源的容量大小和访问模式（比如只读）等信息，Kubernetes 会查找并提供满足条件的 PV。\n\n有了 PersistentVolumeClaim，用户只需要告诉 Kubernetes 需要什么样的存储资源，而不必关心真正的空间从哪里分配，如何访问等底层细节信息。这些 Storage Provider 的底层信息交给管理员来处理，只有管理员才应该关心创建 PersistentVolume 的细节信息。\n\n## PVC资源需要指定：\n\n1、accessMode：访问模型；对象列表：\n\n    ReadWriteOnce – the volume can be mounted as read-write by a single node：  RWO - ReadWriteOnce  一人读写\n    ReadOnlyMany – the volume can be mounted read-only by many nodes：          ROX - ReadOnlyMany   多人只读\n    ReadWriteMany – the volume can be mounted as read-write by many nodes：     RWX - ReadWriteMany  多人读写\n    \n2、resource：资源限制（比如：定义5GB空间，我们期望对应的存储空间至少5GB。）  \n\n3、selector：标签选择器。不加标签，就会在所有PV找最佳匹配。\n\n4、storageClassName：存储类名称：\n\n5、volumeMode：指后端存储卷的模式。可以用于做类型限制，哪种类型的PV可以被当前claim所使用。\n\n6、volumeName：卷名称，指定后端PVC（相当于绑定）\n\n   \n# 三、两者差异\n\n1、PV是属于集群级别的，不能定义在名称空间中\n\n2、PVC时属于名称空间级别的。\n\n参考文档：\n\nhttps://blog.csdn.net/weixin_42973226/article/details/86501693  基于rook-ceph部署wordpress\n\nhttps://www.cnblogs.com/benjamin77/p/9944268.html  k8s的持久化存储PV&&PVC\n"
  },
  {
    "path": "apps/wordpress/部署Wordpress 示例.md",
    "content": "# 一、简述\n\n&#8195;Wordpress应用主要涉及到两个镜像：wordpress 和 mysql，wordpress 是应用的核心程序，mysql 是用于数据存储的。现在我们来看看如何来部署我们的这个wordpress应用。这个服务主要有2个pod资源，优先使用Deployment来管理我们的Pod。\n\n# 二、创建一个MySQL的Deployment对象\n\n- 1、创建namespace空间,并使用Service暴露服务给集群内部使用\n\n```bash\n# 清理wordpress-db资源\nkubectl delete -f wordpress-db.yaml\n\n# 编写mysql的deployment文件\ncat > wordpress-db.yaml <<\\EOF\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: blog\n\n---\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  name: mysql-deploy\n  namespace: blog\n  labels:\n    app: mysql\nspec:\n  template:\n    metadata:\n      labels:\n        app: mysql\n    spec:\n      containers:\n      - name: mysql\n        image: mysql:5.7\n        imagePullPolicy: IfNotPresent\n        ports:\n        - containerPort: 3306\n          name: dbport\n        env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: rootPassW0rd\n        - name: MYSQL_DATABASE\n          value: wordpress\n        - name: MYSQL_USER\n          value: wordpress\n        - name: MYSQL_PASSWORD\n          value: wordpress\n        volumeMounts:\n        - name: db\n          mountPath: /var/lib/mysql\n      volumes:\n      - name: db\n        hostPath:\n          path: /var/lib/mysql\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: wordpress-mysql\n  namespace: blog\nspec:\n  selector:\n    app: mysql\n  ports:\n  - name: mysqlport\n    protocol: TCP\n    port: 3306\n    targetPort: dbport\nEOF\n\n# 创建资源和服务\nkubectl create -f wordpress-db.yaml\n```\n\n- 2、查看创建的svc服务\n\n```bash\n$ kubectl describe svc wordpress-mysql -n blog\nName:              wordpress-mysql\nNamespace:         blog\nLabels:            <none>\nAnnotations:       <none>\nSelector:          app=mysql\nType:              ClusterIP\nIP:                10.104.88.234\nPort:              mysqlport  3306/TCP\nTargetPort:        dbport/TCP\nEndpoints:         10.244.1.115:3306\nSession Affinity:  None\nEvents:            <none>\n```\n\n- 3、验证创建的mysql资源服务可用性\n\n```bash\n# 命令行跑一个centos7的bash基础容器\n$ kubectl run mysql-test --rm -it --image=alpine /bin/sh\nkubectl run centos7-app --rm -it --image=centos:7.2.1511 -n blog\n\n# 进入到容器\nkubectl exec `kubectl get pods -n blog|grep centos7-app|awk '{print $1}'` -it /bin/bash -n blog\n\n# 安装mysql客户端\nyum install vim net-tools telnet nc -y\nyum install -y mariadb.x86_64 mariadb-libs.x86_64\n\n# 测试mysql服务端口是否OK\nnc -zv wordpress-mysql 3306\n\n# 连接测试\nmysql -h'wordpress-mysql' -u'root' -p'rootPassW0rd'  # 这里使用域名测试\n\nmysql -h'10.104.88.234' -u'root' -p'rootPassW0rd'   # 这里使用集群IP测试，这个经常会变\n\nmysql -h'10.244.1.115' -u'root' -p'rootPassW0rd'   # 这里使用Endpoints IP测试,这个经常会变\n```\n\n# 三、创建Wordpress服务Deployment对象\n\n```bash\n# 清理wordpress资源\nkubectl delete -f wordpress.yaml\n\n# 编写wordpress的deployment文件\ncat > wordpress.yaml <<\\EOF\n---\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  name: wordpress-deploy\n  namespace: blog\n  labels:\n    app: wordpress\nspec:\n  template:\n    metadata:\n      labels:\n        app: wordpress\n    spec:\n      containers:\n      - name: wordpress\n        image: wordpress\n        imagePullPolicy: IfNotPresent\n        ports:\n        - containerPort: 80\n          name: wdport\n        env:\n        - name: WORDPRESS_DB_HOST\n          value: wordpress-mysql:3306\n        - name: WORDPRESS_DB_USER\n          value: wordpress\n        - name: WORDPRESS_DB_PASSWORD\n          value: wordpress\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: wordpress-service\n  namespace: blog\nspec:\n  type: 
NodePort\n  selector:\n    app: wordpress\n  ports:\n  - name: wordpressport\n    protocol: TCP\n    port: 80\n    targetPort: wdport\n    nodePort: 32380     #新增这一行，指定固定node端口\nEOF\n\n# 创建资源和服务\nkubectl create -f wordpress.yaml\n\n# 查看创建的pod资源\nkubectl get pods -n blog\n\n# 查看创建的svc资源\nkubectl get svc -n blog\n\nNAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE\nwordpress-mysql     ClusterIP   10.104.88.234    <none>        3306/TCP       3m36s\nwordpress-service   NodePort    10.111.212.108   <none>        80:32380/TCP   12s\n```\n\n# 四、访问测试\n\n```bash\n#可以看到wordpress服务产生了一个32380的端口，现在我们是不是就可以通过任意节点的NodeIP加上32255端口，就可以访问我们的wordpress应用了，在浏览器中打开，如果看到wordpress跳转到了安装页面，证明我们的嗯安装是没有任何问题的了，如果没有出现预期的效果，那么就需要去查看下Pod的日志来查看问题了：\n\nhttp://192.168.56.11:32380/\n```\n\n![wordpress](https://github.com/Lancger/opsfull/blob/master/images/wordpress-01.png)\n\n\n# 五、提高稳定性（进阶）\n\n`1、当你使用kuberentes的时候，有没有遇到过Pod在启动后一会就挂掉然后又重新启动这样的恶性循环？你有没有想过kubernetes是如何检测pod是否还存活？虽然容器已经启动，但是kubernetes如何知道容器的进程是否准备好对外提供服务了呢？让我们通过kuberentes官网的这篇文章[Configure Liveness and Readiness Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)，来一探究竟。`\n\n`2、Kubelet使用liveness probe（存活探针）来确定何时重启容器。例如，当应用程序处于运行状态但无法做进一步操作，liveness探针将捕获到deadlock，重启处于该状态下的容器，使应用程序在存在bug的情况下依然能够继续运行下去（谁的程序还没几个bug呢）。`\n\n`3、Kubelet使用readiness probe（就绪探针）来确定容器是否已经就绪可以接受流量。只有当Pod中的容器都处于就绪状态时kubelet才会认定该Pod处于就绪状态。该信号的作用是控制哪些Pod应该作为service的后端。如果Pod处于非就绪状态，那么它们将会被从service的load balancer中移除。`\n`\n\n现在wordpress应用已经部署成功了，那么就万事大吉了吗？如果我们的网站访问量突然变大了怎么办，如果我们要更新我们的镜像该怎么办？如果我们的mysql服务挂掉了怎么办？\n\n所以要保证我们的网站能够非常稳定的提供服务，我们做得还不够，我们可以通过做些什么事情来提高网站的稳定性呢？\n\n## 第一. 增加健康检测\n\n我们前面说过liveness probe和rediness probe是提高应用稳定性非常重要的方法:\n\n```bash\nlivenessProbe:\n  tcpSocket:\n    port: 80\n  initialDelaySeconds: 3\n  periodSeconds: 3\nreadinessProbe:\n  tcpSocket:\n    port: 80\n  initialDelaySeconds: 5\n  periodSeconds: 10\n\n#增加上面两个探针，每10s检测一次应用是否可读，每3s检测一次应用是否存活\n```\n\n## 第二. 增加 HPA\n\n让我们的应用能够自动应对流量高峰期：\n\n```bash\n1、创建HPA资源（一定要设置Pod的资源限制参数: request, 否则HPA不会工作）\n\n$ kubectl autoscale deployment wordpress-deploy --cpu-percent=10 --min=1 --max=10 -n blog\n\ndeployment \"wordpress-deploy\" autoscaled\n\n# 我们用kubectl autoscale命令为我们的wordpress-deploy创建一个HPA对象，最小的 pod 副本数为1，最大为10，HPA会根据设定的 cpu使用率（10%）动态的增加或者减少pod数量。当然最好我们也为Pod声明一些资源限制：\n\nresources:\n  limits:\n    cpu: 200m\n    memory: 200Mi\n  requests:\n    cpu: 100m\n    memory: 100Mi\n    \n# 查看HPA\n$ kubectl get HorizontalPodAutoscaler -A \nNAMESPACE   NAME               REFERENCE                     TARGETS         MINPODS   MAXPODS   REPLICAS   AGE\nblog        wordpress-deploy   Deployment/wordpress-deploy   <unknown>/10%   1         10        1          4m19s\n\n2、更新Deployment后，我们可以可以来测试下上面的HPA是否会生效：\n$ kubectl run -i --tty load-generator --image=busybox /bin/sh\n\nIf you don't see a command prompt, try pressing enter.\n\nwhile true; do wget -q -O- http://wordpress:80; done\n\n3、观察Deployment的副本数是否有变化\n$ kubectl get deployment wordpress-deploy\n\nNAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\nwordpress-deploy   3         3         3            3           4d\n\n4、删除HPA\n$ kubectl delete HorizontalPodAutoscaler  wordpress-deploy -n blog\n\nhorizontalpodautoscaler.autoscaling \"wordpress-deploy\" deleted\n```\n\n## 第三. 增加滚动更新策略\n\n这样可以保证我们在更新应用的时候服务不会被中断：\n\n```bash\nreplicas: 2\nrevisionHistoryLimit: 10\nminReadySeconds: 5\nstrategy:\n  type: RollingUpdate\n  rollingUpdate:\n    maxSurge: 1\n    maxUnavailable: 1\n```\n\n## 第四. 
使用Service的名称来代替host\n\n`如果mysql服务被重新创建了的话，它的clusterIP非常有可能就变化了，所以上面我们环境变量中的WORDPRESS_DB_HOST的值就会有问题，就会导致访问不了数据库服务了，这个地方我们可以直接使用Service的名称来代替host，这样即使clusterIP变化了，也不会有任何影响，这个我们会在后面的服务发现的章节和大家深入讲解的`\n\n```bash\nenv:\n- name: WORDPRESS_DB_HOST\n  value: wordpress-mysql:3306\n```\n\n## 第五. 容器启动顺序\n\n`在部署wordpress服务的时候，mysql服务以前启动起来了吗？如果没有启动起来是不是我们也没办法连接数据库了啊？该怎么办，是不是在启动wordpress应用之前应该去检查一下mysql服务，如果服务正常的话我们就开始部署应用了，这是不是就是InitContainer的用法`\n\n```bash\ninitContainers:\n- name: init-db\n  image: busybox\n  command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql service; sleep 2; done;']\n  \n# 直到mysql服务创建完成后，initContainer才结束，结束完成后我们才开始下面的部署。\n```\n\n# 六、优化文件合并\n\n```bash\nkubectl delete -f wordpress-all.yaml\n\ncat > wordpress-all.yaml <<\\EOF\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: blog\n\n---\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  name: mysql-deploy\n  namespace: blog\n  labels:\n    app: mysql\nspec:\n  template:\n    metadata:\n      labels:\n        app: mysql\n    spec:\n      containers:\n      - name: mysql\n        image: mysql:5.7\n        ports:\n        - containerPort: 3306\n          name: dbport\n        env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: rootPassW0rd\n        - name: MYSQL_DATABASE\n          value: wordpress\n        - name: MYSQL_USER\n          value: wordpress\n        - name: MYSQL_PASSWORD\n          value: wordpress\n        volumeMounts:\n        - name: db\n          mountPath: /var/lib/mysql\n      volumes:\n      - name: db\n        hostPath:\n          path: /var/lib/mysql\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: wordpress-mysql\n  namespace: blog\nspec:\n  selector:\n    app: mysql\n  ports:\n  - name: mysqlport\n    protocol: TCP\n    port: 3306\n    targetPort: dbport\n\n---\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  name: wordpress-deploy\n  namespace: blog\n  labels:\n    app: wordpress\nspec:\n  revisionHistoryLimit: 10\n  minReadySeconds: 5\n  strategy:\n    type: RollingUpdate\n    rollingUpdate:\n      maxSurge: 1\n      maxUnavailable: 1\n  template:\n    metadata:\n      labels:\n        app: wordpress\n    spec:\n      initContainers:\n      - name: init-db\n        image: busybox\n        command: ['sh', '-c', 'until nslookup wordpress-mysql; do echo waiting for mysql service; sleep 2; done;']\n      containers:\n      - name: wordpress\n        image: wordpress\n        imagePullPolicy: IfNotPresent\n        ports:\n        - containerPort: 80\n          name: wdport\n        env:\n        - name: WORDPRESS_DB_HOST\n          value: wordpress-mysql:3306\n        - name: WORDPRESS_DB_USER\n          value: wordpress\n        - name: WORDPRESS_DB_PASSWORD\n          value: wordpress\n        resources:\n          limits:\n            cpu: 200m\n            memory: 200Mi\n          requests:\n            cpu: 100m\n            memory: 100Mi\n\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: wordpress\n  namespace: blog\nspec:\n  selector:\n    app: wordpress\n  type: NodePort\n  ports:\n  - name: wordpressport\n    protocol: TCP\n    port: 80\n    nodePort: 32380\n    targetPort: wdport\nEOF\n\nkubectl apply -f wordpress-all.yaml\n\nwatch kubectl get pods -n blog\n\n# 检测mysql服务\n$ kubectl run mysql-test --rm -it --image=alpine /bin/sh -n blog\n\n$ nslookup wordpress-mysql\nName:      wordpress-mysql\nAddress 1: 10.99.230.27 wordpress-mysql.blog.svc.cluster.local\n\n$ ping wordpress-mysql\nPING wordpress-mysql (10.99.230.27): 56 data bytes\n64 
bytes from 10.99.230.27: seq=0 ttl=64 time=0.124 ms\n64 bytes from 10.99.230.27: seq=0 ttl=64 time=0.124 ms\n\n```\n\n参考文档：\n\nhttps://www.qikqiak.com/k8s-book/docs/31.%E9%83%A8%E7%BD%B2%20Wordpress%20%E7%A4%BA%E4%BE%8B.html   \n\nhttps://blog.csdn.net/maoreyou/article/details/80050623  Kubernetes之路 3 - 解决服务依赖\n"
  },
  {
    "path": "components/README.md",
    "content": "\n# ingress\n\n# helm\n\nhttps://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports  需要开放的端口\n"
  },
  {
    "path": "components/cronjob/README.md",
    "content": "参考资料：\n\nhttps://www.jianshu.com/p/62b4f0a3134b   Kubernetes对象之CronJob \n"
  },
  {
    "path": "components/dashboard/Kubernetes-Dashboard v2.0.0.md",
    "content": "```bash\n#安装\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml\n\n#卸载\nkubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml\n\n#账号授权\nkubectl delete -f admin.yaml\n\ncat > admin.yaml << \\EOF\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: admin\n  annotations:\n    rbac.authorization.kubernetes.io/autoupdate: \"true\"\nroleRef:\n  kind: ClusterRole\n  name: cluster-admin\n  apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n  name: admin\n  namespace: kube-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: admin\n  namespace: kube-system\n  labels:\n    kubernetes.io/cluster-service: \"true\"\n    addonmanager.kubernetes.io/mode: Reconcile\nEOF\n\nkubectl apply -f admin.yaml\n\nkubectl describe secret/$(kubectl get secret -n kube-system |grep admin|awk '{print $1}') -n kube-system\n```\n参考文档：\n\nhttp://www.mydlq.club/article/28/ \n"
  },
  {
    "path": "components/dashboard/README.md",
    "content": "# 一、安装dashboard v1.10.1\n\n## 1、使用NodePort方式暴露访问\n\n1、下载对应的yaml文件\n```\nwget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml\n\nvim kubernetes-dashboard.yaml\n\n1、# 修改镜像名称\n......\n    spec:\n      containers:\n      - name: kubernetes-dashboard\n        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 #这个换成阿里云的镜像\n        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1\n        ports:\n        - containerPort: 8443\n          protocol: TCP\n        args:\n          - --auto-generate-certificates\n......\n```\n\n2、# 修改Service为NodePort类型\n```\n......\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kube-system\nspec:\n  type: NodePort   # 新增这一行，指定为NodePort方式\n  ports:\n    - port: 443\n      targetPort: 8443\n      nodePort: 32370  #新增这一行，指定固定node端口\n  selector:\n    k8s-app: kubernetes-dashboard\n```\n\n3、dashboard最终文件\n\n```\ncat > kubernetes-dashboard.yaml << \\EOF\n# Copyright 2017 The Kubernetes Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# ------------------- Dashboard Secret ------------------- #\n\napiVersion: v1\nkind: Secret\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-certs\n  namespace: kube-system\ntype: Opaque\n\n---\n# ------------------- Dashboard Service Account ------------------- #\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kube-system\n\n---\n# ------------------- Dashboard Role & Role Binding ------------------- #\n\nkind: Role\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: kubernetes-dashboard-minimal\n  namespace: kube-system\nrules:\n  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.\n- apiGroups: [\"\"]\n  resources: [\"secrets\"]\n  verbs: [\"create\"]\n  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.\n- apiGroups: [\"\"]\n  resources: [\"configmaps\"]\n  verbs: [\"create\"]\n  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.\n- apiGroups: [\"\"]\n  resources: [\"secrets\"]\n  resourceNames: [\"kubernetes-dashboard-key-holder\", \"kubernetes-dashboard-certs\"]\n  verbs: [\"get\", \"update\", \"delete\"]\n  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.\n- apiGroups: [\"\"]\n  resources: [\"configmaps\"]\n  resourceNames: [\"kubernetes-dashboard-settings\"]\n  verbs: [\"get\", \"update\"]\n  # Allow Dashboard to get metrics from heapster.\n- apiGroups: [\"\"]\n  resources: [\"services\"]\n  resourceNames: [\"heapster\"]\n  verbs: [\"proxy\"]\n- apiGroups: [\"\"]\n  resources: [\"services/proxy\"]\n  resourceNames: [\"heapster\", \"http:heapster:\", \"https:heapster:\"]\n  verbs: [\"get\"]\n\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: 
kubernetes-dashboard-minimal\n  namespace: kube-system\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: kubernetes-dashboard-minimal\nsubjects:\n- kind: ServiceAccount\n  name: kubernetes-dashboard\n  namespace: kube-system\n\n---\n# ------------------- Dashboard Deployment ------------------- #\n\nkind: Deployment\napiVersion: apps/v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kube-system\nspec:\n  replicas: 1\n  revisionHistoryLimit: 10\n  selector:\n    matchLabels:\n      k8s-app: kubernetes-dashboard\n  template:\n    metadata:\n      labels:\n        k8s-app: kubernetes-dashboard\n    spec:\n      containers:\n      - name: kubernetes-dashboard\n        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\n        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1\n        ports:\n        - containerPort: 8443\n          protocol: TCP\n        args:\n          - --auto-generate-certificates\n          # Uncomment the following line to manually specify Kubernetes API server Host\n          # If not specified, Dashboard will attempt to auto discover the API server and connect\n          # to it. Uncomment only if the default does not work.\n          # - --apiserver-host=http://my-address:port\n        volumeMounts:\n        - name: kubernetes-dashboard-certs\n          mountPath: /certs\n          # Create on-disk volume to store exec logs\n        - mountPath: /tmp\n          name: tmp-volume\n        livenessProbe:\n          httpGet:\n            scheme: HTTPS\n            path: /\n            port: 8443\n          initialDelaySeconds: 30\n          timeoutSeconds: 30\n      volumes:\n      - name: kubernetes-dashboard-certs\n        secret:\n          secretName: kubernetes-dashboard-certs\n      - name: tmp-volume\n        emptyDir: {}\n      serviceAccountName: kubernetes-dashboard\n      # Comment the following tolerations if Dashboard must not be deployed on master\n      tolerations:\n      - key: node-role.kubernetes.io/master\n        effect: NoSchedule\n\n---\n# ------------------- Dashboard Service ------------------- #\n\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kube-system\nspec:\n  type: NodePort  # 新增这一行，指定为NodePort方式\n  ports:\n    - port: 443\n      targetPort: 8443\n      nodePort: 32370  #新增这一行，指定固定node端口\n  selector:\n    k8s-app: kubernetes-dashboard\nEOF\n\nkubectl apply -f kubernetes-dashboard.yaml\n```\n\n4、然后创建一个具有全局所有权限的用户来登录Dashboard：(admin.yaml)\n```\ncat > admin.yaml << \\EOF\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: admin\n  annotations:\n    rbac.authorization.kubernetes.io/autoupdate: \"true\"\nroleRef:\n  kind: ClusterRole\n  name: cluster-admin\n  apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n  name: admin\n  namespace: kube-system\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: admin\n  namespace: kube-system\n  labels:\n    kubernetes.io/cluster-service: \"true\"\n    addonmanager.kubernetes.io/mode: Reconcile\nEOF\n\nkubectl apply -f admin.yaml\n\nkubectl delete -f admin.yaml\n\n#获取token\nkubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin | awk '{print $1}')\n```\n5、访问测试 `https://nodeip:32370`\n\n\n## 2、使用Ingress方式访问\n\n```bash\n#清理NodePort方式的dashboard\nkubectl delete -f kubernetes-dashboard.yaml\n\nrm -f 
kubernetes-dashboard.yaml\n\nwget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml\n\nkubectl apply -n kube-system -f kubernetes-dashboard.yaml\n```\n\n1、创建和安装加密访问凭证\n\n通过https进行访问必需要使用证书和密钥，在Kubernetes中可以通过配置一个加密凭证（TLS secret）来提供。\n\n```bash\n#1、创建 tls secret\n\n#这里只是拿来自己使用，创建一个自己签名的证书。如果是公共服务，建议去数字证书颁发机构去申请一个正式的数字证书（需要一些服务费用）；或者使用Let's encrypt去申请一个免费的（后面有介绍）；如果使用Cloudflare可以自动生成证书和https转接服务，但是需要将域名迁移过去，高级功能是收费的。\n#https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/tls/README.md\n\nkubectl delete secret k8s-dashboard-secret -n kube-system\n\nrm -rf /etc/certs/ssl/\n\nmkdir -p /etc/certs/ssl/default\ncd /etc/certs/ssl/default/\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_default.key -out tls_default.crt -subj \"/CN=dashboard.devops.com\"\n\n#将会产生两个文件tls_default.key和tls_default.crt，你可以改成自己的文件名或放在特定的目录下（如果你是为公共服务器创建的，请保证这个不会被别人访问到）。后面的dashboard.devops.com是我的服务器IP地址，你可以改成自己的。\n```\n2、安装 tls secret\n\n```bash\n#下一步，将这两个文件的信息创建为一个Kubernetes的secret访问凭证，我将名称指定为 k8s-dashboard-secret ，这在后面的Ingress配置时将会用到。如果你修改了这个名字，注意后面的Ingress配置yaml文件也需要同步修改。\n\ncd /etc/certs/ssl/default/\n\nkubectl -n kube-system delete secret k8s-dashboard-secret\n\nkubectl -n kube-system create secret tls k8s-dashboard-secret --key=tls_default.key --cert=tls_default.crt\n\n#查看证书\nkubectl get secret k8s-dashboard-secret -n kube-system\n\nkubectl describe secret k8s-dashboard-secret -n kube-system\n\n#注意：\n    #上面命令的参数 -n 指定凭证安装的命名空间。\n    #为了安全考虑，Ingress所有的资源（凭证、路由、服务）必须在同一个命名空间。\n```\n\n3、配置Ingress 路由\n\n```bash\n#将下面的内容保存为文件dashboard-ingress.yaml。里面的 / 设定为访问Kubernetes dashboard服务，/web 只是为了测试和占位，如果没有安装nginx，将会返回找不到服务的消息。\n\ncat >dashboard-ingress.yaml<<\\EOF\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: k8s-dashboard\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  tls:\n   - secretName: traefik-cert   #注意这里需要跟traefik.toml文件设置的证书挂钩\n   #- secretName: k8s-dashboard-secret\n  rules:\n   - host: dashboard.devops.com\n     http:\n      paths:\n      - path: /\n        backend:\n          serviceName: kubernetes-dashboard\n          servicePort: 443\nEOF\n\nkubectl apply -n kube-system -f dashboard-ingress.yaml\n\n#注意\n    #上面的annotations部分是必须的，以提供https和https service的支持。不过，不同的Ingress Controller可能的实现（或版本）有所不同，需要安装相应的实现（版本）进行设置。\n    \n    #参见，#issue:https://github.com/kubernetes/ingress-nginx/issues/2460\n```\n\n\n参考资料：\n\nhttps://my.oschina.net/u/2306127/blog/1930169?from=timeline&isappinstalled=0  Kubernetes dashboard 通过 Ingress 提供HTTPS访问\n"
  },
  {
    "path": "components/external-storage/0、nfs服务端搭建.md",
    "content": "## 一、nfs服务端\n```bash\n#所有节点安装nfs\nyum install -y nfs-utils rpcbind\n\n#创建nfs目录\nmkdir -p /nfs/data/\n\n#修改权限\nchmod -R 666 /nfs/data\n\n#编辑export文件\nvim /etc/exports\n/nfs/data 192.168.56.0/24(rw,async,no_root_squash)\n#如果设置为 /nfs/data *(rw,async,no_root_squash) 则对所以的IP都有效\n\n常用选项：\n   ro：客户端挂载后，其权限为只读，默认选项；\n   rw:读写权限；\n   sync：同时将数据写入到内存与硬盘中；\n   async：异步，优先将数据保存到内存，然后再写入硬盘；\n   Secure：要求请求源的端口小于1024\n用户映射：\n   root_squash:当NFS客户端使用root用户访问时，映射到NFS服务器的匿名用户；\n   no_root_squash:当NFS客户端使用root用户访问时，映射到NFS服务器的root用户；\n   all_squash:全部用户都映射为服务器端的匿名用户；\n   anonuid=UID：将客户端登录用户映射为此处指定的用户uid；\n   anongid=GID：将客户端登录用户映射为此处指定的用户gid\n\n#配置生效\nexportfs -r\n\n#查看生效\nexportfs\n\n#启动rpcbind、nfs服务\nsystemctl restart rpcbind && systemctl enable rpcbind\nsystemctl restart nfs && systemctl enable nfs\n\n#查看 RPC 服务的注册状况  (注意/etc/hosts.deny 里面需要放开以下服务)\n$ rpcinfo -p localhost      \n   program vers proto   port  service\n    100000    4   tcp    111  portmapper\n    100000    3   tcp    111  portmapper\n    100000    2   tcp    111  portmapper\n    100000    4   udp    111  portmapper\n    100000    3   udp    111  portmapper\n    100000    2   udp    111  portmapper\n    100005    1   udp  20048  mountd\n    100005    1   tcp  20048  mountd\n    100005    2   udp  20048  mountd\n    100005    2   tcp  20048  mountd\n    100005    3   udp  20048  mountd\n    100005    3   tcp  20048  mountd\n    100024    1   udp  34666  status\n    100024    1   tcp   7951  status\n    100003    3   tcp   2049  nfs\n    100003    4   tcp   2049  nfs\n    100227    3   tcp   2049  nfs_acl\n    100003    3   udp   2049  nfs\n    100003    4   udp   2049  nfs\n    100227    3   udp   2049  nfs_acl\n    100021    1   udp  31088  nlockmgr\n    100021    3   udp  31088  nlockmgr\n    100021    4   udp  31088  nlockmgr\n    100021    1   tcp  27131  nlockmgr\n    100021    3   tcp  27131  nlockmgr\n    100021    4   tcp  27131  nlockmgr\n\n#修改/etc/hosts.allow放开rpcbind(nfs服务端和客户端都要加上)\nchattr -i /etc/hosts.allow\necho \"nfsd:all\" >>/etc/hosts.allow\necho \"rpcbind:all\" >>/etc/hosts.allow\necho \"mountd:all\" >>/etc/hosts.allow\nchattr +i /etc/hosts.allow\n\n#showmount测试\nshowmount -e 192.168.56.11\n\n#tcpdmatch测试\n$ tcpdmatch rpcbind 192.168.56.11\nclient:   address  192.168.56.11\nserver:   process  rpcbind\naccess:   granted\n```\n\n## 二、nfs客户端\n```bash\nyum install -y nfs-utils rpcbind\n\n#客户端创建目录，然后执行挂载\nmkdir -p /mnt/nfs   #(注意挂载成功后，/mnt下原有数据将会被隐藏，无法找到)\n\nmount -t nfs -o nolock,vers=4 192.168.56.11:/nfs/data /mnt/nfs\n```\n\n## 三、挂载nfs\n```bash\n#或者直接写到/etc/fstab文件中\nvim /etc/fstab\n192.168.56.11:/nfs/data /mnt/nfs/ nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0\n\n#挂载\nmount -a\n\n#卸载挂载\numount /mnt/nfs\n\n#查看nfs服务端信息\nnfsstat -s\n\n#查看nfs客户端信息\nnfsstat -c\n```\n\n参考文档：\n\nhttp://www.mydlq.club/article/3/  CentOS7 搭建 NFS 服务器\n\nhttps://blog.rot13.org/2012/05/rpcbind-is-new-portmap-or-how-to-make-nfs-secure.html   \n\nhttps://yq.aliyun.com/articles/694065\n\nhttps://www.crifan.com/linux_fstab_and_mount_nfs_syntax_and_parameter_meaning/  Linux中fstab的语法和参数含义和mount NFS时相关参数含义\n"
  },
  {
    "path": "components/external-storage/1、k8s的pv和pvc简述.md",
    "content": "# 一、PV（PersistentVolume）\n\nPersistentVolume (PV) 是外部存储系统中的一块存储空间，由管理员创建和维护。与 Volume 一样，PV 具有持久性，生命周期独立于 Pod。\n\n1、PV和PVC是一一对应关系，当有PV被某个PVC所占用时，会显示banding，其它PVC不能再使用绑定过的PV。\n\n2、PVC一旦绑定PV，就相当于是一个存储卷，此时PVC可以被多个Pod所使用。（PVC支不支持被多个Pod访问，取决于访问模型accessMode的定义）。\n\n3、PVC若没有找到合适的PV时，则会处于pending状态。\n\n4、PV的reclaim policy选项：\n\n    默认是Retain保留，保留生成的数据。\n    可以改为recycle回收，删除生成的数据，回收pv\n    delete，删除，pvc解除绑定后，pv也就自动删除。\n\n# 二、PVC\n\nPersistentVolumeClaim (PVC) 是对 PV 的申请 (Claim)。PVC 通常由普通用户创建和维护。需要为 Pod 分配存储资源时，用户可以创建一个 PVC，指明存储资源的容量大小和访问模式（比如只读）等信息，Kubernetes 会查找并提供满足条件的 PV。\n\n有了 PersistentVolumeClaim，用户只需要告诉 Kubernetes 需要什么样的存储资源，而不必关心真正的空间从哪里分配，如何访问等底层细节信息。这些 Storage Provider 的底层信息交给管理员来处理，只有管理员才应该关心创建 PersistentVolume 的细节信息。\n\n## PVC资源需要指定：\n\n1、accessMode：访问模型；对象列表：\n\n    ReadWriteOnce – the volume can be mounted as read-write by a single node：  RWO - ReadWriteOnce  一人读写\n    ReadOnlyMany – the volume can be mounted read-only by many nodes：          ROX - ReadOnlyMany   多人只读\n    ReadWriteMany – the volume can be mounted as read-write by many nodes：     RWX - ReadWriteMany  多人读写\n    \n2、resource：资源限制（比如：定义5GB空间，我们期望对应的存储空间至少5GB。）  \n\n3、selector：标签选择器。不加标签，就会在所有PV找最佳匹配。\n\n4、storageClassName：存储类名称：\n\n5、volumeMode：指后端存储卷的模式。可以用于做类型限制，哪种类型的PV可以被当前claim所使用。\n\n6、volumeName：卷名称，指定后端PVC（相当于绑定）\n\n   \n# 三、两者差异\n\n1、PV是属于集群级别的，不能定义在名称空间中\n\n2、PVC时属于名称空间级别的。\n\n参考文档：\n\nhttps://blog.csdn.net/weixin_42973226/article/details/86501693  基于rook-ceph部署wordpress\n\nhttps://www.cnblogs.com/benjamin77/p/9944268.html  k8s的持久化存储PV&&PVC\n"
  },
  {
    "path": "components/external-storage/2、静态配置PV和PVC.md",
    "content": "Table of Contents\n=================\n\n   * [一、环境介绍](#一环境介绍)\n   * [二、PV操作](#二pv操作)\n      * [01、创建PV卷](#01创建pv卷)\n      * [02、PV配置参数介绍](#02pv配置参数介绍)\n      * [03、创建PV资源](#03创建pv资源)\n      * [04、查看PV](#04查看pv)\n   * [三、PVC操作](#三pvc操作)\n      * [01、创建PVC资源](#01创建pvc资源)\n      * [02、查看PVC/PV](#02查看pvcpv)\n   * [四、Pod中使用存储](#四pod中使用存储)\n   * [五、验证](#五验证)\n      * [01、验证PV是否可用](#01验证pv是否可用)\n      * [02、进入pod查看挂载情况](#02进入pod查看挂载情况)\n      * [03、删除pod](#03删除pod)\n      * [04、继续删除pvc](#04继续删除pvc)\n      * [05、继续删除pv](#05继续删除pv)\n      \n# 一、环境介绍\n\n作为准备工作，我们已经在 k8s同一局域内网节点上搭建了一个 NFS 服务器，目录为 /data/nfs, pv是全局的，pvc可以指定namespace。\n\n# 二、PV操作\n\n## 01、创建PV卷\n\n```bash\n# 创建pv卷对应的目录\nmkdir -p /data/nfs/pv001\nmkdir -p /data/nfs/pv002\n\n# 配置exportrs\n$ vim /etc/exports\n/data/nfs *(rw,no_root_squash,sync,insecure)\n/data/nfs/pv001 *(rw,no_root_squash,sync,insecure)\n/data/nfs/pv002 *(rw,no_root_squash,sync,insecure)\n\n# 配置生效\nexportfs -r\n\n# 重启rpcbind、nfs服务\nsystemctl restart rpcbind && systemctl restart nfs\n\n# 查看挂载点\n$ showmount -e localhost\nExport list for localhost:\n/data/nfs/pv002 *\n/data/nfs/pv001 *\n/data/nfs       *\n```\n\n## 02、PV配置参数介绍\n\n```bash\n配置说明：\n\n① capacity 指定 PV 的容量为 20G。\n\n② accessModes 指定访问模式为 ReadWriteOnce，支持的访问模式有：\n    ReadWriteOnce – PV 能以 read-write 模式 mount 到单个节点。\n    ReadOnlyMany – PV 能以 read-only 模式 mount 到多个节点。\n    ReadWriteMany – PV 能以 read-write 模式 mount 到多个节点。\n    \n③ persistentVolumeReclaimPolicy 指定当 PV 的回收策略为 Recycle，支持的策略有：\n    Retain – 就是保留现场，K8S什么也不做，需要管理员手动去处理PV里的数据，处理完后，再手动删除PV\n    Recycle – K8S会将PV里的数据删除，然后把PV的状态变成Available，又可以被新的PVC绑定使用\n    Delete – K8S会自动删除该PV及里面的数据\n    \n④ storageClassName 指定 PV 的 class 为 nfs。相当于为 PV 设置了一个分类，PVC 可以指定 class 申请相应 class 的 PV。\n\n⑤ 指定 PV 在 NFS 服务器上对应的目录。\n\n一般来说，PV和PVC的生命周期分为5个阶段：\n    Provisioning，即PV的创建，可以直接创建PV（静态方式），也可以使用StorageClass动态创建\n    Binding，将PV分配给PVC\n    Using，Pod通过PVC使用该Volume\n    Releasing，Pod释放Volume并删除PVC\n    Reclaiming，回收PV，可以保留PV以便下次使用，也可以直接从云存储中删除\n\n根据这5个阶段，Volume的状态有以下4种：\n    Available：可用\n    Bound：已经分配给PVC\n    Released：PVC解绑但还未执行回收策略\n    Failed：发生错误\n\n变成Released的PV会根据定义的回收策略做相应的回收工作。有三种回收策略：\n    Retain 就是保留现场，K8S什么也不做，等待用户手动去处理PV里的数据，处理完后，再手动删除PV\n    Delete K8S会自动删除该PV及里面的数据\n    Recycle K8S会将PV里的数据删除，然后把PV的状态变成Available，又可以被新的PVC绑定使用\n```\n\n## 03、创建PV资源\n\n1、nfs-pv001.yaml\n\n```bash\n# 清理pv资源\nkubectl delete -f nfs-pv001.yaml\n\n# 编写pv资源文件\ncat > nfs-pv001.yaml <<\\EOF\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv001\n  labels:\n    pv: nfs-pv001\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteOnce\n  persistentVolumeReclaimPolicy: Recycle\n  storageClassName: nfs\n  nfs:\n    path: /data/nfs/pv001\n    server: 192.168.56.11\nEOF\n\n# 部署pv到集群中\nkubectl apply -f nfs-pv001.yaml\n```\n\n2、nfs-pv002.yaml\n\n```bash\n# 清理pv资源\nkubectl delete -f nfs-pv002.yaml\n\n# 编写pv资源文件\ncat > nfs-pv002.yaml <<\\EOF\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv002\n  labels:\n    pv: nfs-pv002\nspec:\n  capacity:\n    storage: 30Gi\n  accessModes:\n    - ReadWriteOnce\n  persistentVolumeReclaimPolicy: Recycle\n  storageClassName: nfs\n  nfs:\n    path: /data/nfs/pv002\n    server: 192.168.56.11\nEOF\n\n# 部署pv到集群中\nkubectl apply -f nfs-pv002.yaml\n```\n\n## 04、查看PV\n\n```bash\n# 查看pv\n$ kubectl get pv\nNAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE\nnfs-pv001     20Gi       RWO            Recycle          Available           nfs                      
68s\nnfs-pv002     30Gi       RWO            Recycle          Available           nfs                      33s\n\n#STATUS 为 Available，表示 pv 就绪，可以被 PVC 申请。\n```\n\n# 三、PVC操作\n\n## 01、创建PVC资源\n\n接下来创建2个名为pvc001和pvc002的PVC，配置文件 nfs-pvc001.yaml 如下：\n\n1、nfs-pvc001.yaml\n\n```bash\n# 清理pvc资源\nkubectl delete -f nfs-pvc001.yaml\n\n# 编写pvc资源文件\ncat > nfs-pvc001.yaml <<\\EOF           \napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: nfs-pvc001\nspec:\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 20Gi\n  storageClassName: nfs\n  selector:\n    matchLabels:\n      pv: nfs-pv001\nEOF\n\n# 部署pvc到集群中\nkubectl apply -f nfs-pvc001.yaml\n```\n\n2、nfs-pvc002.yaml\n\n```bash\n# 清理pvc资源\nkubectl delete -f nfs-pvc002.yaml\n\n# 编写pvc资源文件\ncat > nfs-pvc002.yaml <<\\EOF           \napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: nfs-pvc002\nspec:\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 30Gi\n  storageClassName: nfs\n  selector:\n    matchLabels:\n      pv: nfs-pv002\nEOF\n\n# 部署pvc到集群中\nkubectl apply -f nfs-pvc002.yaml\n```\n\n## 02、查看PVC/PV\n\n```bash\n$ kubectl get pvc --show-labels\nNAME            STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE\nnfs-pvc001      Bound    nfs-pv001       20Gi       RWO            nfs            18s\nnfs-pvc002      Bound    nfs-pv002       30Gi       RWO            nfs            7s\n\n$ kubectl get pv --show-labels\nNAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE\nnfs-pv001     20Gi       RWO            Recycle          Bound    default/nfs-pvc001   nfs                     17m\nnfs-pv002     30Gi       RWO            Recycle          Bound    default/nfs-pvc002   nfs                     17m\n\n# 从 kubectl get pvc 和 kubectl get pv 的输出可以看到 pvc001 和pvc002分别绑定到pv001和pv002，申请成功。注意pvc绑定到对应pv通过labels标签方式实现，也可以不指定，将随机绑定到pv。\n```\n\n# 四、Pod中使用存储\n\n```与使用普通 Volume 的格式类似，在 volumes 中通过 persistentVolumeClaim 指定使用nfs-pvc001和nfs-pvc002申请的 Volume。```\n\n1、nfs-pod001.yaml \n\n```bash\n# 清理pod资源\nkubectl delete -f nfs-pod001.yaml\n\n# 编写pod资源文件\ncat > nfs-pod001.yaml <<\\EOF\nkind: Pod\napiVersion: v1\nmetadata:\n  name: nfs-pod001\nspec:\n  containers:\n    - name: myfrontend\n      image: nginx\n      volumeMounts:\n      - mountPath: \"/var/www/html\"\n        name: nfs-pv001\n  volumes:\n    - name: nfs-pv001\n      persistentVolumeClaim:\n        claimName: nfs-pvc001\nEOF\n\n# 创建pod资源\nkubectl apply -f nfs-pod001.yaml\n\n```\n\n2、nfs-pod002.yaml\n\n```bash\n# 清理pod资源\nkubectl delete -f nfs-pod002.yaml\n\n# 编写pod资源文件\ncat > nfs-pod002.yaml <<\\EOF\nkind: Pod\napiVersion: v1\nmetadata:\n  name: nfs-pod002\nspec:\n  containers:\n    - name: myfrontend\n      image: nginx\n      volumeMounts:\n      - mountPath: \"/var/www/html\"\n        name: nfs-pv002\n  volumes:\n    - name: nfs-pv002\n      persistentVolumeClaim:\n        claimName: nfs-pvc002\nEOF\n\n# 创建pod资源\nkubectl apply -f nfs-pod002.yaml\n```\n\n# 五、验证\n\n## 01、验证PV是否可用\n\n```bash\n# 进入到pod创建文件\nkubectl exec nfs-pod001 touch /var/www/html/index001.html\nkubectl exec nfs-pod002 touch /var/www/html/index002.html\n\n# 登录到nfs-server上面查看文件是否创建成功\n$ ls /data/nfs/pv001/\nindex001.html\n\n$ ls /data/nfs/pv002/\nindex002.html\n```\n\n## 02、进入pod查看挂载情况\n\n```bash\n# 验证pod001的挂载\n$ kubectl exec -it nfs-pod001 /bin/bash\n$ root@nfs-pod001:/# df -h\nFilesystem                   Size  Used Avail Use% Mounted on\noverlay                      711G   
85G  627G  12% /\ntmpfs                         64M     0   64M   0% /dev\ntmpfs                         16G     0   16G   0% /sys/fs/cgroup\n/dev/sda3                    711G   85G  627G  12% /etc/hosts\nshm                           64M     0   64M   0% /dev/shm\n192.168.56.11:/data/nfs/pv001  932G  620M  931G   1% /var/www/html\n\n# 验证pod002的挂载\n$ kubectl exec -it nfs-pod002 /bin/bash\n$ root@nfs-pod002:/# df -h\nFilesystem                   Size  Used Avail Use% Mounted on\noverlay                      711G   85G  627G  12% /\ntmpfs                         64M     0   64M   0% /dev\ntmpfs                         16G     0   16G   0% /sys/fs/cgroup\n/dev/sda3                    711G   85G  627G  12% /etc/hosts\nshm                           64M     0   64M   0% /dev/shm\n192.168.56.11:/data/nfs/pv002  932G  620M  931G   1% /var/www/html\n```\n\n## 03、删除pod\n\npv和pvc不会被删除，nfs存储的数据不会被删除\n\n```bash\n$ kubectl delete -f nfs-pod001.yaml \npod \"nfs-pod001\" deleted\n\n$ kubectl get pv\nNAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS    REASON   AGE\nnfs-pv001       20Gi       RWO            Recycle          Bound    default/nfs-pvc001     nfs                      13m\nnfs-pv002       30Gi       RWO            Recycle          Bound    default/nfs-pvc002     nfs                      13m\n\n$ kubectl get pvc\nNAME            STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS    AGE\nnfs-pvc001      Bound    nfs-pv001       20Gi       RWO            nfs             13m\nnfs-pvc002      Bound    nfs-pv002       30Gi       RWO            nfs             13m\n```\n## 04、继续删除pvc\n\npv将被释放,处于 Available 可用状态，并且nfs存储中的数据被删除。\n\n```bash\n$ kubectl delete -f nfs-pvc001.yaml \npersistentvolumeclaim \"nfs-pvc001\" deleted\n\n$ kubectl get pv\nNAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                   STORAGECLASS    REASON   AGE\nnfs-pv001       20Gi       RWO            Recycle          Available                           nfs                      18m\nnfs-pv002       30Gi       RWO            Recycle          Bound       default/nfs-pvc002      nfs                      18m\n\n$ ls /nfs/data/pv001/  # 文件不存在\n```\n\n## 05、继续删除pv\n\n```bash\n$ kubectl delete -f nfs-pv001.yaml\npersistentvolume \"nfs-pv001\" deleted\n```\n\n参考文档：\n\nhttps://blog.csdn.net/networken/article/details/86697018  kubernetes部署NFS持久存储\n"
  },
  {
    "path": "components/external-storage/3、动态申请PV卷.md",
    "content": "Table of Contents\n=================\n\n   * [Kubernetes 中部署 NFS Provisioner 为 NFS 提供动态分配卷](#kubernetes-中部署-nfs-provisioner-为-nfs-提供动态分配卷)\n      * [一、NFS Provisioner 简介](#一nfs-provisioner-简介)\n      * [二、External NFS驱动的工作原理](#二external-nfs驱动的工作原理)\n         * [1、nfs-client](#1nfs-client)\n         * [2、nfs](#2nfs)\n      * [三、部署服务](#三部署服务)\n         * [1、配置授权](#1配置授权)\n         * [2、部署nfs-client-provisioner](#2部署nfs-client-provisioner)\n         * [3、部署NFS Provisioner](#3部署nfs-provisioner)\n         * [4、创建StorageClass](#4创建storageclass)\n      * [四、创建PVC](#四创建pvc)\n         * [01、创建一个新的namespace，然后创建pvc资源](#01创建一个新的namespace然后创建pvc资源)\n      * [五、创建测试Pod](#五创建测试pod)\n         * [01、进入 NFS Server 服务器验证是否创建对应文件](#01进入-nfs-server-服务器验证是否创建对应文件)\n         \n# Kubernetes 中部署 NFS Provisioner 为 NFS 提供动态分配卷\n\n## 一、NFS Provisioner 简介\n\nNFS Provisioner 是一个自动配置卷程序，它使用现有的和已配置的 NFS 服务器来支持通过持久卷声明动态配置 Kubernetes 持久卷。\n\n- 持久卷被配置为：namespace−{pvcName}-${pvName}。\n\n## 二、External NFS驱动的工作原理\n\nK8S的外部NFS驱动，可以按照其工作方式（是作为NFS server还是NFS client）分为两类：\n\n### 1、nfs-client\n\n- 也就是我们接下来演示的这一类，它通过K8S的内置的NFS驱动挂载远端的NFS服务器到本地目录；然后将自身作为storage provider，关联storage class。当用户创建对应的PVC来申请PV时，该provider就将PVC的要求与自身的属性比较，一旦满足就在本地挂载好的NFS目录中创建PV所属的子目录，为Pod提供动态的存储服务。\n\n### 2、nfs\n\n- 与nfs-client不同，该驱动并不使用k8s的NFS驱动来挂载远端的NFS到本地再分配，而是直接将本地文件映射到容器内部，然后在容器内使用ganesha.nfsd来对外提供NFS服务；在每次创建PV的时候，直接在本地的NFS根目录中创建对应文件夹，并export出该子目录。利用NFS动态提供Kubernetes后端存储卷\n\n- 本文将介绍使用nfs-client-provisioner这个应用，利用NFS Server给Kubernetes作为持久存储的后端，并且动态提供PV。前提条件是有已经安装好的NFS服务器，并且NFS服务器与Kubernetes的Slave节点都能网络连通。将nfs-client驱动做一个deployment部署到K8S集群中，然后对外提供存储服务。\n\n`nfs-client-provisioner` 是一个Kubernetes的简易NFS的外部 provisioner，本身不提供NFS，需要现有的NFS服务器提供存储\n\n## 三、部署服务\n\n### 1、配置授权\n\n现在的 Kubernetes 集群大部分是基于 RBAC 的权限控制，所以创建一个一定权限的 ServiceAccount 与后面要创建的 “NFS Provisioner” 绑定，赋予一定的权限。\n\n```bash\n# 清理rbac授权\nkubectl delete -f nfs-rbac.yaml -n kube-system\n\n# 编写yaml\ncat >nfs-rbac.yaml<<-EOF\n---\nkind: ServiceAccount\napiVersion: v1\nmetadata:\n  name: nfs-client-provisioner\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: nfs-client-provisioner-runner\nrules:\n  - apiGroups: [\"\"]\n    resources: [\"persistentvolumes\"]\n    verbs: [\"get\", \"list\", \"watch\", \"create\", \"delete\"]\n  - apiGroups: [\"\"]\n    resources: [\"persistentvolumeclaims\"]\n    verbs: [\"get\", \"list\", \"watch\", \"update\"]\n  - apiGroups: [\"storage.k8s.io\"]\n    resources: [\"storageclasses\"]\n    verbs: [\"get\", \"list\", \"watch\"]\n  - apiGroups: [\"\"]\n    resources: [\"events\"]\n    verbs: [\"create\", \"update\", \"patch\"]\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: run-nfs-client-provisioner\nsubjects:\n  - kind: ServiceAccount\n    name: nfs-client-provisioner\n    namespace: kube-system\nroleRef:\n  kind: ClusterRole\n  name: nfs-client-provisioner-runner\n  apiGroup: rbac.authorization.k8s.io\n---\nkind: Role\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: leader-locking-nfs-client-provisioner\nrules:\n  - apiGroups: [\"\"]\n    resources: [\"endpoints\"]\n    verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\"]\n---\nkind: RoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: leader-locking-nfs-client-provisioner\nsubjects:\n  - kind: ServiceAccount\n    name: nfs-client-provisioner\n    # replace with namespace where provisioner is deployed\n    namespace: kube-system\nroleRef:\n  kind: Role\n  name: 
leader-locking-nfs-client-provisioner\n  apiGroup: rbac.authorization.k8s.io\nEOF\n\n# 应用授权\nkubectl apply -f nfs-rbac.yaml -n kube-system\n```\n\n### 2、部署nfs-client-provisioner\n\n首先克隆仓库获取yaml文件\n```\ngit clone https://github.com/kubernetes-incubator/external-storage.git\ncp -R external-storage/nfs-client/deploy/ /root/\ncd deploy\n```\n### 3、部署NFS Provisioner\n\n修改deployment.yaml文件,这里修改的参数包括NFS服务器所在的IP地址（10.198.1.155），以及NFS服务器共享的路径（/data/nfs/），两处都需要修改为你实际的NFS服务器和共享目录。另外修改nfs-client-provisioner镜像从七牛云拉取。\n\n设置 NFS Provisioner 部署文件，这里将其部署到 “kube-system” Namespace 中。\n\n```bash\n# 清理NFS Provisioner资源\nkubectl delete -f nfs-provisioner-deploy.yaml -n kube-system\n\nexport NFS_ADDRESS='10.198.1.155'\nexport NFS_DIR='/data/nfs'\n\n# 编写deployment.yaml\ncat >nfs-provisioner-deploy.yaml<<-EOF\n---\nkind: Deployment\napiVersion: apps/v1\nmetadata:\n  name: nfs-client-provisioner\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: nfs-client-provisioner\n  strategy:\n    type: Recreate  #---设置升级策略为删除再创建(默认为滚动更新)\n  template:\n    metadata:\n      labels:\n        app: nfs-client-provisioner\n    spec:\n      serviceAccountName: nfs-client-provisioner\n      containers:\n        - name: nfs-client-provisioner\n          #---由于quay.io仓库国内被墙，所以替换成七牛云的仓库\n          #image: quay-mirror.qiniu.com/external_storage/nfs-client-provisioner:latest\n          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest\n          volumeMounts:\n            - name: nfs-client-root\n              mountPath: /persistentvolumes\n          env:\n            - name: PROVISIONER_NAME\n              value: nfs-client  #---nfs-provisioner的名称，以后设置的storageclass要和这个保持一致\n            - name: NFS_SERVER\n              value: ${NFS_ADDRESS}  #---NFS服务器地址，和 valumes 保持一致\n            - name: NFS_PATH\n              value: ${NFS_DIR}  #---NFS服务器目录，和 valumes 保持一致\n      volumes:\n        - name: nfs-client-root\n          nfs:\n            server: ${NFS_ADDRESS}  #---NFS服务器地址\n            path: ${NFS_DIR} #---NFS服务器目录\nEOF\n\n# 部署deployment.yaml\nkubectl apply -f nfs-provisioner-deploy.yaml -n kube-system\n\n# 查看创建的pod\nkubectl get pod -o wide -n kube-system|grep nfs-client\n\n# 查看pod日志\nkubectl logs -f `kubectl get pod -o wide -n kube-system|grep nfs-client|awk '{print $1}'` -n kube-system\n```\n\n### 4、创建StorageClass\n\nstorage class的定义，需要注意的是：provisioner属性要等于驱动所传入的环境变量`PROVISIONER_NAME`的值。否则，驱动不知道知道如何绑定storage class。\n此处可以不修改，或者修改provisioner的名字，需要与上面的deployment的`PROVISIONER_NAME`名字一致。\n\n```bash\n# 清理storageclass资源\nkubectl delete -f nfs-storage.yaml\n\n# 编写yaml\ncat >nfs-storage.yaml<<-EOF\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: nfs-storage\n  annotations:\n    storageclass.kubernetes.io/is-default-class: \"true\"  #---设置为默认的storageclass\nprovisioner: nfs-client  #---动态卷分配者名称，必须和上面创建的\"PROVISIONER_NAME\"变量中设置的Name一致\nparameters:\n  archiveOnDelete: \"true\"  #---设置为\"false\"时删除PVC不会保留数据,\"true\"则保留数据\nmountOptions: \n  - hard        #指定为硬挂载方式\n  - nfsvers=4   #指定NFS版本，这个需要根据 NFS Server 版本号设置\nEOF\n\n#部署class.yaml\nkubectl apply -f nfs-storage.yaml\n\n#查看创建的storageclass(这里可以看到nfs-storage已经变为默认的storageclass了)\n$ kubectl get sc\nNAME                    PROVISIONER      AGE\nnfs-storage (default)   nfs-client       3m38s\n```\n\n## 四、创建PVC\n\n### 01、创建一个新的namespace，然后创建pvc资源\n\n```bash\n# 删除命令空间\nkubectl delete ns kube-public\n\n# 创建命名空间\nkubectl create ns kube-public\n\n# 清理pvc\nkubectl delete -f test-claim.yaml -n kube-public\n\n# 编写yaml\ncat 
>test-claim.yaml<<\\EOF\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: test-claim\nspec:\n  storageClassName: nfs-storage #---需要与上面创建的storageclass的名称一致\n  accessModes:\n    - ReadWriteMany\n  resources:\n    requests:\n      storage: 100Gi\nEOF\n\n#创建PVC\nkubectl apply -f test-claim.yaml -n kube-public\n\n#查看创建的PV和PVC\n$ kubectl get pvc -n kube-public\nNAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE\ntest-claim   Bound    pvc-593f241f-a75f-459a-af18-a672e5090921   100Gi      RWX            nfs-storage    3s\n\nkubectl get pv\n\n#然后，我们进入到NFS的export目录，可以看到对应该volume name的目录已经创建出来了。其中volume的名字是namespace，PVC name以及uuid的组合：\n\n#注意，出现pvc在pending的原因可能为nfs-client-provisioner pod 出现了问题，删除重建的时候会出现镜像问题\n```\n\n## 五、创建测试Pod\n\n```bash\n# 清理资源\nkubectl delete -f test-pod.yaml -n kube-public\n\n# 编写yaml\ncat > test-pod.yaml <<\\EOF\nkind: Pod\napiVersion: v1\nmetadata:\n  name: test-pod\nspec:\n  containers:\n  - name: test-pod\n    image: busybox:latest\n    command:\n      - \"/bin/sh\"\n    args:\n      - \"-c\"\n      - \"touch /mnt/SUCCESS && exit 0 || exit 1\"\n    volumeMounts:\n      - name: nfs-pvc\n        mountPath: \"/mnt\"\n  restartPolicy: \"Never\"\n  volumes:\n    - name: nfs-pvc\n      persistentVolumeClaim:\n        claimName: test-claim\nEOF\n\n#创建pod\nkubectl apply -f test-pod.yaml -n kube-public\n\n#查看创建的pod\nkubectl get pod -o wide -n kube-public\n```\n\n### 01、进入 NFS Server 服务器验证是否创建对应文件\n\n进入 NFS Server 服务器的 NFS 挂载目录，查看是否存在 Pod 中创建的文件：\n\n```bash\n$ cd /data/nfs/\n$ ls\narchived-kube-public-test-claim-pvc-2dd4740d-f2d1-4e88-a0fc-383c00e37255  kube-public-test-claim-pvc-ad304939-e75d-414f-81b5-7586ef17db6c\narchived-kube-public-test-claim-pvc-593f241f-a75f-459a-af18-a672e5090921  kube-system-test1-claim-pvc-f84dc09c-b41e-4e67-a239-b14f8d342efc\narchived-kube-public-test-claim-pvc-b08b209d-c448-4ce4-ab5c-1bf37cc568e6  pv001\ndefault-test-claim-pvc-4f18ed06-27cd-465b-ac87-b2e0e9565428               pv002\n\n# 可以看到已经生成 SUCCESS 该文件，并且可知通过 NFS Provisioner 创建的目录命名方式为 “namespace名称-pvc名称-pv名称”，pv 名称是随机字符串，所以每次只要不删除 PVC，那么 Kubernetes 中的与存储绑定将不会丢失，要是删除 PVC 也就意味着删除了绑定的文件夹，下次就算重新创建相同名称的 PVC，生成的文件夹名称也不会一致，因为 PV 名是随机生成的字符串，而文件夹命名又跟 PV 有关,所以删除 PVC 需谨慎。\n```\n\n\n参考文档：\n\nhttps://blog.csdn.net/qq_25611295/article/details/86065053  k8s pv与pvc持久化存储（静态与动态）\n\nhttps://blog.csdn.net/networken/article/details/86697018 kubernetes部署NFS持久存储\n\nhttps://www.jianshu.com/p/5e565a8049fc  kubernetes部署NFS持久存储（静态和动态）\n"
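Since the provisioned directory name is just `${namespace}-${pvcName}-${pvName}`, the `archiveOnDelete` behaviour configured in the StorageClass above is easy to verify before relying on it. A minimal sketch, assuming the `test-claim` PVC from section four and the `/data/nfs` export used throughout (names will differ in your environment):

```bash
# Record the bound PV name before deleting the claim
PV_NAME=$(kubectl get pvc test-claim -n kube-public -o jsonpath='{.spec.volumeName}')

# Deleting the PVC triggers the provisioner's delete path
kubectl delete pvc test-claim -n kube-public

# On the NFS server: with archiveOnDelete "true" the data directory is kept,
# renamed with an "archived-" prefix, instead of being removed
ls /data/nfs/ | grep "archived-kube-public-test-claim-${PV_NAME}"
```

This matches the `archived-kube-public-test-claim-pvc-...` entries visible in the directory listing above.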
  },
  {
    "path": "components/external-storage/4、Kubernetes之MySQL持久存储和故障转移.md",
    "content": "Table of Contents\n=================\n\n   * [一、MySQL持久化演练](#一mysql持久化演练)\n      * [1、数据库提供持久化存储，主要分为下面几个步骤：](#1数据库提供持久化存储主要分为下面几个步骤)\n   * [二、静态PV PVC](#二静态pv-pvc)\n      * [1、创建 PV](#1创建-pv)\n      * [2、创建PVC](#2创建pvc)\n   * [三、部署 MySQL](#三部署-mysql)\n      * [1、MySQL 的配置文件mysql.yaml如下：](#1mysql-的配置文件mysqlyaml如下)\n      * [2、更新 MySQL 数据](#2更新-mysql-数据)\n      * [3、故障转移](#3故障转移)\n   * [四、全新命名空间使用](#四全新命名空间使用)\n   \n# 一、MySQL持久化演练\n\n## 1、数据库提供持久化存储，主要分为下面几个步骤：\n\n    1、创建 PV 和 PVC\n\n    2、部署 MySQL\n\n    3、向 MySQL 添加数据\n\n    4、模拟节点宕机故障，Kubernetes 将 MySQL 自动迁移到其他节点\n\n    5、验证数据一致性\n   \n\n# 二、静态PV PVC\n\n```bash\nPV就好比是一个仓库，我们需要先购买一个仓库，即定义一个PV存储服务，例如CEPH,NFS,Local Hostpath等等。\n\nPVC就好比租户，pv和pvc是一对一绑定的，挂载到POD中，一个pvc可以被多个pod挂载。\n```\n\n## 1、创建 PV\n\n```bash\n# 清理pv资源\nkubectl delete -f mysql-static-pv.yaml\n\n# 编写pv yaml资源文件\ncat > mysql-static-pv.yaml <<\\EOF\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: mysql-static-pv\nspec:\n  capacity:\n    storage: 80Gi\n\n  accessModes:\n    - ReadWriteOnce\n  #ReadWriteOnce - 卷可以由单个节点以读写方式挂载\n  #ReadOnlyMany  - 卷可以由许多节点以只读方式挂载\n  #ReadWriteMany - 卷可以由许多节点以读写方式挂载\n\n  persistentVolumeReclaimPolicy: Retain\n  #Retain，不清理, 保留 Volume（需要手动清理）\n  #Recycle，删除数据，即 rm -rf /thevolume/*（只有 NFS 和 HostPath 支持）\n  #Delete，删除存储资源，比如删除 AWS EBS 卷（只有 AWS EBS, GCE PD, Azure Disk 和 Cinder 支持）\n\n  nfs:\n    path: /data/nfs/mysql/\n    server: 10.198.1.155\n  mountOptions:\n    - vers=4\n    - minorversion=0\n    - noresvport\nEOF\n\n# 部署pv到集群中\nkubectl apply -f mysql-static-pv.yaml\n\n# 查看pv\n$ kubectl get pv\nNAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                           STORAGECLASS          REASON   AGE\nmysql-static-pv                            80Gi       RWO            Retain           Available                                                                                  4m20s\n```\n\n## 2、创建PVC\n\n```bash\n# 清理pvc资源\nkubectl delete -f mysql-pvc.yaml \n\n# 编写pvc yaml资源文件\ncat > mysql-pvc.yaml <<\\EOF\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-static-pvc\nspec:\n  accessModes:\n  - ReadWriteOnce\n  resources:\n    requests:\n      storage: 80Gi\nEOF\n\n# 创建pvc资源\nkubectl apply -f mysql-pvc.yaml\n\n# 查看pvc\n$ kubectl get pvc\nNAME               STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE\nmysql-static-pvc   Bound         pvc-c55f8695-2a0b-4127-a60b-5c1aba8b9104   80Gi       RWO            nfs-storage    81s\n```\n\n# 三、部署 MySQL\n\n## 1、MySQL 的配置文件mysql.yaml如下：\n\n```bash\nkubectl delete -f mysql.yaml\n\ncat >mysql.yaml<<\\EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql\nspec:\n  ports:\n  - port: 3306\n  selector:\n    app: mysql\n---\napiVersion: extensions/v1beta1\nkind: Deployment\nmetadata:\n  name: mysql\nspec:\n  selector:\n    matchLabels:\n      app: mysql\n  template:\n    metadata:\n      labels:\n        app: mysql\n    spec:\n      containers:\n      - name: mysql\n        image: mysql:5.6\n        env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: password\n        ports:\n        - name: mysql\n          containerPort: 3306\n        volumeMounts:\n        - name: mysql-persistent-storage\n          mountPath: /var/lib/mysql\n      volumes:\n      - name: mysql-persistent-storage\n        persistentVolumeClaim:\n          claimName: mysql-static-pvc\nEOF\n\nkubectl apply -f mysql.yaml\n\n# PVC mysql-static-pvc Bound 的 PV 
mysql-static-pv 将被 mount 到 MySQL 的数据目录 /var/lib/mysql。\n```\n\n## 2、更新 MySQL 数据\n\nMySQL 被部署到 k8s-node02，下面通过客户端访问 Service mysql：\n\n```bash\n$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword\nIf you don't see a command prompt, try pressing enter.\nmysql>\n\n我们在mysql库中创建一个表myid，然后在表里新增几条数据。\n\nmysql> use mysql\nDatabase changed\n\nmysql> drop table myid;\nQuery OK, 0 rows affected (0.12 sec)\n\nmysql> create table myid(id int(4));\nQuery OK, 0 rows affected (0.23 sec)\n\nmysql> insert myid values(888);\nQuery OK, 1 row affected (0.03 sec)\n\nmysql> select * from myid;\n+------+\n| id   |\n+------+\n|  888 |\n+------+\n1 row in set (0.00 sec)\n```\n\n## 3、故障转移\n\n我们现在把 node02 机器关机，模拟节点宕机故障。\n\n\n```bash\n1、一段时间之后，Kubernetes 将 MySQL 迁移到 k8s-node01\n\n$ kubectl get pod -o wide\nNAME                     READY   STATUS        RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES\nmysql-7686899cf9-8z6tc   1/1     Running       0          21s   10.244.1.19   node01   <none>           <none>\nmysql-7686899cf9-d4m42   1/1     Terminating   0          23m   10.244.2.17   node02   <none>           <none>\n\n2、验证数据的一致性\n\n$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword\nIf you don't see a command prompt, try pressing enter.\nmysql> use mysql\nReading table information for completion of table and column names\nYou can turn off this feature to get a quicker startup with -A\n\nDatabase changed\nmysql> select * from myid;\n+------+\n| id   |\n+------+\n|  888 |\n+------+\n1 row in set (0.00 sec)\n\n3、MySQL 服务恢复，数据也完好无损，我们可以可以在存储节点上面查看一下生成的数据库文件。\n\n[root@nfs_server mysql-pv]# ll\n-rw-rw---- 1 systemd-bus-proxy ssh_keys       56 12月 14 09:53 auto.cnf\n-rw-rw---- 1 systemd-bus-proxy ssh_keys 12582912 12月 14 10:15 ibdata1\n-rw-rw---- 1 systemd-bus-proxy ssh_keys 50331648 12月 14 10:15 ib_logfile0\n-rw-rw---- 1 systemd-bus-proxy ssh_keys 50331648 12月 14 09:53 ib_logfile1\ndrwx------ 2 systemd-bus-proxy ssh_keys     4096 12月 14 10:05 mysql\ndrwx------ 2 systemd-bus-proxy ssh_keys     4096 12月 14 09:53 performance_schema\n```\n\n# 四、全新命名空间使用\n\npv是全局的，pvc可以指定namespace\n\n```bash\nkubectl delete ns test-ns\n\nkubectl create ns test-ns\n\nkubectl apply -f mysql-pvc.yaml -n test-ns\n\nkubectl apply -f mysql.yaml -n test-ns\n\nkubectl get pods -n test-ns -o wide\n\nkubectl -n test-ns logs -f $(kubectl get pods -n test-ns|grep mysql|awk '{print $1}')\n\nkubectl run -n test-ns -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword\n```\n\n参考文档：\n\nhttps://blog.51cto.com/wzlinux/2330295   Kubernetes 之 MySQL 持久存储和故障转移(十一)\n\nhttps://qingmu.io/2019/08/11/Run-mysql-on-kubernetes/ 从部署mysql聊一聊有状态服务和PV及PVC\n"
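If shutting a node down is inconvenient, the same failover can be exercised from kubectl alone by evicting the pod instead of powering off the machine. A minimal sketch, assuming the node is named k8s-node02 as in the walkthrough:

```bash
# Mark the node unschedulable, then evict its pods (simulated node failure)
kubectl cordon k8s-node02
kubectl drain k8s-node02 --ignore-daemonsets --delete-local-data

# Watch the Deployment reschedule MySQL onto another node
kubectl get pod -l app=mysql -o wide -w

# Restore the node once the test is done
kubectl uncordon k8s-node02
```

Because the data lives on the NFS-backed PV rather than on the node, the rescheduled pod sees the same /var/lib/mysql content, which is exactly what the `select * from myid;` check above demonstrates.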
  },
  {
    "path": "components/external-storage/5、Kubernetes之Nginx动静态PV持久存储.md",
    "content": "Table of Contents\n=================\n\n   * [一、nginx使用nfs静态PV](#一nginx使用nfs静态pv)\n      * [1、静态nfs-static-nginx-rc.yaml](#1静态nfs-static-nginx-rcyaml)\n      * [2、静态nfs-static-nginx-deployment.yaml](#2静态nfs-static-nginx-deploymentyaml)\n      * [3、nginx多目录挂载](#3nginx多目录挂载)\n   * [二、nginx使用nfs动态PV](#二nginx使用nfs动态pv)\n      * [1、动态nfs-dynamic-nginx.yaml](#1动态nfs-dynamic-nginxyaml)\n      \n# 一、nginx使用nfs静态PV\n\n## 1、静态nfs-static-nginx-rc.yaml\n\n```bash\n##清理资源\nkubectl delete -f nfs-static-nginx-rc.yaml -n test\n\ncat >nfs-static-nginx-rc.yaml<<\\EOF\n##创建namespace\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n   name: test\n   labels:\n     name: test\n##创建nfs-pv\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv\n  labels:\n    pv: nfs-pv\nspec:\n  capacity:\n    storage: 10Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs  # 注意这里使用nfs的storageClassName，如果没改k8s的默认storageClassName的话，这里可以省略\n  nfs:\n    path: /data/nfs/nginx/\n    server: 10.198.1.155\n##创建nfs-pvc\n---\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: nfs-pvc\n  namespace: test\n  labels:\n    pvc: nfs-pvc\nspec:\n  accessModes:\n    - ReadWriteMany\n  resources:\n    requests:\n      storage: 10Gi\n  storageClassName: nfs\n  selector:\n    matchLabels:\n      pv: nfs-pv\n##部署应用nginx\n---\napiVersion: v1\nkind: ReplicationController\nmetadata:\n  name: nginx-test\n  namespace: test\n  labels:\n    name: nginx-test\nspec:\n  replicas: 2\n  selector:\n    name: nginx-test\n  template:\n    metadata:\n      labels:\n       name: nginx-test\n    spec:\n      containers:\n      - name: nginx-test\n        image: docker.io/nginx\n        volumeMounts:\n        - mountPath: /usr/share/nginx/html\n          name: nginx-data\n        ports:\n        - containerPort: 80\n      volumes:\n      - name: nginx-data\n        persistentVolumeClaim:\n          claimName: nfs-pvc\n##创建service\n---\napiVersion: v1\nkind: Service\nmetadata:\n  namespace: test\n  name: nginx-test\n  labels:\n    name: nginx-test\nspec:\n  type: NodePort\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n    name: http\n    nodePort: 30080\n  selector:\n    name: nginx-test\nEOF\n\n##创建资源\nkubectl apply -f nfs-static-nginx-rc.yaml -n test\n\n##查看pv资源\nkubectl get pv -n test --show-labels\n\n##查看pvc资源\nkubectl get pvc -n test --show-labels\n\n##查看pod\n$ kubectl get pods -n test\nNAME               READY   STATUS    RESTARTS   AGE\nnginx-test-r4n2j   1/1     Running   0          54s\nnginx-test-zstf5   1/1     Running   0          54s\n\n#可以看到，nginx应用已经部署成功。\n#nginx应用的数据目录是使用的nfs共享存储，我们在nfs共享的目录里加入index.html文件，然后再访问nginx-service暴露的端口\n#切换到到nfs-server服务器上\n\necho \"Test NFS Share discovery with nfs-static-nginx-rc\" > /data/nfs/nginx/index.html\n\n#在浏览器上访问kubernetes主节点的 http://master:30080，就能访问到这个页面内容了\n```\n\n## 2、静态nfs-static-nginx-deployment.yaml\n\n```bash\n##清理资源\nkubectl delete -f nfs-static-nginx-deployment.yaml -n test\n\ncat >nfs-static-nginx-deployment.yaml<<\\EOF\n##创建namespace\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n   name: test\n   labels:\n     name: test\n##创建nfs-pv\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv\n  labels:\n    pv: nfs-pv\nspec:\n  capacity:\n    storage: 10Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs  # 注意这里使用nfs的storageClassName，如果没改k8s的默认storageClassName的话，这里可以省略\n  nfs:\n    path: /data/nfs/nginx/\n    server: 
10.198.1.155\n##创建nfs-pvc\n---\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: nfs-pvc\n  namespace: test\n  labels:\n    pvc: nfs-pvc\nspec:\n  accessModes:\n    - ReadWriteMany\n  resources:\n    requests:\n      storage: 10Gi\n  storageClassName: nfs\n  selector:\n    matchLabels:\n      pv: nfs-pv\n##部署应用nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  namespace: test\n  labels:\n    name: nginx-test\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      name: nginx-test\n  template:\n    metadata:\n      labels:\n       name: nginx-test\n    spec:\n      containers:\n      - name: nginx-test\n        image: docker.io/nginx\n        volumeMounts:\n        - mountPath: /usr/share/nginx/html\n          name: nginx-data\n        ports:\n        - containerPort: 80\n      volumes:\n      - name: nginx-data\n        persistentVolumeClaim:\n          claimName: nfs-pvc\n##创建service\n---\napiVersion: v1\nkind: Service\nmetadata:\n  namespace: test\n  name: nginx-test\n  labels:\n    name: nginx-test\nspec:\n  type: NodePort\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n    name: http\n    nodePort: 30080\n  selector:\n    name: nginx-test\nEOF\n\n##创建资源\nkubectl apply -f nfs-static-nginx-deployment.yaml -n test\n\n##查看pv资源\nkubectl get pv -n test --show-labels\n\n##查看pvc资源\nkubectl get pvc -n test --show-labels\n\n##查看pod\n$ kubectl get pods -n test\nNAME                                READY   STATUS    RESTARTS   AGE\nnginx-deployment-64d6f78cdf-8bw8t   1/1     Running   0          55s\nnginx-deployment-64d6f78cdf-n5n4q   1/1     Running   0          55s\n\n#可以看到，nginx应用已经部署成功。\n#nginx应用的数据目录是使用的nfs共享存储，我们在nfs共享的目录里加入index.html文件，然后再访问nginx-service暴露的端口\n#切换到到nfs-server服务器上\n\necho \"Test NFS Share discovery with nfs-static-nginx-deployment\" > /data/nfs/nginx/index.html\n\n#在浏览器上访问kubernetes主节点的 http://master:30080，就能访问到这个页面内容了\n```\n\n## 3、nginx多目录挂载\n\n```\n1、PV和PVC是一一对应关系，当有PV被某个PVC所占用时，会显示banding，其它PVC不能再使用绑定过的PV。\n\n2、PVC一旦绑定PV，就相当于是一个存储卷，此时PVC可以被多个Pod所使用。（PVC支不支持被多个Pod访问，取决于访问模型accessMode的定义）。\n\n3、PVC若没有找到合适的PV时，则会处于pending状态。\n\n4、PV是属于集群级别的，不能定义在名称空间中。\n\n5、PVC时属于名称空间级别的。\n```\n\n```bash\n##清理资源\nkubectl delete -f nfs-static-nginx-dp-many.yaml -n test\n\ncat >nfs-static-nginx-dp-many.yaml<<\\EOF\n##创建namespace\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n   name: test\n   labels:\n     name: test\n##创建nginx-data-pv\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nginx-data-pv\n  labels:\n    pv: nginx-data-pv\nspec:\n  capacity:\n    storage: 50Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs  # 注意这里使用nfs的storageClassName，如果没改k8s的默认storageClassName的话，这里可以省略\n  nfs:\n    path: /data/nfs/nginx/\n    server: 10.198.1.155\n##创建nginx-etc-pv\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nginx-etc-pv\n  labels:\n    pv: nginx-etc-pv\nspec:\n  capacity:\n    storage: 50Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs  # 注意这里使用nfs的storageClassName，如果没改k8s的默认storageClassName的话，这里可以省略\n  nfs:\n    path: /data/nfs/nginx/\n    server: 10.198.1.155\n##创建pvc名字为nfs-nginx-data,存放数据\n---\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: nfs-nginx-data\n  namespace: test\n  labels:\n    pvc: nfs-nginx-data\nspec:\n  accessModes:\n    - ReadWriteMany\n  resources:\n    requests:\n      storage: 50Gi\n  storageClassName: nfs\n  selector:\n    
matchLabels:\n      pv: nginx-data-pv\n##创建pvc名字为nfs-nginx-etc,存放配置文件\n---\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: nfs-nginx-etc\n  namespace: test\n  labels:\n    pvc: nfs-nginx-etc\nspec:\n  accessModes:\n    - ReadWriteMany\n  resources:\n    requests:\n      storage: 50Gi\n  storageClassName: nfs\n  selector:\n    matchLabels:\n      pv: nginx-etc-pv\n##部署应用nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  namespace: test\n  labels:\n    name: nginx-test\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      name: nginx-test\n  template:\n    metadata:\n      labels:\n       name: nginx-test\n    spec:\n      containers:\n      - name: nginx-test\n        image: docker.io/nginx\n        volumeMounts:\n        - mountPath: /usr/share/nginx/html\n          name: nginx-data\n        # - mountPath: /etc/nginx   #--这里需要注意，如果是这么挂载，那么需要事先现在/data/nfs/nginx/目录下把nginx的完整配置提前拷贝好\n        #   name: nginx-etc\n        ports:\n        - containerPort: 80\n      volumes:\n      - name: nginx-data\n        persistentVolumeClaim:\n          claimName: nfs-nginx-data\n      # - name: nginx-etc\n      #   persistentVolumeClaim:\n      #     claimName: nfs-nginx-etc\n##创建service\n---\napiVersion: v1\nkind: Service\nmetadata:\n  namespace: test\n  name: nginx-test\n  labels:\n    name: nginx-test\nspec:\n  type: NodePort\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n    name: http\n    nodePort: 30080\n  selector:\n    name: nginx-test\nEOF\n\n##创建资源\nkubectl apply -f nfs-static-nginx-dp-many.yaml -n test\n\n##查看pv资源\nkubectl get pv -n test --show-labels\n\n##查看pvc资源\nkubectl get pvc -n test --show-labels\n\n##查看pod\n$ kubectl get pods -n test\nNAME                                READY   STATUS    RESTARTS   AGE\nnginx-deployment-64d6f78cdf-8bw8t   1/1     Running   0          55s\nnginx-deployment-64d6f78cdf-n5n4q   1/1     Running   0          55s\n\n##进入容器\nkubectl exec -it nginx-deployment-f687cdf47-xncj8 -n test /bin/bash\n\n#可以看到，nginx应用已经部署成功。\n#nginx应用的数据目录是使用的nfs共享存储，我们在nfs共享的目录里加入index.html文件，然后再访问nginx-service暴露的端口\n#切换到到nfs-server服务器上\n\necho \"Test NFS Share discovery with nfs-static-nginx-dp-many\" > /data/nfs/nginx/index.html\n\n#在浏览器上访问kubernetes主节点的 http://master:30080，就能访问到这个页面内容了\n```\n\n## 4、参数namespace\n\n```bash\n##清理资源\nexport NAMESPACE=\"mos-namespace\"\n\nkubectl delete -f nfs-static-nginx-dp-many.yaml -n ${NAMESPACE}\n\ncat >nfs-static-nginx-dp-many.yaml<<-EOF\n##创建namespace\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n   name: ${NAMESPACE}\n   labels:\n     name: ${NAMESPACE}\n##创建nginx-data-pv\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nginx-data-pv\n  labels:\n    pv: nginx-data-pv\nspec:\n  capacity:\n    storage: 50Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs  # 注意这里使用nfs的storageClassName，如果没改k8s的默认storageClassName的话，这里可以省略\n  nfs:\n    path: /data/nfs/nginx/\n    server: 10.198.1.155\n##创建nginx-log-pv\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nginx-log-pv\n  labels:\n    pv: nginx-log-pv\nspec:\n  capacity:\n    storage: 50Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs  # 注意这里使用nfs的storageClassName，如果没改k8s的默认storageClassName的话，这里可以省略\n  nfs:\n    path: /data/nfs/nginx/\n    server: 10.198.1.155\n##创建pvc名字为nfs-nginx-data,存放数据\n---\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: nfs-nginx-data\n  labels:\n  
  pvc: nfs-nginx-data\nspec:\n  accessModes:\n    - ReadWriteMany\n  resources:\n    requests:\n      storage: 50Gi\n  storageClassName: nfs\n  selector:\n    matchLabels:\n      pv: nginx-data-pv\n##创建pvc名字为nfs-nginx-log,存放日志文件\n---\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: nfs-nginx-log\n  labels:\n    pvc: nfs-nginx-log\nspec:\n  accessModes:\n    - ReadWriteMany\n  resources:\n    requests:\n      storage: 50Gi\n  storageClassName: nfs\n  selector:\n    matchLabels:\n      pv: nginx-log-pv\n##部署应用nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  labels:\n    name: nginx-test\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      name: nginx-test\n  template:\n    metadata:\n      labels:\n       name: nginx-test\n    spec:\n      containers:\n      - name: nginx-test\n        image: docker.io/nginx\n        volumeMounts:\n        - mountPath: /usr/share/nginx/html\n          name: nginx-data\n        - mountPath: /var/log/nginx\n          name: nginx-log\n        ports:\n        - containerPort: 80\n      volumes:\n      - name: nginx-data\n        persistentVolumeClaim:\n          claimName: nfs-nginx-data\n      - name: nginx-log\n        persistentVolumeClaim:\n          claimName: nfs-nginx-log\n##创建service\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-test\n  labels:\n    name: nginx-test\nspec:\n  type: NodePort\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n    name: http\n    nodePort: 30180\n  selector:\n    name: nginx-test\nEOF\n\n##创建资源\nkubectl apply -f nfs-static-nginx-dp-many.yaml -n ${NAMESPACE}\n```\n\n\n# 二、nginx使用nfs动态PV\n\n`https://github.com/Lancger/opsfull/blob/master/components/external-storage/3%E3%80%81%E5%8A%A8%E6%80%81%E7%94%B3%E8%AF%B7PV%E5%8D%B7.md`\n\n## 1、动态nfs-dynamic-nginx.yaml\n\n通过参数控制在哪个命名空间创建\n\n```bash\n##清理命名空间\nkubectl delete ns k8s-public\n\n##创建命名空间\nkubectl create ns k8s-public\n\n##清理资源\nkubectl delete -f nfs-dynamic-nginx-deployment.yaml -n k8s-public\n\ncat >nfs-dynamic-nginx-deployment.yaml<<\\EOF\n##动态申请nfs-dynamic-pvc\n---\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: nfs-dynamic-claim\nspec:\n  storageClassName: nfs-storage #--需要与上面创建的storageclass的名称一致\n  accessModes:\n    - ReadWriteMany\n  resources:\n    requests:\n      storage: 90Gi\n##部署应用nginx\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  labels:\n    name: nginx-test\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      name: nginx-test\n  template:\n    metadata:\n      labels:\n       name: nginx-test\n    spec:\n      containers:\n      - name: nginx-test\n        image: docker.io/nginx\n        volumeMounts:\n        - mountPath: /usr/share/nginx/html\n          name: nginx-data\n        ports:\n        - containerPort: 80\n      volumes:\n      - name: nginx-data\n        persistentVolumeClaim:\n          claimName: nfs-dynamic-claim\n##创建service\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-test\n  labels:\n    name: nginx-test\nspec:\n  type: NodePort\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n    name: http\n    nodePort: 30090\n  selector:\n    name: nginx-test\nEOF\n\n##创建资源\nkubectl apply -f nfs-dynamic-nginx-deployment.yaml -n k8s-public\n\n##查看pv资源\nkubectl get pv -n k8s-public --show-labels\n\n##查看pvc资源\nkubectl get pvc -n k8s-public --show-labels\n\n##查看pod\n$ kubectl get pods -n k8s-public\nNAME                                READY   STATUS    RESTARTS   
AGE\nnginx-deployment-544f569478-5t8wm   1/1     Running   0          40s\nnginx-deployment-544f569478-8gks5   1/1     Running   0          40s\nnginx-deployment-544f569478-pw96x   1/1     Running   0          40s\n\n#可以看到，nginx应用已经部署成功。\n#nginx应用的数据目录是使用的nfs共享存储，我们在nfs共享的目录里加入index.html文件，然后再访问nginx-service暴露的端口\n#切换到到nfs-server服务器上\n\n#注意动态的在这个目录，创建的目录命名方式为 “namespace名称-pvc名称-pv名称”\n/data/nfs/kube-public-test-claim-pvc-ad304939-e75d-414f-81b5-7586ef17db6c\n\necho \"Test NFS Share discovery with nfs-dynamic-nginx-deployment\" > /data/nfs/kube-public-test-claim-pvc-ad304939-e75d-414f-81b5-7586ef17db6c/index.html\n\n#在浏览器上访问kubernetes主节点的 http://master:30090，就能访问到这个页面内容了\n```\n\n![](https://github.com/Lancger/opsfull/blob/master/images/dynamic-pv.png)\n\n\n参考文档：\n\nhttps://kubernetes.io/zh/docs/tasks/run-application/run-stateless-application-deployment/  \n\nhttps://blog.51cto.com/ylw6006/2071845 在kubernetes集群中运行nginx\n"
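Instead of hard-coding the generated directory name on the NFS server, it can be derived from the claim itself, since the directory is named `${namespace}-${pvcName}-${pvName}`. A minimal sketch, assuming the `nfs-dynamic-claim` PVC in `k8s-public` and the `/data/nfs` export used above (`<node-ip>` is a placeholder):

```bash
# Resolve the PV bound to the dynamic claim and build the directory name
PV_NAME=$(kubectl get pvc nfs-dynamic-claim -n k8s-public -o jsonpath='{.spec.volumeName}')
NFS_SUBDIR="k8s-public-nfs-dynamic-claim-${PV_NAME}"

# On the NFS server, publish a test page into that directory
echo "Test NFS Share discovery" > /data/nfs/${NFS_SUBDIR}/index.html

# Verify through the NodePort defined in the Service above
curl http://<node-ip>:30090/
```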
  },
  {
    "path": "components/external-storage/README.md",
    "content": "PersistenVolume（PV）：对存储资源创建和使用的抽象，使得存储作为集群中的资源管理\n\nPV分为静态和动态，动态能够自动创建PV\n\nPersistentVolumeClaim（PVC）：让用户不需要关心具体的Volume实现细节\n\n容器与PV、PVC之间的关系，可以如下图所示：\n\n  ![PV](https://github.com/Lancger/opsfull/blob/master/images/pv01.png)\n\n总的来说，PV是提供者，PVC是消费者，消费的过程就是绑定\n\n# 问题一\n\npv挂载正常，pvc一直处于Pending状态\n\n```bash \n#在test的命名空间创建pvc\n$ kubectl get pvc -n test\nNAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE\nnfs-pvc   Pending---这里发现一直处于Pending的状态                                      nfs-storage    10s\n\n#查看日志\n$ kubectl describe pvc nfs-pvc -n test\nfailed to provision volume with StorageClass \"nfs-storage\": claim Selector is not supported\n#从日志中发现，问题出在标签匹配的地方\n```\n\n参考资料：\n\nhttps://blog.csdn.net/qq_25611295/article/details/86065053  k8s pv与pvc持久化存储（静态与动态）\n\nhttps://www.jianshu.com/p/5e565a8049fc  kubernetes部署NFS持久存储（静态和动态）\n"
  },
  {
    "path": "components/heapster/README.md",
    "content": "# 一、问题现象\nheapster:  已经被k8s给舍弃掉了\n```bash\nheapster logs这个报错啥情况 \nE0918 16:56:05.022867       1 manager.go:101] Error in scraping containers from kubelet_summary:10.10.188.242:10255: Get http://10.10.188.242:10255/stats/summary/: dial tcp 10.10.188.242:10255: getsockopt: connection refused\n```\n# 排查思路\n\n\n```\n1、排查下kubelet，10255是它暴露的端口\n\nservice kubelet status  #看状态是正常的\n\n#在10.10.188.242上执行\n[root@localhost ~]# netstat -lnpt | grep 10255\ntcp        0      0 10.10.188.240:10255     0.0.0.0:*               LISTEN      9243/kubelet\n\n看了下/var/log/pods/kube-system_heapster-5f848f54bc-rtbv4_abf53b7c-491f-472a-9e8b-815066a6ae3d/heapster下日志  所有的物理节点都是10255 拒绝连接\n\n\n2、浏览器访问查看数据\n\n10.10.188.242 是你节点的IP吧，正常的话浏览器访问http://IP:10255/stats/summary是有值的，你看下，如果没有那就是kubelet的配置出问题\n\n```\n![heapster获取数据异常1](https://github.com/Lancger/opsfull/blob/master/images/heapster-01.png)\n\n![heapster获取数据异常2](https://github.com/Lancger/opsfull/blob/master/images/heapster-02.png)\n"
  },
  {
    "path": "components/ingress/0.通俗理解Kubernetes中Service、Ingress与Ingress Controller的作用与关系.md",
    "content": "# 一、通俗的讲:\n\n- 1、Service 是后端真实服务的抽象，一个 Service 可以代表多个相同的后端服务\n\n- 2、Ingress 是反向代理规则，用来规定 HTTP/S 请求应该被转发到哪个 Service 上，比如根据请求中不同的 Host 和 url 路径让请求落到不同的 Service 上\n\n- 3、Ingress Controller 就是一个反向代理程序，它负责解析 Ingress 的反向代理规则，如果 Ingress 有增删改的变动，所有的 Ingress Controller 都会及时更新自己相应的转发规则，当 Ingress Controller 收到请求后就会根据这些规则将请求转发到对应的 Service\n\n# 二、数据流向图\n\nKubernetes 并没有自带 Ingress Controller，它只是一种标准，具体实现有多种，需要自己单独安装，常用的是 Nginx Ingress Controller 和 Traefik Ingress Controller。 所以 Ingress 是一种转发规则的抽象，Ingress Controller 的实现需要根据这些 Ingress 规则来将请求转发到对应的 Service，我画了个图方便大家理解：\n\n  ![Ingress Controller数据流向图](https://github.com/Lancger/opsfull/blob/master/images/Ingress%20Controller01.png)\n\n从图中可以看出，Ingress Controller 收到请求，匹配 Ingress 转发规则，匹配到了就转发到后端 Service，而 Service 可能代表的后端 Pod 有多个，选出一个转发到那个 Pod，最终由那个 Pod 处理请求。\n\n# 三、Ingress Controller对外暴露方式\n\n有同学可能会问，既然 Ingress Controller 要接受外面的请求，而 Ingress Controller 是部署在集群中的，怎么让 Ingress Controller 本身能够被外面访问到呢，有几种方式：\n\n- 1、Ingress Controller 用 Deployment 方式部署，给它添加一个 Service，类型为 LoadBalancer，这样会自动生成一个 IP 地址，通过这个 IP 就能访问到了，并且一般这个 IP 是高可用的（前提是集群支持 LoadBalancer，通常云服务提供商才支持，自建集群一般没有）\n\n- 2、使用集群内部的某个或某些节点作为边缘节点，给 node 添加 label 来标识，Ingress Controller 用 DaemonSet 方式部署，使用 nodeSelector 绑定到边缘节点，保证每个边缘节点启动一个 Ingress Controller 实例，用 hostPort 直接在这些边缘节点宿主机暴露端口，然后我们可以访问边缘节点中 Ingress Controller 暴露的端口，这样外部就可以访问到 Ingress Controller 了\n\n- 3、Ingress Controller 用 Deployment 方式部署，给它添加一个 Service，类型为 NodePort，部署完成后查看会给出一个端口，通过 kubectl get svc 我们可以查看到这个端口，这个端口在集群的每个节点都可以访问，通过访问集群节点的这个端口就可以访问 Ingress Controller 了。但是集群节点这么多，而且端口又不是 80和443，太不爽了，一般我们会在前面自己搭个负载均衡器，比如用 Nginx，将请求转发到集群各个节点的那个端口上，这样我们访问 Nginx 就相当于访问到 Ingress Controller 了\n\n一般比较推荐的是前面两种方式。\n\n参考资料：\n\nhttps://cloud.tencent.com/developer/article/1326535  通俗理解Kubernetes中Service、Ingress与Ingress Controller的作用与关系\n"
  },
  {
    "path": "components/ingress/1.kubernetes部署Ingress-nginx单点和高可用.md",
    "content": "# 一、Ingress-nginx简介\n\n&#8195;Pod的IP以及service IP只能在集群内访问，如果想在集群外访问kubernetes提供的服务，可以使用nodeport、proxy、loadbalacer以及ingress等方式，由于service的IP集群外不能访问，就是使用ingress方式再代理一次，即ingress代理service，service代理pod.\nIngress基本原理图如下：\n\n  ![Ingress-nginx](https://github.com/Lancger/opsfull/blob/master/images/Ingress-nginx.png)\n \n# 二、部署nginx-ingress-controller\n\n```bash\n# github地址\nhttps://github.com/kubernetes/ingress-nginx\nhttps://kubernetes.github.io/ingress-nginx/\n\n# 1、下载nginx-ingress-controller配置文件\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml\n\n# 2、service-nodeport.yaml为ingress通过nodeport对外提供服务，注意默认nodeport暴露端口为随机，可以编辑该文件自定义端口\nUsing NodePort:\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml\n\n# 3、查看ingress-nginx组件状态\nroot># kubectl get pod -n ingress-nginx\nNAME                                        READY   STATUS    RESTARTS   AGE\nnginx-ingress-controller-568867bf56-mbvm2   1/1     Running   0          4m46s\n\n查看创建的ingress service暴露的端口：\nroot># kubectl get svc -n ingress-nginx \nNAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE\ningress-nginx   NodePort   10.97.243.123   <none>        80:30725/TCP,443:32314/TCP   5m46s\n```\n\n# 二、创建ingress-nginx后端服务\n\n1.创建一个Service及后端Deployment(以nginx为例)\n\n```\ncat > deploy-demon.yaml<<\\EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: myapp\n  namespace: default\nspec:\n  selector:\n    app: myapp\n    release: canary\n  ports:\n  - name: http\n    port: 80\n    targetPort: 80\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: myapp-deploy\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp\n      release: canary\n  template:\n    metadata:\n      labels:\n        app: myapp\n        release: canary\n    spec:\n      containers:\n      - name: myapp\n        image: ikubernetes/myapp:v2\n        ports:\n        - name: httpd\n          containerPort: 80\nEOF\n          \nroot># kubectl apply -f deploy-demon.yaml\n\nroot># kubectl get pods\n\nroot># kubectl get svc myapp\nNAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\nmyapp   ClusterIP   10.106.30.175   <none>        80/TCP    59s\n\n# 通过ClusterIP方式内部测试访问Services\nroot># curl 10.106.30.175\nHello MyApp | Version: v2 | <a href=\"hostname.html\">Pod Name</a>\n```\n\n# 三、创建myapp的ingress规则\n\n```\ncat > ingress-myapp.yaml<<\\EOF\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: ingress-myapp\n  namespace: default\n  annotations:\n    kubernets.io/ingress.class: \"nginx\"\nspec:\n  rules:\n  - host: www.k8s-devops.com\n    http:\n      paths:\n      - path:\n        backend:\n          serviceName: myapp\n          servicePort: 80\nEOF\n          \nroot># kubectl apply -f ingress-myapp.yaml  \n\nroot># kubectl get ingress\nNAME            HOSTS                ADDRESS         PORTS   AGE\ningress-myapp   www.k8s-devops.com   10.97.243.123   80      5s\n\n# 通过Ingress方式内部测试访问域名\nroot># curl -x 10.97.243.123:80 http://www.k8s-devops.com\nHello MyApp | Version: v2 | <a href=\"hostname.html\">Pod Name</a>\n```\n\n# 四、查看ingress-default-backend的详细信息：\n\n```\nroot># kubectl exec -n ingress-nginx -it nginx-ingress-controller-568867bf56-mbvm2 -- /bin/sh\n\n$ cat nginx.conf\n\n        ## start server www.k8s-devops.com\n        server {\n                server_name www.k8s-devops.com ;\n\n                listen 80  ;\n                listen 
443  ssl http2 ;\n\n                set $proxy_upstream_name \"-\";\n\n                ssl_certificate_by_lua_block {\n                        certificate.call()\n                }\n\n                location / {\n\n                        set $namespace      \"default\";\n                        set $ingress_name   \"ingress-myapp\";\n                        set $service_name   \"myapp\";\n                        set $service_port   \"80\";\n                        set $location_path  \"/\";\n                        \n```\n\n# 五、测试域名\n```\n1、这是nginx-ingress-controller采用的deployment部署的多副本\nroot># kubectl get deployment -A\ningress-nginx          nginx-ingress-controller    6/6     6            6           65m (这里有6个副本)\n\nroot># kubectl get svc -n ingress-nginx \nNAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE\ningress-nginx   NodePort   10.97.243.123   <none>        80:30725/TCP,443:32314/TCP   69m\n\nroot># kubectl describe svc ingress-nginx -n ingress-nginx\nName:                     ingress-nginx\nNamespace:                ingress-nginx\nLabels:                   app.kubernetes.io/name=ingress-nginx\n                          app.kubernetes.io/part-of=ingress-nginx\nAnnotations:              kubectl.kubernetes.io/last-applied-configuration:\n                            {\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/name\":\"ingress-nginx\",\"app.kubernetes.io/par...\nSelector:                 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx\nType:                     NodePort\nIP:                       10.97.243.123\nPort:                     http  80/TCP\nTargetPort:               80/TCP\nNodePort:                 http  30725/TCP\nEndpoints:                10.244.154.195:80,10.244.154.196:80,10.244.44.197:80 + 3 more...   这里转到6个pod\nPort:                     https  443/TCP\nTargetPort:               443/TCP\nNodePort:                 https  32314/TCP\nEndpoints:                10.244.154.195:443,10.244.154.196:443,10.244.44.197:443 + 3 more...\nSession Affinity:         None\nExternal Traffic Policy:  Cluster\nEvents:                   <none>\n\nroot># kubectl get endpoints -n ingress-nginx\nNAME            ENDPOINTS                                                          AGE\ningress-nginx   10.244.154.195:80,10.244.154.196:80,10.244.44.197:80 + 9 more...   
68m\n\nIngress Controller 用 Deployment 方式部署，给它添加一个 Service，类型为 NodePort，部署完成后查看会给出一个端口，通过 kubectl get svc 我们可以查看到这个端口，这个端口在集群的每个节点都可以访问，通过访问集群节点的这个端口就可以访问 Ingress Controller 了。但是集群节点这么多，而且端口又不是 80和443，太不爽了，一般我们会在前面自己搭个负载均衡器，比如用 Nginx，将请求转发到集群各个节点的那个端口上，这样我们访问 Nginx 就相当于访问到 Ingress Controller 了。\n\n# 通过Nodeport方式测试（主机IP+端口）\ncurl 10.10.0.24:30725\ncurl 10.10.0.32:30725\ncurl 10.10.0.23:30725\ncurl 10.10.0.25:30725\ncurl 10.10.0.29:30725\ncurl 10.10.0.12:30725\n\n2、通过Ingress IP 绑定域名测试\n\nroot># kubectl get ingress -A\nNAMESPACE   NAME            HOSTS                ADDRESS         PORTS   AGE\ndefault     ingress-myapp   www.k8s-devops.com   10.97.243.123   80      45m\n\nroot># curl -x 10.97.243.123:80 http://www.k8s-devops.com\n```\n\n# 六、Ingress高可用\n\n&#8195;Ingress高可用，我们可以通过修改deployment的副本数来实现高可用，但是由于ingress承载着整个集群流量的接入，所以生产环境中，建议把ingress通过DaemonSet的方式部署集群中，而且该节点打上污点不允许业务pod进行调度，以避免业务应用与Ingress服务发生资源争抢。然后通过SLB把ingress节点主机添为后端服务器，进行流量转发。\n\n1、修改为DaemonSet方式部署\n\n```\nwget -N https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml -O ingress-nginx-mandatory.yaml\n\n1、类型的修改\nsed  -i 's/kind: Deployment/kind: DaemonSet/g' ingress-nginx-mandatory.yaml\nsed  -i 's/replicas:/#replicas:/g' ingress-nginx-mandatory.yaml\n\n2、镜像的修改(可忽略)\n#sed -i -e 's?quay.io?quay.azk8s.cn?g' -e 's?k8s.gcr.io?gcr.azk8s.cn/google-containers?g'  ingress-nginx-mandatory.yaml\n\n3、使pod共享宿主机网络,暴露所监听的端口以及让容器使用K8S的DNS\n# spec.template.spec 下面\n# serviceAccountName: nginx-ingress-serviceaccount 的前后,平级加上 hostNetwork: true 和 dnsPolicy: \"ClusterFirstWithHostNet\"\n\nsed -i '/serviceAccountName: nginx-ingress-serviceaccount/a\\      hostNetwork: true' ingress-nginx-mandatory.yaml\nsed -i '/serviceAccountName: nginx-ingress-serviceaccount/a\\      dnsPolicy: \"ClusterFirstWithHostNet\"' ingress-nginx-mandatory.yaml\n\n4、节点打标签和污点\n# 添加节点标签append to  serviceAccountName\n      nodeSelector:\n        node-ingress: \"true\"\n      tolerations:\n      - key: \"node-role.kubernetes.io/master\"\n        operator: \"Equal\"\n        value: \"\"\n        effect: \"NoSchedule\"\n        \nsed -i '/serviceAccountName: nginx-ingress-serviceaccount/a\\      nodeSelector:\\n        node-ingress: \"true\"' ingress-nginx-mandatory.yaml\n\n修改参数如下：\n　　kind: Deployment #修改为DaemonSet\n　　replicas: 1 #注销此行，DaemonSet不需要此参数\n　　hostNetwork: true #添加该字段让docker使用物理机网络，在物理机暴露服务端口（80），注意物理机80端口提前不能被占用\n　　dnsPolicy: ClusterFirstWithHostNet #使用hostNetwork后容器会使用物理机网络包括DNS，会无法解析内部service，使用此参数让容器使用K8S的DNS\n　　nodeSelector:node-ingress: \"true\" #添加节点标签\n　　tolerations: 添加对指定节点容忍度\n  \n注意一点，因为我们创建的ingress-controller采用的时hostnetwork模式，所以无需在创建ingress-svc服务来把端口映射到节点主机上。\n```\n\n&#8195;这里我在3台master节点部署(生产环境不要使用master节点，应该部署在独立的节点上)，因为我们采用DaemonSet的方式，所以我们需要对3个节点打标签以及容忍度。\n\n```\n## 查看标签\nroot># kubectl get nodes --show-labels\n\n## 给节点打标签\n[root@k8s-master-01]# kubectl label nodes k8s-master-01 node-ingress=\"true\"\n[root@k8s-master-01]# kubectl label nodes k8s-master-02 node-ingress=\"true\"\n[root@k8s-master-01]# kubectl label nodes k8s-master-03 node-ingress=\"true\"\n\n## 节点打污点\n### master节点我之前已经打过污点，如果你没有打污点，执行下面3条命令。此污点名称需要与yaml文件中pod的容忍污点对应\n[root@k8s-master-01]# kubectl taint nodes k8s-master-01 node-role.kubernetes.io/master=:NoSchedule\n[root@k8s-master-01]# kubectl taint nodes k8s-master-02 node-role.kubernetes.io/master=:NoSchedule\n[root@k8s-master-01]# kubectl taint nodes k8s-master-03 node-role.kubernetes.io/master=:NoSchedule\n```\n\n2、最终配置文件DaemonSet版的Ingress\n```\ncat 
>ingress-nginx-mandatory.yaml<<\\EOF\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n\n---\n\nkind: ConfigMap\napiVersion: v1\nmetadata:\n  name: nginx-configuration\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n\n---\nkind: ConfigMap\napiVersion: v1\nmetadata:\n  name: tcp-services\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n\n---\nkind: ConfigMap\napiVersion: v1\nmetadata:\n  name: udp-services\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: nginx-ingress-serviceaccount\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n\n---\napiVersion: rbac.authorization.k8s.io/v1beta1\nkind: ClusterRole\nmetadata:\n  name: nginx-ingress-clusterrole\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - configmaps\n      - endpoints\n      - nodes\n      - pods\n      - secrets\n    verbs:\n      - list\n      - watch\n  - apiGroups:\n      - \"\"\n    resources:\n      - nodes\n    verbs:\n      - get\n  - apiGroups:\n      - \"\"\n    resources:\n      - services\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - \"\"\n    resources:\n      - events\n    verbs:\n      - create\n      - patch\n  - apiGroups:\n      - \"extensions\"\n      - \"networking.k8s.io\"\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - \"extensions\"\n      - \"networking.k8s.io\"\n    resources:\n      - ingresses/status\n    verbs:\n      - update\n\n---\napiVersion: rbac.authorization.k8s.io/v1beta1\nkind: Role\nmetadata:\n  name: nginx-ingress-role\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - configmaps\n      - pods\n      - secrets\n      - namespaces\n    verbs:\n      - get\n  - apiGroups:\n      - \"\"\n    resources:\n      - configmaps\n    resourceNames:\n      # Defaults to \"<election-id>-<ingress-class>\"\n      # Here: \"<ingress-controller-leader>-<nginx>\"\n      # This has to be adapted if you change either parameter\n      # when launching the nginx-ingress-controller.\n      - \"ingress-controller-leader-nginx\"\n    verbs:\n      - get\n      - update\n  - apiGroups:\n      - \"\"\n    resources:\n      - configmaps\n    verbs:\n      - create\n  - apiGroups:\n      - \"\"\n    resources:\n      - endpoints\n    verbs:\n      - get\n\n---\napiVersion: rbac.authorization.k8s.io/v1beta1\nkind: RoleBinding\nmetadata:\n  name: nginx-ingress-role-nisa-binding\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: nginx-ingress-role\nsubjects:\n  - kind: ServiceAccount\n    name: nginx-ingress-serviceaccount\n    namespace: ingress-nginx\n\n---\napiVersion: rbac.authorization.k8s.io/v1beta1\nkind: 
ClusterRoleBinding\nmetadata:\n  name: nginx-ingress-clusterrole-nisa-binding\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: nginx-ingress-clusterrole\nsubjects:\n  - kind: ServiceAccount\n    name: nginx-ingress-serviceaccount\n    namespace: ingress-nginx\n\n---\n\napiVersion: apps/v1\n#kind: Deployment  #修改为DaemonSet\nkind: DaemonSet\nmetadata:\n  name: nginx-ingress-controller\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\nspec:\n  #replicas: 1  #注销此行，DaemonSet不需要此参数\n  selector:\n    matchLabels:\n      app.kubernetes.io/name: ingress-nginx\n      app.kubernetes.io/part-of: ingress-nginx\n  template:\n    metadata:\n      labels:\n        app.kubernetes.io/name: ingress-nginx\n        app.kubernetes.io/part-of: ingress-nginx\n      annotations:\n        prometheus.io/port: \"10254\"\n        prometheus.io/scrape: \"true\"\n    spec:\n      # wait up to five minutes for the drain of connections\n      terminationGracePeriodSeconds: 300\n      serviceAccountName: nginx-ingress-serviceaccount\n      hostNetwork: true  #添加该字段让docker使用物理机网络，在物理机暴露服务端口（80），注意物理机80端口提前不能被占用\n      dnsPolicy: ClusterFirstWithHostNet  #使用hostNetwork后容器会使用物理机网络包括DNS，会无法解析内部service，使用此参数让容器使用K8S的DNS\n      nodeSelector:\n        kubernetes.io/os: linux\n      nodeSelector:\n        node-ingress: \"true\"  #添加节点标签\n      tolerations:  #添加对指定节点容忍度\n      - key: \"node-role.kubernetes.io/master\"\n        operator: \"Equal\"\n        value: \"\"\n        effect: \"NoSchedule\"\n      containers:\n        - name: nginx-ingress-controller\n          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1\n          args:\n            - /nginx-ingress-controller\n            - --configmap=$(POD_NAMESPACE)/nginx-configuration\n            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services\n            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services\n            - --publish-service=$(POD_NAMESPACE)/ingress-nginx\n            - --annotations-prefix=nginx.ingress.kubernetes.io\n          securityContext:\n            allowPrivilegeEscalation: true\n            capabilities:\n              drop:\n                - ALL\n              add:\n                - NET_BIND_SERVICE\n            # www-data -> 33\n            runAsUser: 33\n          env:\n            - name: POD_NAME\n              valueFrom:\n                fieldRef:\n                  fieldPath: metadata.name\n            - name: POD_NAMESPACE\n              valueFrom:\n                fieldRef:\n                  fieldPath: metadata.namespace\n          ports:\n            - name: http\n              containerPort: 80\n            - name: https\n              containerPort: 443\n          livenessProbe:\n            failureThreshold: 3\n            httpGet:\n              path: /healthz\n              port: 10254\n              scheme: HTTP\n            initialDelaySeconds: 10\n            periodSeconds: 10\n            successThreshold: 1\n            timeoutSeconds: 10\n          readinessProbe:\n            failureThreshold: 3\n            httpGet:\n              path: /healthz\n              port: 10254\n              scheme: HTTP\n            periodSeconds: 10\n            successThreshold: 1\n            timeoutSeconds: 10\n          lifecycle:\n            preStop:\n              exec:\n                
command:\n                  - /wait-shutdown\n\n---\nEOF\n\nkubectl apply -f ingress-nginx-mandatory.yaml\n```\n\n3、创建资源\n```\n[root@k8s-master01 ingress-master]# kubectl apply -f ingress-nginx-mandatory.yaml\n## 查看资源分布情况\n### 可以看到两个ingress-controller已经根据我们选择，部署在3个master节点上\n[root@k8s-master01 ingress-master]# kubectl get pod -n ingress-nginx -o wide\nNAME                             READY   STATUS    RESTARTS   AGE    IP              NODE           NOMINATED NODE   READINESS GATES\nnginx-ingress-controller-298dq   1/1     Running   0          134m   172.16.11.122   k8s-master03   <none>           <none>\nnginx-ingress-controller-sh9h2   1/1     Running   0          134m   172.16.11.121   k8s-master02   <none>           <none>\n```\n\n4、测试\n```\n#配置集群外域名解析，当前测试环境我们使用windows hosts文件进行解析(针对于node节点有公网IP的类型)\n92.168.92.56  www.k8s-devops.com  \n92.168.92.57  www.k8s-devops.com \n92.168.92.58  www.k8s-devops.com\n\n使用域名进行访问:\nwww.k8s-devops.com\n```\n\n参考资料：\n\nhttps://www.cnblogs.com/tchua/p/11174386.html   Kubernetes集群Ingress高可用部署\n\nhttps://github.com/kubernetes/ingress-nginx/blob/04e2ad8fcd51b0741263a37b8e7424ca3979137c/docs/deploy/index.md  官网\n\nhttps://blog.csdn.net/networken/article/details/85881558   kubernetes部署Ingress-nginx\n\nhttps://www.jianshu.com/p/a8e18cef13b2  HA ingress-nginx: DaemonSet hostNetwork keepavlied\n"
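Because the DaemonSet pods run with hostNetwork, every edge node answers on port 80 directly, which doubles as a quick health check for the SLB backends. A minimal sketch, assuming the www.k8s-devops.com rule from above and placeholder edge-node IPs:

```bash
# Each edge node should return the myapp response on port 80
for ip in <master01-ip> <master02-ip> <master03-ip>; do
  curl -s -o /dev/null -w "%{http_code}  ${ip}\n" -H "Host: www.k8s-devops.com" http://${ip}/
done
```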
  },
  {
    "path": "components/ingress/1.外部服务发现之Ingress介绍.md",
    "content": "# 一、ingress介绍\n\n&#8195;&#8195;K8s集群对外暴露服务的方式目前只有三种：`LoadBlancer`、`NodePort`、`Ingress`。前两种熟悉起来比较快，而且使用起来也比较方便，在此就不进行介绍了。\n\n&#8195;&#8195;Ingress其实就是从 kuberenets 集群外部访问集群的一个入口，将外部的请求转发到集群内不同的 Service 上，其实就相当于 nginx、haproxy 等负载均衡代理服务器，有的同学可能觉得我们直接使用 nginx 就实现了，但是只使用 nginx 这种方式有很大缺陷，每次有新服务加入的时候怎么改 Nginx 配置？不可能让我们去手动更改或者滚动更新前端的 Nginx Pod 吧？那我们再加上一个服务发现的工具比如 consul 如何？貌似是可以，对吧？而且在之前单独使用 docker 的时候，这种方式已经使用得很普遍了，Ingress 实际上就是这样实现的，只是服务发现的功能自己实现了，不需要使用第三方的服务了，然后再加上一个域名规则定义，路由信息的刷新需要一个靠 Ingress controller 来提供。\n\n&#8195;&#8195;其中ingress controller目前主要有两种：基于`nginx`服务的ingress controller和基于`traefik`的ingress controller。而其中traefik的ingress controller，目前支持http和https协议\n\n# 二、ingress的工作原理\n\n## 1、ingress由两部分组成: ingress controller和ingress服务\n\n&#8195;&#8195;Ingress controller 可以理解为一个监听器，通过不断地与 kube-apiserver 打交道，实时的感知后端 service、pod 的变化，当得到这些变化信息后，Ingress controller 再结合 Ingress 的配置，更新反向代理负载均衡器，达到服务发现的作用。其实这点和服务发现工具 consul consul-template 非常类似。\n\n## 2、ingress具体的工作原理如下\n\n&#8195;&#8195;ingress contronler通过与k8s的api进行交互，动态的去感知k8s集群中ingress服务规则的变化，然后读取它，并按照定义的ingress规则，转发到k8s集群中对应的service。而这个ingress规则写明了哪个域名对应k8s集群中的哪个service，然后再根据ingress-controller中的nginx配置模板，生成一段对应的nginx配置。然后再把该配置动态的写到ingress-controller的pod里，该ingress-controller的pod里面运行着一个nginx服务，控制器会把生成的nginx配置写入到nginx的配置文件中，然后reload一下，使其配置生效。以此来达到域名分配置及动态更新的效果。\n\n# 三、Traefik\n\n&#8195;&#8195;Traefik 是一款开源的反向代理与负载均衡工具。它最大的优点是能够与常见的微服务系统直接整合，可以实现自动化动态配置。目前支持 Docker、Swarm、Mesos/Marathon、 Mesos、Kubernetes、Consul、Etcd、Zookeeper、BoltDB、Rest API 等等后端模型。\n\n&#8195;&#8195;要使用 traefik，我们同样需要部署 traefik 的 Pod，由于我们演示的集群中只有 master 节点有外网网卡，所以我们这里只有 master 这一个边缘节点，我们将 traefik 部署到该节点上即可。\n\n  ![traefik原理图](https://github.com/Lancger/opsfull/blob/master/images/traefik-architecture.png)\n\n- 1、 首先，为安全起见我们这里使用 RBAC 安全认证方式：([rbac.yaml](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-rbac.yaml))\n\n```\n# vim rbac.yaml\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - pods\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\n```\n\n- 2、直接在集群中创建即可：\n\n```\n$ kubectl create -f rbac.yaml\n\nserviceaccount \"traefik-ingress-controller\" created\nclusterrole.rbac.authorization.k8s.io \"traefik-ingress-controller\" created\nclusterrolebinding.rbac.authorization.k8s.io \"traefik-ingress-controller\" created\n```\n\n- 3、然后使用 Deployment 来管理 traefik Pod，直接使用官方的 traefik 镜像部署即可（[traefik.yaml](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-deployment.yaml)）\n\n```\n# vim traefik.yaml\n---\nkind: Deployment\napiVersion: extensions/v1beta1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      k8s-app: traefik-ingress-lb\n  template:\n    metadata:\n      labels:\n        
k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      tolerations:\n      - operator: \"Exists\"\n      nodeSelector:\n        kubernetes.io/hostname: master  #默认master是不允许被调度的，加上tolerations后允许被调度,然后这里使用自身机器master的地址,可以使用kubectl get nodes --show-labels来查看\n      containers:\n      - image: traefik:v1.7\n        name: traefik-ingress-lb\n        ports:\n        - name: http\n          containerPort: 80\n          #hostPort: 80\n        - name: admin\n          containerPort: 8080\n        args:\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      # 该端口为 traefik ingress-controller的服务端口\n      port: 80\n      name: web\n      # 集群hosts文件中设置的 NODE_PORT_RANGE 作为 NodePort的可用范围\n      # 从默认20000~40000之间选一个可用端口，让ingress-controller暴露给外部的访问\n      nodePort: 23456\n    - protocol: TCP\n      # 该端口为 traefik 的管理WEB界面\n      port: 8080\n      name: admin\n      nodePort: 23457\n  type: NodePort\n```\n\n- 4、直接创建上面的资源对象即可：\n\n```\n$ kubectl create -f traefik.yaml\n\ndeployment.extensions \"traefik-ingress-controller\" created\nservice \"traefik-ingress-service\" created\n```\n\n- 5、要注意上面 yaml 文件：\n\n```\ntolerations:\n- operator: \"Exists\"\nnodeSelector:\n  kubernetes.io/hostname: master\n\n由于我们这里的特殊性，只有 master 节点有外网访问权限，所以我们使用nodeSelector标签将traefik的固定调度到master这个节点上，那么上面的tolerations是干什么的呢？这个是因为我们集群使用的 kubeadm 安装的，master 节点默认是不能被普通应用调度的，要被调度的话就需要添加这里的 tolerations 属性，当然如果你的集群和我们的不太一样，直接去掉这里的调度策略就行。\n\nnodeSelector 和 tolerations 都属于 Pod 的调度策略，在后面的课程中会为大家讲解。\n\n```\n\n- 6、traefik 还提供了一个 web ui 工具，就是上面的 8080 端口对应的服务，为了能够访问到该服务，我们这里将服务设置成的 NodePort：\n\n```\n$ kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb -o wide\nNAME                                          READY     STATUS    RESTARTS   AGE       IP            NODE\ntraefik-ingress-controller-57c4f787d9-bfhnl   1/1       Running   0          8m        10.244.0.18   master\n$ kubectl get svc -n kube-system\nNAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE\n...\ntraefik-ingress-service   NodePort    10.102.183.112   <none>        80:23456/TCP,8080:23457/TCP   8m\n...\n```\n\n现在在浏览器中输入 [http://master_node_ip:23457 例如 http://16.21.206.156:23457/dashboard/ 注意这里是使用的IP] 就可以访问到 traefik 的 dashboard 了\n\n# 四、Ingress 对象\n\n以上我们是通过 NodePort 来访问 traefik 的 Dashboard 的，那怎样通过 ingress 来访问呢？ 首先，需要创建一个 ingress 对象：(ingress.yaml)\n\n```\n# vim ingress.yaml\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  rules:\n  - host: traefik-ui.test.com\n    http:\n      paths:\n      - backend:\n          serviceName: traefik-ingress-service\n          #servicePort: 8080\n          servicePort: admin  #跟上面service的name对应\n```\n\n然后为 traefik dashboard 创建对应的 ingress 对象：\n\n```\n$ kubectl create -f ingress.yaml\n\ningress.extensions \"traefik-web-ui\" created\n```\n\n要注意上面的 ingress 对象的规则，特别是 rules 区域，我们这里是要为 traefik 的 dashboard 建立一个 ingress 对象，所以这里的 serviceName 对应的是上面我们创建的 traefik-ingress-service，端口也要注意对应 8080 端口，为了避免端口更改，这里的 servicePort 的值也可以替换成上面定义的 port 的名字：admin\n\n创建完成后，我们应该怎么来测试呢？\n\n- 1、第一步，在本地的/etc/hosts里面添加上 traefik-ui.test.com 与 master 节点外网 IP 的映射关系\n\n- 
2、第二步，在浏览器中访问：http://traefik-ui.test.com 我们会发现并没有得到我们期望的 dashboard 界面，这是因为我们上面部署 traefik 的时候使用的是 NodePort 这种 Service 对象，所以我们只能通过上面的 23456 端口访问到我们的目标对象：[http://traefik-ui.test.com:23456](http://traefik-ui.test.com:23456) 加上端口后我们发现可以访问到 dashboard 了，而且在 dashboard 当中多了一条记录，正是上面我们创建的 ingress 对象的数据，我们还可以切换到 HEALTH 界面中，可以查看当前 traefik 代理的服务的整体的健康状态 \n\n&#8195;&#8195;注意这里为何是23456而不是23457，因为这里是通过ingress设置的域名来访问的，宿主机的23456端口对应宿主机上traefik-ingress-controller-nginx-pod容器的80端口，然后再经过ingress代理到service对应的pod节点上，如果traefik-ingress-controller-nginx-pod设置了宿主机端口映射，那么可以省略23456端口，下面会讲到hostPort: 80参数的使用，因为走了多层代理，所以直接Nodeport方式的性能会好一些，但是量一多，维护起来就比较麻烦）\n\n- 3、第三步，上面我们可以通过自定义域名加上端口可以访问我们的服务了，但是我们平时服务别人的服务是不是都是直接用的域名啊，http 或者 https 的，几乎很少有在域名后面加上端口访问的吧？为什么？太麻烦啊，端口也记不住，要解决这个问题，怎么办，我们只需要把我们上面的 traefik 的核心应用的端口隐射到 master 节点上的 80 端口，是不是就可以了，因为 http 默认就是访问 80 端口，但是我们在 Service 里面是添加的一个 NodePort 类型的服务，没办法映射 80 端口，怎么办？这里就可以直接在 Pod 中指定一个 hostPort 即可，更改上面的 traefik.yaml 文件中的容器端口：\n\n```\ncontainers:\n- image: traefik\nname: traefik-ingress-lb\nports:\n- name: http\n  containerPort: 80\n  hostPort: 80\n- name: admin\n  containerPort: 8080\n```\n\n添加以后 hostPort: 80，然后更新应用：\n\n```\n$ kubectl apply -f traefik.yaml\n```\n\n更新完成后，这个时候我们在浏览器中直接使用域名方法测试下：\n\n- 4、第四步，正常来说，我们如果有自己的域名，我们可以将我们的域名添加一条 DNS 记录，解析到 master 的外网 IP 上面，这样任何人都可以通过域名来访问我的暴露的服务了。如果你有多个边缘节点的话，可以在每个边缘节点上部署一个 ingress-controller 服务，然后在边缘节点前面挂一个负载均衡器，比如 nginx，将所有的边缘节点均作为这个负载均衡器的后端，这样就可以实现 ingress-controller 的高可用和负载均衡了。\n\n到这里我们就通过 ingress 对象对外成功暴露了一个服务，下节课我们再来详细了解 traefik 的更多用法。\n\n# 五、traefik 合并文件\n\n1、创建文件 traefik-controller-ingress.yaml\n```\nvim traefik-controller-ingress.yaml\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - pods\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\n\n---\nkind: Deployment\napiVersion: extensions/v1beta1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      k8s-app: traefik-ingress-lb\n  template:\n    metadata:\n      labels:\n        k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      tolerations:\n      - operator: \"Exists\"\n      nodeSelector:\n        kubernetes.io/hostname: master  #默认master是不允许被调度的，加上tolerations后允许被调度,然后这里使用自身机器master的地址,可以使用kubectl get nodes来查看\n      containers:\n      - image: traefik:v1.7\n        name: traefik-ingress-lb\n        ports:\n        - name: http\n          containerPort: 80\n          hostPort: 80\n        - name: admin\n          containerPort: 8080\n        args:\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\n\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  
namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      # 该端口为 traefik ingress-controller的服务端口\n      port: 80\n      name: web\n      # 集群hosts文件中设置的 NODE_PORT_RANGE 作为 NodePort的可用范围\n      # 从默认20000~40000之间选一个可用端口，让ingress-controller暴露给外部的访问\n      nodePort: 23456\n    - protocol: TCP\n      # 该端口为 traefik 的管理WEB界面\n      port: 8080\n      name: admin\n      nodePort: 23457\n  type: NodePort\n\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  rules:\n  - host: traefik-ui.test.com\n    http:\n      paths:\n      - backend:\n          serviceName: traefik-ingress-service\n          #servicePort: 8080\n          servicePort: admin  #跟上面service的name对应\n```\n\n2、更新应用\n\n```\n$ kubectl apply -f traefik-controller-ingress.yaml\n```\n\n3、访问测试\n\n```\nhttp://traefik-ui.test.com    绑定master的公网IP或者VIP\n```\n\nhttps://blog.csdn.net/oyym_mv/article/details/86986510  Kubernetes实录(11) kubernetes使用traefik作为反向代理（Deamonset模式）\n"
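For a quick test the hosts-file entry can also be skipped by pinning the name on the curl command line. A minimal sketch, assuming 192.168.0.10 as a placeholder for the master IP or VIP:

```bash
# Resolve traefik-ui.test.com to the master/VIP for this request only
curl --resolve traefik-ui.test.com:80:192.168.0.10 http://traefik-ui.test.com/dashboard/

# Equivalent check with an explicit Host header
curl -H "Host: traefik-ui.test.com" http://192.168.0.10/dashboard/
```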
  },
  {
    "path": "components/ingress/2.ingress tls配置.md",
    "content": "# 1、Ingress tls\n\n上节课给大家展示了 traefik 的安装使用以及简单的 ingress 的配置方法，这节课我们来学习一下 ingress tls 以及 path 路径在 ingress 对象中的使用方法。\n\n# 2、TLS 认证\n\n在现在大部分场景下面我们都会使用 https 来访问我们的服务，这节课我们将使用一个自签名的证书，当然你有在一些正规机构购买的 CA 证书是最好的，这样任何人访问你的服务的时候都是受浏览器信任的证书。使用下面的 openssl 命令生成 CA 证书：\n```\nmkdir -p /ssl/\ncd /ssl/\nopenssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt\n```\n现在我们有了证书，我们可以使用 kubectl 创建一个 secret 对象来存储上面的证书：\n```\nkubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system\n```\n# 3、配置 Traefik\n\n前面我们使用的是 Traefik 的默认配置，现在我们来配置 Traefik，让其支持 https：\n\n```\nmkdir -p /config/\ncd /config/\n\ncat > traefik.toml <<\\EOF\ndefaultEntryPoints = [\"http\", \"https\"]\n\n[entryPoints]\n  [entryPoints.http]\n  address = \":80\"\n    [entryPoints.http.redirect]\n      entryPoint = \"https\"\n  [entryPoints.https]\n  address = \":443\"\n    [entryPoints.https.tls]\n      [[entryPoints.https.tls.certificates]]\n      CertFile = \"/ssl/tls.crt\"\n      KeyFile = \"/ssl/tls.key\"\nEOF\n \n上面的配置文件中我们配置了 http 和 https 两个入口，并且配置了将 http 服务强制跳转到 https 服务，这样我们所有通过 traefik 进来的服务都是 https 的，要访问 https 服务，当然就得配置对应的证书了，可以看到我们指定了 CertFile 和 KeyFile 两个文件，由于 traefik pod 中并没有这两个证书，所以我们要想办法将上面生成的证书挂载到 Pod 中去，是不是前面我们讲解过 secret 对象可以通过 volume 形式挂载到 Pod 中？至于上面的 traefik.toml 这个文件我们要怎么让 traefik pod 能够访问到呢？还记得我们前面讲过的 ConfigMap 吗？我们是不是可以将上面的 traefik.toml 配置文件通过一个 ConfigMap 对象挂载到 traefik pod 中去：\n\nkubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system\n\nroot># kubectl get configmap -n kube-system\nNAME                                 DATA   AGE\ncoredns                              1      11h\nextension-apiserver-authentication   6      11h\nkube-flannel-cfg                     2      11h\nkube-proxy                           2      11h\nkubeadm-config                       2      11h\nkubelet-config-1.15                  1      11h\ntraefik-conf                         1      10s\n\n现在就可以更改下上节课的 traefik pod 的 yaml 文件了：\n\ncd /data/components/ingress/\n\ncat > traefik.yaml <<\\EOF\nkind: Deployment\napiVersion: extensions/v1beta1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      k8s-app: traefik-ingress-lb\n  template:\n    metadata:\n      labels:\n        k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      volumes:\n      - name: ssl\n        secret:\n          secretName: traefik-cert\n      - name: config\n        configMap:\n          name: traefik-conf\n      tolerations:\n      - operator: \"Exists\"\n      nodeSelector:\n        kubernetes.io/hostname: linux-node1.example.com\n      containers:\n      - image: traefik\n        name: traefik-ingress-lb\n        volumeMounts:\n        - mountPath: \"/ssl\"  #这里注意挂载的路径\n          name: \"ssl\"\n        - mountPath: \"/config\" #这里注意挂载的路径\n          name: \"config\"\n        ports:\n        - name: http\n          containerPort: 80\n          hostPort: 80\n        - name: https\n          containerPort: 443\n          hostPort: 443\n        - name: admin\n          containerPort: 8080\n        args:\n        - --configfile=/config/traefik.toml\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\nEOF\n\n和之前的比较，我们增加了 443 的端口配置，以及启动参数中通过 configfile 指定了 traefik.toml 配置文件，这个配置文件是通过 volume 挂载进来的。然后更新下 traefik pod:\n\nkubectl apply -f 
# 4. Configuring the ingress\n\nThe TLS setup above already works. Next, an example to show path usage in an ingress. We deploy 3 simple web services, each identified by an environment variable that says which service is running: (backend.yaml)\n\n```\ncd /data/components/ingress/\n\ncat > backend.yaml <<\\EOF\nkind: Deployment\napiVersion: extensions/v1beta1\nmetadata:\n  name: svc1\nspec:\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: svc1\n    spec:\n      containers:\n      - name: svc1\n        image: cnych/example-web-service\n        env:\n        - name: APP_SVC\n          value: svc1\n        ports:\n        - containerPort: 8080\n          protocol: TCP\n\n---\nkind: Deployment\napiVersion: extensions/v1beta1\nmetadata:\n  name: svc2\nspec:\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: svc2\n    spec:\n      containers:\n      - name: svc2\n        image: cnych/example-web-service\n        env:\n        - name: APP_SVC\n          value: svc2\n        ports:\n        - containerPort: 8080\n          protocol: TCP\n\n---\nkind: Deployment\napiVersion: extensions/v1beta1\nmetadata:\n  name: svc3\nspec:\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: svc3\n    spec:\n      containers:\n      - name: svc3\n        image: cnych/example-web-service\n        env:\n        - name: APP_SVC\n          value: svc3\n        ports:\n        - containerPort: 8080\n          protocol: TCP\n\n---\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    app: svc1\n  name: svc1\nspec:\n  type: ClusterIP\n  ports:\n  - port: 8080\n    name: http\n  selector:\n    app: svc1\n\n---\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    app: svc2\n  name: svc2\nspec:\n  type: ClusterIP\n  ports:\n  - port: 8080\n    name: http\n  selector:\n    app: svc2\n\n---\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    app: svc3\n  name: svc3\nspec:\n  type: ClusterIP\n  ports:\n  - port: 8080\n    name: http\n  selector:\n    app: svc3\nEOF\n```\n\nAs you can see, we define 3 Deployments with 3 matching Services:\n\n```\nkubectl create -f backend.yaml\n```\n\nThen create an ingress object to reach the 3 services above: (example-ingress.yaml)\n\n```\ncat > example-ingress.yaml <<\\EOF\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: example-web-app\n  annotations:\n    kubernetes.io/ingress.class: \"traefik\"\nspec:\n  rules:\n  - host: example.k8s.com\n    http:\n      paths:\n      - path: /s1\n        backend:\n          serviceName: svc1\n          servicePort: 8080\n      - path: /s2\n        backend:\n          serviceName: svc2\n          servicePort: 8080\n      - path: /\n        backend:\n          serviceName: svc3\n          servicePort: 8080\nEOF\n```\n\nNote that this ingress object differs from the earlier ones in that it defines path rules; when unspecified the default is '/'. Create the ingress object:\n\n```\nkubectl create -f example-ingress.yaml\n```\n\nNow add a local hosts entry for the domain example.k8s.com and open it in the browser; by default it also redirects to the https page.\n\nReference:\n\nhttps://www.qikqiak.com/k8s-book/docs/41.ingress%20config.html\n"
  },
  {
    "path": "components/ingress/3.ingress-http使用示例.md",
    "content": "# 一、ingress-http测试示例\n\n## 1、关键三个点：\n\n    注意这3个资源的namespace: kube-system需要一致\n\n    Deployment\n\n    Service\n\n    Ingress\n\n```\n$ vim nginx-deployment-http.yaml\n\n---\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  namespace: kube-system\nspec:\n  replicas: 2\n  template:\n    metadata:\n      labels:\n        app: nginx-pod\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.15.5\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-service\n  namespace: kube-system\n  annotations:\n    traefik.ingress.kubernetes.io/load-balancer-method: drr  #动态加权轮训调度\nspec:\n  template:\n    metadata:\n      labels:\n        name: nginx-service\nspec:\n  selector:\n    app: nginx-pod\n  ports:\n  - port: 80\n    targetPort: 80\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: nginx-ingress\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  rules:\n  - host: k8s.nginx.com\n    http:\n      paths:\n      - backend:\n          serviceName: nginx-service\n          servicePort: 80\n```\n\n## 2、创建资源\n\n```\n$ kubectl apply -f nginx-deployment-http.yaml\n\ndeployment.apps/nginx-pod create\nservice/nginx-service create\ningress.extensions/nginx-ingress create\n```\n\n## 3、访问刚创建的资源\n\n首先这里需要先找到traefik-ingress pod 分布到到了那个节点，这里我们发现是落在了10.199.1.159的节点，然后我们绑定该节点对应的公网IP，这里假设为16.21.26.139\n\n```\n16.21.26.139 k8s.nginx.com\n```\n\n```\n$ kubectl get pod -A -o wide|grep traefik-ingress\nkube-system   traefik-ingress-controller-7d454d7c68-8qpjq   1/1     Running   0          21h   10.46.2.10    10.199.1.159   <none>           <none>\n```\n\n  ![ingress测试示例1](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-01.png)\n\n\n## 4、清理资源\n\n### 1、清理deployment\n```\n# 获取deployment\n$ kubectl get deploy -A\n\nNAMESPACE     NAME                         READY   UP-TO-DATE   AVAILABLE   AGE\nkube-system   coredns                      2/2     2            2           3d\nkube-system   heapster                     1/1     1            1           3d\nkube-system   kubernetes-dashboard         1/1     1            1           3d\nkube-system   metrics-server               1/1     1            1           3d\nkube-system   nginx-pod                    2/2     2            2           25m\nkube-system   traefik-ingress-controller   1/1     1            1           2d22h\n\n# 清理deployment\n$ kubectl delete deploy nginx-pod -n kube-system\n\ndeployment.extensions \"nginx-pod\" deleted\n```\n\n### 2、清理service\n```\n# 获取svc\n$ kubectl get svc -A\n\nNAMESPACE     NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE\ndefault       kubernetes                ClusterIP   10.44.0.1       <none>        443/TCP                                     3d\nkube-system   heapster                  ClusterIP   10.44.158.46    <none>        80/TCP                                      3d\nkube-system   kube-dns                  ClusterIP   10.44.0.2       <none>        53/UDP,53/TCP,9153/TCP                      3d\nkube-system   kubernetes-dashboard      NodePort    10.44.176.99    <none>        443:27008/TCP                               3d\nkube-system   metrics-server            ClusterIP   10.44.40.157    <none>        443/TCP                                     3d\nkube-system   nginx-service             ClusterIP   10.44.148.252   <none>        80/TCP                             
## 4. Clean up the resources\n\n### 1. Clean up the deployment\n```\n# list deployments\n$ kubectl get deploy -A\n\nNAMESPACE     NAME                         READY   UP-TO-DATE   AVAILABLE   AGE\nkube-system   coredns                      2/2     2            2           3d\nkube-system   heapster                     1/1     1            1           3d\nkube-system   kubernetes-dashboard         1/1     1            1           3d\nkube-system   metrics-server               1/1     1            1           3d\nkube-system   nginx-deployment             2/2     2            2           25m\nkube-system   traefik-ingress-controller   1/1     1            1           2d22h\n\n# delete the deployment\n$ kubectl delete deploy nginx-deployment -n kube-system\n\ndeployment.extensions \"nginx-deployment\" deleted\n```\n\n### 2. Clean up the service\n```\n# list services\n$ kubectl get svc -A\n\nNAMESPACE     NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE\ndefault       kubernetes                ClusterIP   10.44.0.1       <none>        443/TCP                                     3d\nkube-system   heapster                  ClusterIP   10.44.158.46    <none>        80/TCP                                      3d\nkube-system   kube-dns                  ClusterIP   10.44.0.2       <none>        53/UDP,53/TCP,9153/TCP                      3d\nkube-system   kubernetes-dashboard      NodePort    10.44.176.99    <none>        443:27008/TCP                               3d\nkube-system   metrics-server            ClusterIP   10.44.40.157    <none>        443/TCP                                     3d\nkube-system   nginx-service             ClusterIP   10.44.148.252   <none>        80/TCP                                      28m\nkube-system   traefik-ingress-service   NodePort    10.44.67.195    <none>        80:23456/TCP,443:23457/TCP,8080:33192/TCP   2d22h\n\n# delete the service\n$ kubectl delete svc nginx-service -n kube-system\n\nservice \"nginx-service\" deleted\n```\n\n### 3. Clean up the ingress\n\n```\n# list ingresses\n$ kubectl get ingress -A\n\nNAMESPACE     NAME                   HOSTS                 ADDRESS   PORTS   AGE\nkube-system   kubernetes-dashboard   dashboard.test.com              80      2d22h\nkube-system   nginx-ingress          k8s.nginx.com                   80      29m\nkube-system   traefik-web-ui         traefik-ui.test.com             80      2d22h\n\n# delete the ingress\n$ kubectl delete ingress nginx-ingress -n kube-system\n\ningress.extensions \"nginx-ingress\" deleted\n```\n\n\nReference:\n\nhttps://xuchao918.github.io/2019/03/01/Kubernetes-traefik-ingress%E4%BD%BF%E7%94%A8/     Kubernetes traefik ingress usage\n"
  },
  {
    "path": "components/ingress/4.ingress-https使用示例.md",
    "content": "# 一、ingress-https测试示例\n\n1、TLS 认证\n\n在现在大部分场景下面我们都会使用 https 来访问我们的服务，这节课我们将使用一个自签名的证书，当然你有在一些正规机构购买的 CA 证书是最好的，这样任何人访问你的服务的时候都是受浏览器信任的证书。使用下面的 openssl 命令生成 CA 证书：\n\n```\nmkdir -p /ssl-k8s/\ncd /ssl-k8s/\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_k8s.key -out tls_k8s.crt -subj \"/CN=hello.k8s.com\"\n```\n\n现在我们有了证书，我们可以使用 kubectl 创建一个 secret 对象来存储上面的证书：(这个需手动执行创建好)\n\n```\nkubectl create secret generic traefik-k8s --from-file=tls_k8s.crt --from-file=tls_k8s.key -n kube-system\n```\n\n```\n# vim /config/traefik.toml\n\ndefaultEntryPoints = [\"http\", \"https\"]\n[entryPoints]\n  [entryPoints.http]\n    address = \":80\"\n  [entryPoints.https]\n    address = \":443\"\n    [entryPoints.https.tls]\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/ssl/tls_first.crt\"\n        KeyFile = \"/ssl/tls_first.key\"\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/ssl/tls_second.crt\"\n        KeyFile = \"/ssl/tls_second.key\"\n```\n\n## 1、关键五个点：\n\n    注意这5个资源的namespace: kube-system需要一致\n\n    secret          ---secret 对象来存储ssl证书\n    \n    configmap       ---configmap 用来保存一个或多个key/value信息\n    \n    Deployment\n\n    Service\n\n    Ingress\n\n## 2、合并创建secret，configmap以及traefik文件\n```\n# vim traefik-controller-https.yaml\n\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: traefik-conf\n  namespace: kube-system\ndata:\n  traefik.toml: |\n    insecureSkipVerify = true\n    defaultEntryPoints = [\"http\", \"https\"]\n    [entryPoints]\n      [entryPoints.http]\n        address = \":80\"\n      [entryPoints.https]\n        address = \":443\"\n        [entryPoints.https.tls]\n          [[entryPoints.https.tls.certificates]]\n            CertFile = \"/ssl/tls_first.crt\"\n            KeyFile = \"/ssl/tls_first.key\"\n          [[entryPoints.https.tls.certificates]]\n            CertFile = \"/ssl/tls_second.crt\"\n            KeyFile = \"/ssl/tls_second.key\"\n---\nkind: Deployment\napiVersion: apps/v1beta1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      k8s-app: traefik-ingress-lb\n  template:\n    metadata:\n      labels:\n        k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      volumes:\n      - name: ssl\n        secret:\n          secretName: traefik-cert\n      - name: config\n        configMap:\n          name: traefik-conf\n      #nodeSelector:\n      #  node-role.kubernetes.io/traefik: \"true\"\n      containers:\n      - image: traefik:v1.7.12\n        imagePullPolicy: IfNotPresent\n        name: traefik-ingress-lb\n        volumeMounts:\n        - mountPath: \"/ssl\"\n          name: \"ssl\"\n        - mountPath: \"/config\"\n          name: \"config\"\n        resources:\n          limits:\n            cpu: 1000m\n            memory: 800Mi\n          requests:\n            cpu: 500m\n            memory: 600Mi\n        args:\n        - --configfile=/config/traefik.toml\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\n        securityContext:\n          capabilities:\n            drop:\n              - ALL\n            add:\n              - NET_BIND_SERVICE\n        ports:\n          - name: http\n            containerPort: 80\n            hostPort: 80\n          - name: https\n            containerPort: 443\n            hostPort: 
---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      # service port of the traefik ingress-controller\n      port: 80\n      # NODE_PORT_RANGE from the cluster hosts file defines the usable NodePort range\n      # pick a free port in the default 20000~40000 range to expose the ingress-controller externally\n      nodePort: 23456\n      name: http\n    - protocol: TCP\n      port: 443\n      nodePort: 23457\n      name: https\n    - protocol: TCP\n      # admin web UI port of traefik\n      port: 8080\n      name: admin\n  type: NodePort\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - pods\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n```\n# 2. Application test example\n```\n$ vim nginx-deployment-https.yaml\n\n---\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  namespace: kube-system\nspec:\n  replicas: 2\n  template:\n    metadata:\n      labels:\n        app: nginx-pod\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.15.5\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-service\n  namespace: kube-system\n  annotations:\n    traefik.ingress.kubernetes.io/load-balancer-method: drr  # dynamic weighted round-robin\nspec:\n  selector:\n    app: nginx-pod\n  ports:\n  - port: 80\n    targetPort: 80\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: nginx-ingress\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  rules:\n  - host: k8s.nginx.com\n    http:\n      paths:\n      - backend:\n          serviceName: nginx-service\n          servicePort: 80\n  tls:\n  - secretName: traefik-k8s\n```\n\n## 2. Create the resources\n\n```\n$ kubectl apply -f nginx-deployment-https.yaml\n\ndeployment.apps/nginx-deployment created\nservice/nginx-service created\ningress.extensions/nginx-ingress created\n```\n\n## 3. Access the resources just created\n\nFirst find which node the traefik-ingress pod landed on; here it is the 10.199.1.159 node, so we bind that node's public IP, assumed here to be 16.21.26.139:\n\n```\n16.21.26.139 k8s.nginx.com\n```\n\n```\n$ kubectl get pod -A -o wide|grep traefik-ingress\nkube-system   traefik-ingress-controller-7d454d7c68-8qpjq   1/1     Running   0          21h   10.46.2.10    10.199.1.159   <none>           <none>\n```\n\n  ![ingress example 1](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-01.png)\n\n
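To check that traefik is serving the expected certificate for this host, you can inspect the TLS handshake directly. A sketch only, reusing the assumed IP above; -servername sends SNI so traefik can pick the matching certificate:\n\n```\nopenssl s_client -connect 16.21.26.139:443 -servername k8s.nginx.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates\n```\n\n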
## 4. Clean up the resources\n\n### 1. Clean up the deployment\n```\n# list deployments\n$ kubectl get deploy -A\n\nNAMESPACE     NAME                         READY   UP-TO-DATE   AVAILABLE   AGE\nkube-system   coredns                      2/2     2            2           3d\nkube-system   heapster                     1/1     1            1           3d\nkube-system   kubernetes-dashboard         1/1     1            1           3d\nkube-system   metrics-server               1/1     1            1           3d\nkube-system   nginx-deployment             2/2     2            2           25m\nkube-system   traefik-ingress-controller   1/1     1            1           2d22h\n\n# delete the deployment\n$ kubectl delete deploy nginx-deployment -n kube-system\n\ndeployment.extensions \"nginx-deployment\" deleted\n```\n\n### 2. Clean up the service\n```\n# list services\n$ kubectl get svc -A\n\nNAMESPACE     NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE\ndefault       kubernetes                ClusterIP   10.44.0.1       <none>        443/TCP                                     3d\nkube-system   heapster                  ClusterIP   10.44.158.46    <none>        80/TCP                                      3d\nkube-system   kube-dns                  ClusterIP   10.44.0.2       <none>        53/UDP,53/TCP,9153/TCP                      3d\nkube-system   kubernetes-dashboard      NodePort    10.44.176.99    <none>        443:27008/TCP                               3d\nkube-system   metrics-server            ClusterIP   10.44.40.157    <none>        443/TCP                                     3d\nkube-system   nginx-service             ClusterIP   10.44.148.252   <none>        80/TCP                                      28m\nkube-system   traefik-ingress-service   NodePort    10.44.67.195    <none>        80:23456/TCP,443:23457/TCP,8080:33192/TCP   2d22h\n\n# delete the service\n$ kubectl delete svc nginx-service -n kube-system\n\nservice \"nginx-service\" deleted\n```\n\n### 3. Clean up the ingress\n\n```\n# list ingresses\n$ kubectl get ingress -A\n\nNAMESPACE     NAME                   HOSTS                 ADDRESS   PORTS   AGE\nkube-system   kubernetes-dashboard   dashboard.test.com              80      2d22h\nkube-system   nginx-ingress          k8s.nginx.com                   80      29m\nkube-system   traefik-web-ui         traefik-ui.test.com             80      2d22h\n\n# delete the ingress\n$ kubectl delete ingress nginx-ingress -n kube-system\n\ningress.extensions \"nginx-ingress\" deleted\n```\n\n\nReferences:\n\nhttps://xuchao918.github.io/2019/03/01/Kubernetes-traefik-ingress%E4%BD%BF%E7%94%A8/     Kubernetes traefik ingress usage\n\nhttp://docs.kubernetes.org.cn/558.html\n"
  },
  {
    "path": "components/ingress/5.hello-tls.md",
    "content": "# 证书文件\n\n1、生成证书\n```\nmkdir -p /ssl/{default,first,second}\ncd /ssl/default/\nopenssl req -x509 -nodes -days 165 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \"/CN=k8s.test.com\"\nkubectl -n kube-system create secret tls traefik-cert --key=tls.key --cert=tls.crt\n\ncd /ssl/first/\nopenssl req -x509 -nodes -days 265 -newkey rsa:2048 -keyout tls_first.key -out tls_first.crt -subj \"/CN=k8s.first.com\"\nkubectl create secret generic first-k8s --from-file=tls_first.crt --from-file=tls_first.key -n kube-system\n\ncd /ssl/second/\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_second.key -out tls_second.crt -subj \"/CN=k8s.second.com\"\nkubectl create secret generic second-k8s --from-file=tls_second.crt --from-file=tls_second.key -n kube-system\n\n#查看证书\nkubectl get secret traefik-cert first-k8s second-k8s -n kube-system\nkubectl describe secret traefik-cert first-k8s second-k8s -n kube-system\n```\n\n2、删除证书\n\n```\n$ kubectl delete secret traefik-cert first-k8s second-k8s -n kube-system\n\nsecret \"second-k8s\" deleted\nsecret \"traefik-cert\" deleted\nsecret \"first-k8s\" deleted\n```\n\n# 证书配置\n\n1、创建configMap(cm)\n\n```\nmkdir -p /config/\ncd /config/\n\n$  vim traefik.toml\ndefaultEntryPoints = [\"http\", \"https\"]\n[entryPoints]\n  [entryPoints.http]\n    address = \":80\"\n  [entryPoints.https]\n    address = \":443\"\n    [entryPoints.https.tls]\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/ssl/default/tls.crt\"\n        KeyFile = \"/ssl/default/tls.key\"\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/ssl/first/tls_first.crt\"\n        KeyFile = \"/ssl/first/tls_first.key\"\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/ssl/second/tls_second.crt\"\n        KeyFile = \"/ssl/second/tls_second.key\"\n        \n$ kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system\n\n$ kubectl get configmap traefik-conf -n kube-system\n\n$ kubectl describe cm traefik-conf -n kube-system\n```\n2、删除configMap(cm)\n\n```\n$ kubectl delete cm traefik-conf -n kube-system\n```\n\n# traefik-ingress-controller文件\n\n1、创建文件\n```\n$ vim traefik-controller-tls.yaml\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: traefik-conf\n  namespace: kube-system\ndata:\n  traefik.toml: |\n    insecureSkipVerify = true\n    defaultEntryPoints = [\"http\", \"https\"]\n    [entryPoints]\n      [entryPoints.http]\n        address = \":80\"\n      [entryPoints.https]\n        address = \":443\"\n        [entryPoints.https.tls]\n          [[entryPoints.https.tls.certificates]]\n            CertFile = \"/ssl/default/tls.crt\"\n            KeyFile = \"/ssl/default/tls.key\"\n          [[entryPoints.https.tls.certificates]]\n            CertFile = \"/ssl/first/tls_first.crt\"\n            KeyFile = \"/ssl/first/tls_first.key\"\n          [[entryPoints.https.tls.certificates]]\n            CertFile = \"/ssl/second/tls_second.crt\"\n            KeyFile = \"/ssl/second/tls_second.key\"\n---\nkind: Deployment\napiVersion: apps/v1beta1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      k8s-app: traefik-ingress-lb\n  template:\n    metadata:\n      labels:\n        k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      volumes:\n      - name: ssl\n        
secret:\n          secretName: traefik-cert\n      - name: config\n        configMap:\n          name: traefik-conf\n      #nodeSelector:\n      #  node-role.kubernetes.io/traefik: \"true\"\n      containers:\n      - image: traefik:v1.7.12\n        imagePullPolicy: IfNotPresent\n        name: traefik-ingress-lb\n        volumeMounts:\n        - mountPath: \"/ssl\"\n          name: \"ssl\"\n        - mountPath: \"/config\"\n          name: \"config\"\n        resources:\n          limits:\n            cpu: 1000m\n            memory: 800Mi\n          requests:\n            cpu: 500m\n            memory: 600Mi\n        args:\n        - --configfile=/config/traefik.toml\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\n        securityContext:\n          capabilities:\n            drop:\n              - ALL\n            add:\n              - NET_BIND_SERVICE\n        ports:\n          - name: http\n            containerPort: 80\n            hostPort: 80\n          - name: https\n            containerPort: 443\n            hostPort: 443\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      # service port of the traefik ingress-controller\n      port: 80\n      # NODE_PORT_RANGE from the cluster hosts file defines the usable NodePort range\n      # pick a free port in the default 20000~40000 range to expose the ingress-controller externally\n      nodePort: 23456\n      name: http\n    - protocol: TCP\n      port: 443\n      nodePort: 23457\n      name: https\n    - protocol: TCP\n      # admin web UI port of traefik\n      port: 8080\n      name: admin\n  type: NodePort\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - pods\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n```\n\n2. Apply it\n```\n$ kubectl apply -f traefik-controller-tls.yaml \n\nconfigmap/traefik-conf created\ndeployment.apps/traefik-ingress-controller created\nservice/traefik-ingress-service created\nclusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created\nclusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created\nserviceaccount/traefik-ingress-controller created\n\n# delete the resources\n$ kubectl delete -f traefik-controller-tls.yaml \n```\n
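Before moving on it is worth confirming the controller actually came up. A small hedged sketch using standard kubectl commands (nothing here is specific to this repo):\n\n```\nkubectl -n kube-system rollout status deploy/traefik-ingress-controller\nkubectl -n kube-system get pod -l k8s-app=traefik-ingress-lb -o wide\n```\n\n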
# Test deployment and ingress\n```\n$ vim nginx-ingress-deploy.yaml\n---\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  namespace: kube-system\nspec:\n  replicas: 2\n  template:\n    metadata:\n      labels:\n        app: nginx-pod\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.15.5\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-service\n  namespace: kube-system\n  annotations:\n    traefik.ingress.kubernetes.io/load-balancer-method: drr  # dynamic weighted round-robin\nspec:\n  selector:\n    app: nginx-pod\n  ports:\n  - port: 80\n    targetPort: 80\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: nginx-ingress\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  tls:\n  - secretName: first-k8s\n  - secretName: second-k8s\n  rules:\n  - host: k8s.first.com\n    http:\n      paths:\n      - backend:\n          serviceName: nginx-service\n          servicePort: 80\n  - host: k8s.second.com\n    http:\n      paths:\n      - backend:\n          serviceName: nginx-service\n          servicePort: 80\n          \n$ kubectl apply -f nginx-ingress-deploy.yaml \n$ kubectl delete -f nginx-ingress-deploy.yaml \n```\n"
  },
  {
    "path": "components/ingress/6.ingress-https使用示例.md",
    "content": "# ingress-https测试示例\n\n# 一、证书文件\n\n## 1、TLS 认证\n\n在现在大部分场景下面我们都会使用 https 来访问我们的服务，这节课我们将使用一个自签名的证书，当然你有在一些正规机构购买的 CA 证书是最好的，这样任何人访问你的服务的时候都是受浏览器信任的证书。使用下面的 openssl 命令生成 CA 证书：\n```\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \"/CN=hello.test.com\"\n```\n\n现在我们有了证书，我们可以使用 kubectl 创建一个 secret 对象来存储上面的证书：(这个需手动执行创建好)\n\n```\nkubectl -n kube-system create secret tls traefik-cert --key=tls.key --cert=tls.crt\n```\n\n## 2、多证书创建\n\n```\nmkdir -p /ssl/{default,first,second}\ncd /ssl/default/\nopenssl req -x509 -nodes -days 165 -newkey rsa:2048 -keyout tls_default.key -out tls_default.crt -subj \"/CN=k8s.test.com\"\nkubectl -n kube-system create secret tls traefik-cert --key=tls_default.key --cert=tls_default.crt\n\ncd /ssl/first/\nopenssl req -x509 -nodes -days 265 -newkey rsa:2048 -keyout tls_first.key -out tls_first.crt -subj \"/CN=k8s.first.com\"\nkubectl -n kube-system create secret tls first-k8s --key=tls_first.key --cert=tls_first.crt\n\ncd /ssl/second/\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_second.key -out tls_second.crt -subj \"/CN=k8s.second.com\"\nkubectl -n kube-system create secret tls second-k8s --key=tls_second.key --cert=tls_second.crt\n\n#查看证书\nkubectl get secret traefik-cert first-k8s second-k8s -n kube-system\nkubectl describe secret traefik-cert first-k8s second-k8s -n kube-system\n```\n\n## 3、删除证书\n\n```\n$ kubectl delete secret traefik-cert first-k8s second-k8s -n kube-system\n\nsecret \"second-k8s\" deleted\nsecret \"traefik-cert\" deleted\nsecret \"first-k8s\" deleted\n```\n\n## 4、关键5个点\n```\n\n注意这5个资源的namespace: kube-system 需要一致\n\nsecret          ---secret 对象来存储ssl证书\n\nconfigmap       ---configmap 用来保存一个或多个key/value信息\n\nDeployment\n\nService\n\nIngress\n\n```\n\n# 二、证书配置\n\n## 1、创建configMap(cm)\n\n```\nmkdir -p /config/\ncd /config/\n\n$  vim traefik.toml\ndefaultEntryPoints = [\"http\", \"https\"]\n[entryPoints]\n  [entryPoints.http]\n    address = \":80\"\n  [entryPoints.https]\n    address = \":443\"\n    [entryPoints.https.tls]\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/ssl/default/tls_default.crt\"\n        KeyFile = \"/ssl/default/tls_default.key\"\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/ssl/first/tls_first.crt\"\n        KeyFile = \"/ssl/first/tls_first.key\"\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/ssl/second/tls_second.crt\"\n        KeyFile = \"/ssl/second/tls_second.key\"\n        \n$ kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system\n\n$ kubectl get configmap traefik-conf -n kube-system\n\n$ kubectl describe cm traefik-conf -n kube-system\n```\n## 2、删除configMap(cm)\n\n```\n$ kubectl delete cm traefik-conf -n kube-system\n```\n\n# 三、traefik-ingress-controller控制文件\n\n## 1、创建文件\n```\n$ cd /config/\n\n$ vim traefik-controller-tls.yaml\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: traefik-conf\n  namespace: kube-system\ndata:\n  traefik.toml: |\n    insecureSkipVerify = true\n    defaultEntryPoints = [\"http\", \"https\"]\n    [entryPoints]\n      [entryPoints.http]\n        address = \":80\"\n      [entryPoints.https]\n        address = \":443\"\n        [entryPoints.https.tls]\n          [[entryPoints.https.tls.certificates]]\n            CertFile = \"/ssl/default/tls.crt\"\n            KeyFile = \"/ssl/default/tls.key\"\n          [[entryPoints.https.tls.certificates]]\n            CertFile = \"/ssl/first/tls_first.crt\"\n            KeyFile = 
\"/ssl/first/tls_first.key\"\n          [[entryPoints.https.tls.certificates]]\n            CertFile = \"/ssl/second/tls_second.crt\"\n            KeyFile = \"/ssl/second/tls_second.key\"\n---\nkind: Deployment\napiVersion: apps/v1beta1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      k8s-app: traefik-ingress-lb\n  template:\n    metadata:\n      labels:\n        k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      volumes:\n      - name: ssl\n        secret:\n          secretName: traefik-cert\n      - name: config\n        configMap:\n          name: traefik-conf\n      #nodeSelector:\n      #  node-role.kubernetes.io/traefik: \"true\"\n      tolerations:\n      - operator: \"Exists\"\n      nodeSelector:\n        kubernetes.io/hostname: 10.198.1.156    #指定traefik-ingress-controller跑在这个node节点上面\n      containers:\n      - image: traefik:v1.7.12\n        imagePullPolicy: IfNotPresent\n        name: traefik-ingress-lb\n        volumeMounts:\n        - mountPath: \"/ssl\"\n          name: \"ssl\"\n        - mountPath: \"/config\"\n          name: \"config\"\n        resources:\n          limits:\n            cpu: 1000m\n            memory: 800Mi\n          requests:\n            cpu: 500m\n            memory: 600Mi\n        args:\n        - --configfile=/config/traefik.toml\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\n        securityContext:\n          capabilities:\n            drop:\n              - ALL\n            add:\n              - NET_BIND_SERVICE\n        ports:\n          - name: http\n            containerPort: 80\n            hostPort: 80\n          - name: https\n            containerPort: 443\n            hostPort: 443\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      # 该端口为 traefik ingress-controller的服务端口\n      port: 80\n      # 集群hosts文件中设置的 NODE_PORT_RANGE 作为 NodePort的可用范围\n      # 从默认20000~40000之间选一个可用端口，让ingress-controller暴露给外部的访问\n      nodePort: 23456\n      name: http\n    - protocol: TCP\n      port: 443\n      nodePort: 23457\n      name: https\n    - protocol: TCP\n      # 该端口为 traefik 的管理WEB界面\n      port: 8080\n      name: admin\n  type: NodePort\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - pods\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n```\n\n## 2、应用生效\n```\n$ kubectl apply -f traefik-controller-tls.yaml \n\nconfigmap/traefik-conf created\ndeployment.apps/traefik-ingress-controller 
# 4. Creating an https ingress from the command line\n```\n# create the demo application\n$ kubectl run test-hello --image=nginx:alpine --port=80 --expose -n kube-system\n\n# delete the demo application (kubectl run creates a deployment resource by default)\n$ kubectl delete deployment test-hello -n kube-system\n$ kubectl delete svc test-hello -n kube-system\n\n# hello-tls-ingress example\n$ cd /config/\n$ vim hello-tls.ing.yaml\n\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: hello-tls-ingress\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  rules:\n  - host: k8s.test.com\n    http:\n      paths:\n      - backend:\n          serviceName: test-hello\n          servicePort: 80\n  tls:\n  - secretName: traefik-cert\n  \n# create the https ingress\n$ kubectl apply -f /config/hello-tls.ing.yaml\n\n# note: the hello example needs the secret traefik-cert in the kube-system namespace (already created at the start, no need to create it again)\n$ kubectl -n kube-system create secret tls traefik-cert --key=tls_default.key --cert=tls_default.crt\n\n# delete the https ingress\n$ kubectl delete -f /config/hello-tls.ing.yaml\n```\n# Test access\n\nFind the node the traefik-controller pod runs on, bind that node's IP in your hosts file, then open the url:\n\nhttps://k8s.test.com:23457\n\n  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-02.png)\n\n# 5. Test deployment and ingress\n```\n$ vim nginx-ingress-deploy.yaml\n---\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  namespace: kube-system\nspec:\n  replicas: 2\n  template:\n    metadata:\n      labels:\n        app: nginx-pod\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.15.5\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-service\n  namespace: kube-system\n  annotations:\n    traefik.ingress.kubernetes.io/load-balancer-method: drr  # dynamic weighted round-robin\nspec:\n  selector:\n    app: nginx-pod\n  ports:\n  - port: 80\n    targetPort: 80\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: nginx-ingress\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  tls:\n  - secretName: first-k8s\n  - secretName: second-k8s\n  rules:\n  - host: k8s.first.com\n    http:\n      paths:\n      - backend:\n          serviceName: nginx-service\n          servicePort: 80\n  - host: k8s.second.com\n    http:\n      paths:\n      - backend:\n          serviceName: nginx-service\n          servicePort: 80\n          \n$ kubectl apply -f nginx-ingress-deploy.yaml \n$ kubectl delete -f nginx-ingress-deploy.yaml\n```\n\n# Access test\n\nhttps://k8s.first.com:23457/\n\n  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-03.png)\n\nhttps://k8s.second.com:23457/\n\n  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-04.png)\n\n\nReferences:\n\nhttps://xuchao918.github.io/2019/03/01/Kubernetes-traefik-ingress%E4%BD%BF%E7%94%A8/ Kubernetes traefik ingress usage\n\nhttp://docs.kubernetes.org.cn/558.html\n"
  },
  {
    "path": "components/ingress/README.md",
    "content": "参考资料：\n\nhttps://segmentfault.com/a/1190000019908991  k8s ingress原理及ingress-nginx部署测试\n\nhttps://www.cnblogs.com/tchua/p/11174386.html  Kubernetes集群Ingress高可用部署\n"
  },
  {
    "path": "components/ingress/nginx-ingress/README.md",
    "content": "\n"
  },
  {
    "path": "components/ingress/traefik-ingress/1.traefik反向代理Deamonset模式.md",
    "content": "# 一、Deamonset方式部署traefik-controller-ingress\n\nhttps://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-ds.yaml\n\n这里使用的DaemonSet，只是用traefik-ds.yaml ，traefik-rbac.yaml ， ui.yaml\n\n```bash\nkubectl delete -f traefik-ds.yaml\n\nrm -f ./traefik-ds.yaml\n\ncat >traefik-ds.yaml<<\\EOF\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\nkind: DaemonSet\napiVersion: apps/v1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  selector:\n    matchLabels:\n      k8s-app: traefik-ingress-lb\n  template:\n    metadata:\n      labels:\n        k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      #=======添加nodeSelector信息：只在master节点创建=======\n      tolerations:\n      - operator: \"Exists\"\n      nodeSelector:\n        kubernetes.io/role: master  #默认master是不允许被调度的，加上tolerations后允许被调度,然后这里使用自身机器master的地址,可以使用kubectl get nodes --show-labels来查看\n      #===================================================\n      containers:\n      - image: traefik:v1.7\n        name: traefik-ingress-lb\n        ports:\n        - name: http\n          containerPort: 80\n          hostPort: 80\n        - name: admin\n          containerPort: 8080\n          hostPort: 8080\n        securityContext:\n          capabilities:\n            drop:\n            - ALL\n            add:\n            - NET_BIND_SERVICE\n        args:\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      port: 80\n      name: web\n    - protocol: TCP\n      port: 8080\n      name: admin\nEOF\n\nkubectl apply -f traefik-ds.yaml\n```\n\n# 二、traefik-rbac配置\n\nhttps://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-rbac.yaml\n\n```\nkubectl delete -f traefik-rbac.yaml\n\nrm -f ./traefik-rbac.yaml\n\ncat >traefik-rbac.yaml<<\\EOF\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n    - extensions\n    resources:\n    - ingresses/status\n    verbs:\n    - update\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\nEOF\n\nkubectl apply -f traefik-rbac.yaml\n```\n\n# 三、traefik-ui使用traefik进行代理\n\nhttps://github.com/containous/traefik/blob/v1.7/examples/k8s/ui.yaml\n\n1、代理方式一\n\n```bash\nkubectl delete -f ui.yaml\n\nrm -f ./ui.yaml\n\ncat >ui.yaml<<\\EOF\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n  - name: web\n    port: 80\n    targetPort: 8080\n---\napiVersion: extensions/v1beta1\nkind: 
# 2. traefik-rbac configuration\n\nhttps://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-rbac.yaml\n\n```\nkubectl delete -f traefik-rbac.yaml\n\nrm -f ./traefik-rbac.yaml\n\ncat >traefik-rbac.yaml<<\\EOF\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n    - extensions\n    resources:\n    - ingresses/status\n    verbs:\n    - update\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\nEOF\n\nkubectl apply -f traefik-rbac.yaml\n```\n\n# 3. Proxying the traefik-ui through traefik itself\n\nhttps://github.com/containous/traefik/blob/v1.7/examples/k8s/ui.yaml\n\n1. Proxy option one\n\n```bash\nkubectl delete -f ui.yaml\n\nrm -f ./ui.yaml\n\ncat >ui.yaml<<\\EOF\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n  - name: web\n    port: 80\n    targetPort: 8080\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\nspec:\n  rules:\n  - host: traefik-ui.devops.com\n    http:\n      paths:\n      - path: /\n        backend:\n          serviceName: traefik-web-ui\n          servicePort: web\n---\nEOF\n\nkubectl apply -f ui.yaml\n```\n\n2. Proxy option two\n\n```\nkubectl delete -f ui.yaml\n\nrm -f ./ui.yaml\n\ncat >ui.yaml<<\\EOF\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      # service port of the traefik ingress-controller\n      port: 80\n      name: web\n    - protocol: TCP\n      # admin web UI port of traefik\n      port: 8080\n      name: admin\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  rules:\n  - host: traefik-ui.devops.com\n    http:\n      paths:\n      - backend:\n          serviceName: traefik-ingress-service\n          #servicePort: 8080\n          servicePort: admin  # matches the port name in the Service above\n---\nEOF\n\nkubectl apply -f ui.yaml\n```\n\n# 4. Access test \n\n`http://traefik-ui.devops.com`\n\n
# 5. Combined manifest\n```\nkubectl delete -f all-ds.yaml\n\nrm -f ./all-ds.yaml\n\ncat >all-ds.yaml<<\\EOF\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\nkind: DaemonSet\napiVersion: apps/v1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  selector:\n    matchLabels:\n      k8s-app: traefik-ingress-lb\n  template:\n    metadata:\n      labels:\n        k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      #======= nodeSelector: only create on the master node =======\n      tolerations:\n      - operator: \"Exists\"\n      nodeSelector:\n        kubernetes.io/role: master  # the master is unschedulable by default; the tolerations above allow it. Check the label with kubectl get nodes --show-labels\n      #===================================================\n      containers:\n      - image: traefik:v1.7\n        name: traefik-ingress-lb\n        ports:\n        - name: http\n          containerPort: 80\n          hostPort: 80\n        - name: admin\n          containerPort: 8080\n          hostPort: 8080\n        securityContext:\n          capabilities:\n            drop:\n            - ALL\n            add:\n            - NET_BIND_SERVICE\n        args:\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      port: 80\n      name: web\n    - protocol: TCP\n      port: 8080\n      name: admin\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n    - extensions\n    resources:\n    - ingresses/status\n    verbs:\n    - update\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n  - name: web\n    port: 80\n    targetPort: 8080\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\nspec:\n  rules:\n  - host: traefik-ui.devops.com\n    http:\n      paths:\n      - path: /\n        backend:\n          serviceName: traefik-web-ui\n          servicePort: web\nEOF\n\nkubectl apply -f all-ds.yaml\n```\n\nReferences:\n\nhttps://blog.csdn.net/oyym_mv/article/details/86986510  using traefik as a reverse proxy in kubernetes (DaemonSet mode)\n\nhttps://www.cnblogs.com/twodoge/p/11663006.html  gotcha: with the newer apps/v1 API, selector becomes a required field in the yaml\n"
  },
  {
    "path": "components/ingress/traefik-ingress/2.traefik反向代理Deamonset模式TLS.md",
    "content": "# Ingress-Https测试示例\n\n# 一、证书文件\n\n## 1、TLS 认证\n\n&#8195;在现在大部分场景下面我们都会使用 https 来访问我们的服务，这节课我们将使用一个自签名的证书，当然你有在一些正规机构购买的 CA 证书是最好的，这样任何人访问你的服务的时候都是受浏览器信任的证书。使用下面的 openssl 命令生成 CA 证书：\n```bash\nrm -rf /etc/certs/ssl/\n\nmkdir -p /etc/certs/ssl/\ncd /etc/certs/ssl/\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \"/CN=hello.test.com\"\n```\n\n&#8195;现在我们有了证书，我们可以使用 kubectl 创建一个 secret 对象来存储上面的证书：(这个需手动执行创建好)\n\n```bash\nkubectl -n kube-system create secret tls traefik-cert --key=tls.key --cert=tls.crt\n```\n\n## 2、多证书创建\n\n```bash\nkubectl delete secret traefik-cert first-k8s second-k8s -n kube-system\n\nrm -rf /etc/certs/ssl/\n\nmkdir -p /etc/certs/ssl/{default,first,second}\ncd /etc/certs/ssl/default/\nopenssl req -x509 -nodes -days 165 -newkey rsa:2048 -keyout tls_default.key -out tls_default.crt -subj \"/CN=*.devops.com\"\nkubectl -n kube-system create secret tls traefik-cert --key=tls_default.key --cert=tls_default.crt\n\ncd /etc/certs/ssl/first/\nopenssl req -x509 -nodes -days 265 -newkey rsa:2048 -keyout tls_first.key -out tls_first.crt -subj \"/CN=k8s.first.com\"\nkubectl -n kube-system create secret tls first-k8s --key=tls_first.key --cert=tls_first.crt\n\ncd /etc/certs/ssl/second/\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_second.key -out tls_second.crt -subj \"/CN=k8s.second.com\"\nkubectl -n kube-system create secret tls second-k8s --key=tls_second.key --cert=tls_second.crt\n\n#查看证书\nkubectl get secret traefik-cert first-k8s second-k8s -n kube-system\nkubectl describe secret traefik-cert first-k8s second-k8s -n kube-system\n```\n## 3、关键5个点\n```bash\n注意这5个资源的namespace: kube-system 需要一致\n\nsecret          ---secret 对象来存储ssl证书\n\nconfigmap       ---configmap 用来保存一个或多个key/value信息\n\nDeployment  or  DaemonSet\n\nService\n\nIngress\n```\n\n# 二、证书配置,创建configMap(cm)\n\n1、http和https并存\n```bash\nkubectl delete cm traefik-conf -n kube-system\n\nrm -rf /etc/certs/config/\n\nmkdir -p /etc/certs/config/\ncd /etc/certs/config/\n\ncat >traefik.toml<<\\EOF\n# 设置insecureSkipVerify = true，可以配置backend为443(比如dashboard)的ingress规则\ninsecureSkipVerify = true\ndefaultEntryPoints = [\"http\", \"https\"]\n[entryPoints]\n  [entryPoints.http]\n    address = \":80\"\n  [entryPoints.https]\n    address = \":443\"\n    [entryPoints.https.tls]\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/etc/certs/ssl/default/tls_default.crt\"\n        KeyFile = \"/etc/certs/ssl/default/tls_default.key\"\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/etc/certs/ssl/first/tls_first.crt\"\n        KeyFile = \"/etc/certs/ssl/first/tls_first.key\"\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/etc/certs/ssl/second/tls_second.crt\"\n        KeyFile = \"/etc/certs/ssl/second/tls_second.key\"\nEOF\n        \nkubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system\n\nkubectl get configmap traefik-conf -n kube-system\n\nkubectl describe cm traefik-conf -n kube-system\n```\n2、http跳转到https\n\n```bash\nkubectl delete cm traefik-conf -n kube-system\n\nrm -rf /etc/certs/config/\n\nmkdir -p /etc/certs/config/\ncd /etc/certs/config/\n\ncat >traefik.toml<<\\EOF\n# 指定了 \"traefik\" 在访问 \"https\" 后端时可以忽略TLS证书验证错误，从而使得 \"https\" 的后端，可以像http后端一样直接通过 \"traefik\" 透出，如kubernetes dashboard\ninsecureSkipVerify = true\n# \ndefaultEntryPoints = [\"http\", \"https\"]\n[entryPoints]\n  [entryPoints.http]\n  address = \":80\"\n    [entryPoints.http.redirect]\n    entryPoint = \"https\"\n  
2. http redirecting to https\n\n```bash\nkubectl delete cm traefik-conf -n kube-system\n\nrm -rf /etc/certs/config/\n\nmkdir -p /etc/certs/config/\ncd /etc/certs/config/\n\ncat >traefik.toml<<\\EOF\n# insecureSkipVerify = true lets traefik ignore TLS verification errors when talking to https backends, so an https backend (such as the kubernetes dashboard) can be exposed through traefik just like an http one\ninsecureSkipVerify = true\ndefaultEntryPoints = [\"http\", \"https\"]\n[entryPoints]\n  [entryPoints.http]\n  address = \":80\"\n    [entryPoints.http.redirect]\n    entryPoint = \"https\"\n  [entryPoints.https]\n  address = \":443\"\n    [entryPoints.https.tls]\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/etc/certs/ssl/default/tls_default.crt\"\n        KeyFile = \"/etc/certs/ssl/default/tls_default.key\"\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/etc/certs/ssl/first/tls_first.crt\"\n        KeyFile = \"/etc/certs/ssl/first/tls_first.key\"\n      [[entryPoints.https.tls.certificates]]\n        CertFile = \"/etc/certs/ssl/second/tls_second.crt\"\n        KeyFile = \"/etc/certs/ssl/second/tls_second.key\"\nEOF\n        \nkubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system\n\nkubectl get configmap traefik-conf -n kube-system\n\nkubectl describe cm traefik-conf -n kube-system\n```\n\n
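Once the controller below is running with this redirect configuration, you can confirm that the http entry point answers with a redirect. A sketch only; `<node-ip>` stands for the node traefik is pinned to and is not taken from the manifests:\n\n```bash\n# expect an HTTP 301/302 pointing at the https:// URL\ncurl -I -H \"Host: traefik-ui.devops.com\" http://<node-ip>/\n```\n\n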
# 3. traefik-ingress-controller manifest\n\n## 1. Create the file\n```\nkubectl delete -f traefik-controller-tls.yaml \n\nrm -f ./traefik-controller-tls.yaml\n\ncat >traefik-controller-tls.yaml<<\\EOF\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\nkind: DaemonSet\napiVersion: extensions/v1beta1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  template:\n    metadata:\n      labels:\n        k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      hostNetwork: true  # use the host network so the service port (80) is exposed directly on the node; make sure port 80 on the node is not already taken\n      dnsPolicy: ClusterFirstWithHostNet  # with hostNetwork the container would use the host's DNS and fail to resolve in-cluster services; this keeps it on the cluster DNS\n      volumes:\n      - name: ssl\n        secret:\n          secretName: traefik-cert\n      - name: config\n        configMap:\n          name: traefik-conf\n      #======= nodeSelector: only create on the master node =======\n      tolerations:\n      - key: node-role.kubernetes.io/master\n        operator: \"Equal\"\n        value: \"\"\n        effect: NoSchedule\n      nodeSelector:\n        node-role.kubernetes.io/master: \"\"\n      #===================================================\n      containers:\n      - image: traefik:v1.7.12\n        name: traefik-ingress-lb\n        volumeMounts:\n        - mountPath: \"/etc/certs/ssl\"\n          name: \"ssl\"\n        - mountPath: \"/etc/certs/config\"\n          name: \"config\"\n        resources:\n          limits:\n            cpu: 1000m\n            memory: 800Mi\n          requests:\n            cpu: 500m\n            memory: 600Mi\n        ports:\n          - name: http\n            containerPort: 80\n            hostPort: 80\n          - name: https\n            containerPort: 443\n            hostPort: 443\n          - name: admin\n            containerPort: 8080\n            hostPort: 8080\n        securityContext:\n          capabilities:\n            drop:\n             - ALL\n            add:\n             - NET_BIND_SERVICE\n        args:\n        - --configfile=/etc/certs/config/traefik.toml\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n    - extensions\n    resources:\n    - ingresses/status\n    verbs:\n    - update\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      port: 80\n      name: http\n    - protocol: TCP\n      port: 443\n      name: https\n    - protocol: TCP\n      port: 8080\n      name: admin\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\nspec:\n  tls:\n  - secretName: traefik-cert\n  rules:\n  - host: traefik-ui.devops.com\n    http:\n      paths:\n      - path: /\n        backend:\n          serviceName: traefik-ingress-service\n          servicePort: admin\n---\nEOF\n\nkubectl apply -f traefik-controller-tls.yaml\n```\n\n## 2. Delete the resources\n```\nkubectl delete -f traefik-controller-tls.yaml \n```\n# 4. Creating an https ingress from the command line\n```\n# create the demo application\n$ kubectl run test-hello --image=nginx:alpine --port=80 --expose -n kube-system\n\n# delete the demo application (kubectl run creates a deployment resource by default)\n$ kubectl delete deployment test-hello -n kube-system\n$ kubectl delete svc test-hello -n kube-system\n\n# hello-tls-ingress example\n$ cd /config/\n$ vim hello-tls.ing.yaml\n\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: hello-tls-ingress\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  rules:\n  - host: k8s.test.com\n    http:\n      paths:\n      - backend:\n          serviceName: test-hello\n          servicePort: 80\n  tls:\n  - secretName: traefik-cert\n  \n# create the https ingress\n$ kubectl apply -f /config/hello-tls.ing.yaml\n\n# note: the hello example needs the secret traefik-cert in the kube-system namespace (already created at the start, no need to create it again)\n$ kubectl -n kube-system create secret tls traefik-cert --key=tls_default.key --cert=tls_default.crt\n\n# delete the https ingress\n$ kubectl delete -f /config/hello-tls.ing.yaml\n```\n# Test access\n\nFind the node the traefik-controller pod runs on, bind that node's IP in your hosts file, then open the url:\n\nhttps://k8s.test.com:23457\n\n  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-02.png)\n\n# 5. Test deployment and ingress\n```\n$ vim nginx-ingress-deploy.yaml\n---\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  namespace: kube-system\nspec:\n  replicas: 2\n  template:\n    metadata:\n      labels:\n        app: nginx-pod\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.15.5\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-service\n  namespace: kube-system\n  annotations:\n    traefik.ingress.kubernetes.io/load-balancer-method: drr  # dynamic weighted round-robin\nspec:\n  selector:\n    app: nginx-pod\n  ports:\n  - port: 80\n    targetPort: 80\n---\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: nginx-ingress\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  tls:\n  - secretName: first-k8s\n  - secretName: second-k8s\n  rules:\n  - host: k8s.first.com\n    http:\n      paths:\n      - backend:\n          serviceName: nginx-service\n          servicePort: 80\n  - host: k8s.second.com\n    
http:\n      paths:\n      - backend:\n          serviceName: nginx-service\n          servicePort: 80\n          \n$ kubectl apply -f nginx-ingress-deploy.yaml \n$ kubectl delete -f nginx-ingress-deploy.yaml\n```\n\n#访问测试\n\nhttps://k8s.first.com:23457/\n\n  ![ingress测试](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-03.png)\n\nhttps://k8s.second.com:23457/\n\n  ![ingress测试](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-04.png)\n\n\n参考资料：\n\nhttps://xuchao918.github.io/2019/03/01/Kubernetes-traefik-ingress%E4%BD%BF%E7%94%A8/ Kubernetes traefik ingress使用\n\nhttp://docs.kubernetes.org.cn/558.html\n"
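补充：排查多证书（SNI）是否生效时，可以不改 hosts，直接用 curl 的 --resolve 参数或 openssl 验证。下面是一个示例，其中 192.168.56.11 需替换为实际运行 traefik 的节点 IP，443 为上面 DaemonSet 中配置的 hostPort：

```
# 验证 k8s.first.com 的响应头（-k 用于忽略自签名证书告警）
curl -k -I --resolve k8s.first.com:443:192.168.56.11 https://k8s.first.com/

# 查看 SNI 实际返回的证书主题，确认匹配到 first 对应的证书
echo | openssl s_client -connect 192.168.56.11:443 -servername k8s.first.com 2>/dev/null | openssl x509 -noout -subject
```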
  },
  {
    "path": "components/ingress/traefik-ingress/README.md",
    "content": "\n"
  },
  {
    "path": "components/ingress/常用操作.md",
    "content": "```\n[root@master ingress]# kubectl get ingress -A\nNAMESPACE     NAME                   HOSTS                 ADDRESS   PORTS   AGE\ndefault       nginx-ingress          k8s.nginx.com                   80      40m\nkube-system   kubernetes-dashboard   dashboard.test.com              80      2d21h\nkube-system   traefik-web-ui         traefik-ui.test.com             80      2d21h\n\n\n\n[root@master ingress]# kubectl delete ingress hello-tls-ingress\ningress.extensions \"hello-tls-ingress\" deleted\n\n```\n\n# 1、rbac.yaml\n\n首先，为安全起见我们这里使用 RBAC 安全认证方式：(rbac.yaml)\n\n```\nmkdir -p /data/components/ingress\n\ncat > /data/components/ingress/rbac.yaml << \\EOF\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - services\n      - endpoints\n      - secrets\n    verbs:\n      - get\n      - list\n      - watch\n  - apiGroups:\n      - extensions\n    resources:\n      - ingresses\n    verbs:\n      - get\n      - list\n      - watch\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: traefik-ingress-controller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: traefik-ingress-controller\nsubjects:\n- kind: ServiceAccount\n  name: traefik-ingress-controller\n  namespace: kube-system\nEOF\n\nkubectl create -f /data/components/ingress/rbac.yaml\n```\n\n# 2、traefik.yaml\n\n然后使用 Deployment 来管理 Pod，直接使用官方的 traefik 镜像部署即可（traefik.yaml）\n```\ncat > /data/components/ingress/traefik.yaml << \\EOF\n---\nkind: Deployment\napiVersion: extensions/v1beta1\nmetadata:\n  name: traefik-ingress-controller\n  namespace: kube-system\n  labels:\n    k8s-app: traefik-ingress-lb\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      k8s-app: traefik-ingress-lb\n  template:\n    metadata:\n      labels:\n        k8s-app: traefik-ingress-lb\n        name: traefik-ingress-lb\n    spec:\n      serviceAccountName: traefik-ingress-controller\n      terminationGracePeriodSeconds: 60\n      tolerations:\n      - operator: \"Exists\"\n      nodeSelector:\n        kubernetes.io/hostname: linux-node1.example.com  #默认master是不允许被调度的，加上tolerations后允许被调度\n      containers:\n      - image: traefik\n        name: traefik-ingress-lb\n        ports:\n        - name: http\n          containerPort: 80\n        - name: admin\n          containerPort: 8080\n        args:\n        - --api\n        - --kubernetes\n        - --logLevel=INFO\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: traefik-ingress-service\n  namespace: kube-system\nspec:\n  selector:\n    k8s-app: traefik-ingress-lb\n  ports:\n    - protocol: TCP\n      port: 80\n      name: web\n    - protocol: TCP\n      port: 8080\n      name: admin\n  type: NodePort\nEOF\n\nkubectl create -f /data/components/ingress/traefik.yaml\n\nkubectl apply -f /data/components/ingress/traefik.yaml\n```\n```\n要注意上面 yaml 文件:\ntolerations:\n- operator: \"Exists\"\nnodeSelector:\n  kubernetes.io/hostname: master\n  \n由于我们这里的特殊性，只有 master 节点有外网访问权限，所以我们使用nodeSelector标签将traefik的固定调度到master这个节点上，那么上面的tolerations是干什么的呢？这个是因为我们集群使用的 kubeadm 安装的，master 节点默认是不能被普通应用调度的，要被调度的话就需要添加这里的 tolerations 属性，当然如果你的集群和我们的不太一样，直接去掉这里的调度策略就行。\n\nnodeSelector 和 tolerations 都属于 Pod 的调度策略，在后面的课程中会为大家讲解。\n\n```\n# 3、traefik-ui\n\ntraefik 还提供了一个 web ui 工具，就是上面的 8080 
端口对应的服务，为了能够访问到该服务，我们这里将服务设置成了 NodePort 类型\n\n```\nroot># kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb -o wide\nNAME                                          READY   STATUS    RESTARTS   AGE   IP           NODE                      NOMINATED NODE   READINESS GATES\ntraefik-ingress-controller-5b58d5c998-6dn97   1/1     Running   0          88s   10.244.0.2   linux-node1.example.com   <none>           <none>\n\nroot># kubectl get svc -n kube-system|grep traefik-ingress-service\ntraefik-ingress-service   NodePort    10.102.214.49   <none>        80:32472/TCP,8080:32482/TCP   44s\n\n现在在浏览器中输入 master_node_ip:32482 就可以访问到 traefik 的 dashboard 了\n```\nhttp://192.168.56.11:32482/dashboard/\n\n# 4、Ingress 对象\n\n现在我们是通过 NodePort 来访问 traefik 的 Dashboard 的，那怎样通过 ingress 来访问呢？首先，需要创建一个 ingress 对象：(ingress.yaml)\n\n```\ncat > /data/components/ingress/ingress.yaml <<\\EOF\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: traefik-web-ui\n  namespace: kube-system\n  annotations:\n    kubernetes.io/ingress.class: traefik\nspec:\n  rules:\n  - host: traefik.k8s.com\n    http:\n      paths:\n      - backend:\n          serviceName: traefik-ingress-service\n          #servicePort: 8080\n          servicePort: admin  #这里建议使用servicePort: admin,这样就避免端口的调整\nEOF\n\nkubectl apply -f /data/components/ingress/ingress.yaml\n\n要注意上面的 ingress 对象的规则，特别是 rules 区域，我们这里是要为 traefik 的 dashboard 建立一个 ingress 对象，所以这里的 serviceName 对应的是上面我们创建的 traefik-ingress-service，端口也要注意对应 8080 端口，为了避免端口更改，这里的 servicePort 的值也可以替换成上面定义的 port 的名字：admin\n```\n创建完成后，我们应该怎么来测试呢？\n\n```\n第一步，在本地的/etc/hosts里面添加上 traefik.k8s.com 与 master 节点外网 IP 的映射关系\n\n第二步，在浏览器中访问：http://traefik.k8s.com 我们会发现并没有得到我们期望的 dashboard 界面，这是因为我们上面部署 traefik 的时候使用的是 NodePort 这种 Service 对象，所以我们只能通过上面的 32482 端口访问到我们的目标对象：http://traefik.k8s.com:32482\n\n加上端口后我们发现可以访问到 dashboard 了，而且在 dashboard 当中多了一条记录，正是上面我们创建的 ingress 对象的数据，我们还可以切换到 HEALTH 界面中，查看当前 traefik 代理的服务的整体健康状态\n\n第三步，上面我们通过自定义域名加上端口访问到了我们的服务，但是我们平时访问别人的服务都是直接用域名的，http 或者 https，几乎很少有在域名后面加上端口访问的，因为太麻烦、端口也记不住。要解决这个问题，只需要把上面 traefik 核心应用的端口映射到 master 节点的 80 端口即可，因为 http 默认访问的就是 80 端口；但我们在 Service 里面添加的是一个 NodePort 类型的服务，没办法映射 80 端口，这时可以直接在 Pod 中指定一个 hostPort，更改上面的 traefik.yaml 文件中的容器端口：\n\ncontainers:\n- image: traefik\n  name: traefik-ingress-lb\n  ports:\n  - name: http\n    containerPort: 80\n    hostPort: 80      #新增这行\n  - name: admin\n    containerPort: 8080\n\n添加 hostPort: 80 之后更新应用\nkubectl apply -f traefik.yaml\n\n更新完成后，这个时候我们在浏览器中直接使用域名方式测试下\nhttp://traefik.k8s.com\n\n第四步，正常来说，如果我们有自己的域名，可以为域名添加一条 DNS 记录，解析到 master 的外网 IP 上面，这样任何人都可以通过域名来访问我暴露的服务了。\n\n如果你有多个边缘节点的话，可以在每个边缘节点上部署一个 ingress-controller 服务，然后在边缘节点前面挂一个负载均衡器，比如 nginx，将所有的边缘节点均作为这个负载均衡器的后端，这样就可以实现 ingress-controller 的高可用和负载均衡了。\n```\n\n# 5、ingress tls\n\n上节课给大家展示了 traefik 的安装使用以及简单的 ingress 的配置方法，这节课我们来学习一下 ingress tls 以及 path 路径在 ingress 对象中的使用方法。\n\n1、TLS 认证\n\n现在大部分场景下我们都会使用 https 来访问我们的服务，这节课我们将使用一个自签名的证书，当然如果你有在正规机构购买的 CA 证书是最好的，这样任何人访问你的服务时证书都是受浏览器信任的。使用下面的 openssl 命令生成自签名证书：\n```\nopenssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt\n```\n2、创建 secret 对象\n\n现在我们有了证书，我们可以使用 kubectl 创建一个 secret 对象来存储上面的证书：\n```\nkubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system\n```\n\n3、配置 Traefik\n\n前面我们使用的是 Traefik 的默认配置，现在我们来配置 Traefik，让其支持 https：\n\n"
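下面是一份让 Traefik 同时监听 80/443、并把 http 重定向到 https 的 traefik.toml 配置示意（Traefik 1.x 格式，证书路径按实际挂载位置调整，做法与本仓库 traefik-ingress 一节中的配置一致）：

```
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/etc/certs/ssl/tls.crt"
      keyFile = "/etc/certs/ssl/tls.key"
```

配置文件通过 ConfigMap 挂载进容器，再用 --configfile 参数加载：

```
kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system
# 容器 args 中增加：--configfile=/etc/certs/config/traefik.toml
```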
  },
  {
    "path": "components/initContainers/README.md",
    "content": "参考资料：\n\nhttps://www.cnblogs.com/yanh0606/p/11395920.html    Kubernetes的初始化容器initContainers\n\nhttps://www.jianshu.com/p/e57c3e17ce8c  理解 Init 容器\n"
  },
  {
    "path": "components/job/README.md",
    "content": "参考资料：\n\nhttps://www.jianshu.com/p/bd6cd1b4e076  Kubernetes对象之Job\n\n\nhttps://www.cnblogs.com/lvcisco/p/9670100.html   k8s Job、Cronjob 的使用 \n"
  },
  {
    "path": "components/k8s-monitor/README.md",
    "content": "```\n# 1、持久化监控数据\ncat > prometheus-class.yaml <<-EOF\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: fast\nprovisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'\nparameters:\n  archiveOnDelete: \"true\"\nEOF\n\n#部署class.yaml\nkubectl apply -f prometheus-class.yaml\n\n#查看创建的storageclass\nkubectl get sc\n\n#2、修改 Prometheus 持久化\nprometheus是一种 StatefulSet 有状态集的部署模式，所以直接将 StorageClass 配置到里面，在下面的yaml中最下面添加持久化配置\n#cat prometheus/prometheus-prometheus.yaml\napiVersion: monitoring.coreos.com/v1\nkind: Prometheus\nmetadata:\n  labels:\n    prometheus: k8s\n  name: k8s\n  namespace: monitoring\nspec:\n  alerting:\n    alertmanagers:\n    - name: alertmanager-main\n      namespace: monitoring\n      port: web\n  baseImage: quay.io/prometheus/prometheus\n  nodeSelector:\n    kubernetes.io/os: linux\n  podMonitorSelector: {}\n  replicas: 2\n  resources:\n    requests:\n      memory: 400Mi\n  ruleSelector:\n    matchLabels:\n      prometheus: k8s\n      role: alert-rules\n  securityContext:\n    fsGroup: 2000\n    runAsNonRoot: true\n    runAsUser: 1000\n  serviceAccountName: prometheus-k8s\n  serviceMonitorNamespaceSelector: {}\n  serviceMonitorSelector: {}\n  version: v2.11.0\n  storage:                     #----添加持久化配置，指定StorageClass为上面创建的fast\n    volumeClaimTemplate:\n      spec:\n        storageClassName: fast #---指定为fast\n        resources:\n          requests:\n            storage: 300Gi\n            \nkubectl apply -f prometheus/prometheus-prometheus.yaml\n\n#3、修改 Grafana 持久化配置\n\n由于 Grafana 是部署模式为 Deployment，所以我们提前为其创建一个 grafana-pvc.yaml 文件，加入下面 PVC 配置。\n#vim grafana-pvc.yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: grafana\n  namespace: monitoring  #---指定namespace为monitoring\nspec:\n  storageClassName: fast #---指定StorageClass为上面创建的fast\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 200Gi\n\nkubectl apply -f grafana-pvc.yaml\n\n#vim grafana/grafana-deployment.yaml\n......\n      volumes:\n      - name: grafana-storage       #-------新增持久化配置\n        persistentVolumeClaim:\n          claimName: grafana        #-------设置为创建的PVC名称\n      #- emptyDir: {}               #-------注释掉旧的配置\n      #  name: grafana-storage\n      - name: grafana-datasources\n        secret:\n          secretName: grafana-datasources\n      - configMap:\n          name: grafana-dashboards\n        name: grafana-dashboards\n......\n\nkubectl apply -f grafana/grafana-deployment.yaml\n```\n参考资料：\n\nhttps://www.cnblogs.com/skyflask/articles/11410063.html  kubernetes监控方案--cAdvisor+Heapster+InfluxDB+Grafana\n\nhttps://www.cnblogs.com/skyflask/p/11480988.html  kubernetes监控终极方案-kube-promethues\n\nhttp://www.mydlq.club/article/10/#wow1  Kube-promethues监控k8s集群\n\nhttps://jicki.me/docker/kubernetes/2019/07/22/kube-prometheus/   Coreos kube-prometheus 监控\n"
  },
  {
    "path": "components/kube-proxy/README.md",
    "content": "# Kube-Proxy简述\n\n```\n运行在每个节点上，监听 API Server 中服务对象的变化，再通过管理 IPtables 来实现网络的转发\nKube-Proxy 目前支持三种模式：\n\nUserSpace\n    k8s v1.2 后就已经淘汰\n\nIPtables\n    目前默认方式\n\nIPVS\n    需要安装ipvsadm、ipset 工具包和加载 ip_vs 内核模块\n\n```\n参考资料：\n\nhttps://ywnz.com/linuxyffq/2530.html  解析从外部访问Kubernetes集群中应用的几种方法  \n\nhttps://www.jianshu.com/p/b2d13cec7091  浅谈 k8s service&kube-proxy  \n\nhttps://www.codercto.com/a/90806.html  探究K8S Service内部iptables路由规则\n\nhttps://blog.51cto.com/goome/2369150  k8s实践7:ipvs结合iptables使用过程分析\n\nhttps://blog.csdn.net/xinghun_4/article/details/50492041  kubernetes中port、target port、node port的对比分析，以及kube-proxy代理\n"
  },
  {
    "path": "components/nfs/README.md",
    "content": "\n"
  },
  {
    "path": "components/pressure/README.md",
    "content": "# 一、生产大规模集群，网络组件选择\n\n如果用calico-RR反射器这种模式，保证性能的情况下大概能支撑好多个节点？\n\nRR反射器还分为两种 可以由calico的节点服务承载 也可以是直接的物理路由器做RR\n\n超大规模Calico如果全以BGP来跑没什么问题 只是要做好网络地址规划 即便是不同集群容器地址也不能重叠\n\n# 二、flanel网络组件压测\n\n```\nflannel受限于cpu压力\n```\n  ![k8s网络组件flannel压测](https://github.com/Lancger/opsfull/blob/master/images/pressure_flannel_01.png)\n\n# 三、calico网络组件压测\n\n```\ncalico则轻轻松松与宿主机性能相差无几\n\n如果单单一个集群 节点数超级多 如果不做BGP路由聚合 物理路由器或三层交换机会扛不住的\n```\n  ![k8s网络组件calico压测](https://github.com/Lancger/opsfull/blob/master/images/pressure_calico_01.png)\n\n# 四、calico网络和宿主机压测\n\n  ![k8s网络组件压测](https://github.com/Lancger/opsfull/blob/master/images/pressure_physical_01.png)\n"
  },
  {
    "path": "components/pressure/calico bgp网络需要物理路由和交换机支持吗.md",
    "content": "![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_01.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_02.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_03.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_04.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_05.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_06.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_07.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_08.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_09.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_10.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_11.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_12.png)\n"
  },
  {
    "path": "components/pressure/k8s集群更换网段方案.md",
    "content": "```\n1、服务器IP更换网段  有什么解决方案吗？不重新搭建集群的话？\n\n方案一：\n\n改监听地址，重做集群证书\n\n不然还真不好搞的\n\n方案二：\n\n如果etcd一开始是静态的 那就不好玩了\n\n得一开始就是基于dns discovery方式\n\n简明扼要的说\n\n就是但凡涉及IP地址的地方\n\n全部用fqdn\n\n无论是证书还是配置文件\n\n这四句话核心就够了\n\netcd官方本来就有正式文档讲dns discovery部署\n\n只是k8s部分，官方部署没有提\n\n```\n\n![](https://github.com/Lancger/opsfull/blob/master/images/change_ip_01.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/change_ip_02.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/change_ip_05.png)\n\n![](https://github.com/Lancger/opsfull/blob/master/images/change_ip_06.png)\n\n\n来自：  广大群友讨论集锦\n\nhttps://github.com/etcd-io/etcd/blob/a4018f25c91fff8f4f15cd2cee9f026650c7e688/Documentation/clustering.md#dns-discovery  \n"
  },
  {
    "path": "docs/Envoy的架构与基本术语.md",
    "content": "参考文档：\n\nhttps://jimmysong.io/kubernetes-handbook/usecases/envoy-terminology.html  Envoy 的架构与基本术语\n"
  },
  {
    "path": "docs/Kubernetes学习笔记.md",
    "content": "参考文档：\n\nhttps://blog.gmem.cc/kubernetes-study-note  Kubernetes学习笔记\n"
  },
  {
    "path": "docs/Kubernetes架构介绍.md",
    "content": "# Kubernetes架构介绍\n\n## Kubernetes架构\n\n![](https://github.com/Lancger/opsfull/blob/master/images/kubernetes%E6%9E%B6%E6%9E%84.jpg)\n\n## k8s架构图\n\n![](https://github.com/Lancger/opsfull/blob/master/images/k8s%E6%9E%B6%E6%9E%84%E5%9B%BE.jpg)\n\n## 一、K8S Master节点\n### API Server\napiserver提供集群管理的REST API接口，包括认证授权、数据校验以 及集群状态变更等\n只有API Server才直接操作etcd\n其他模块通过API Server查询或修改数据\n提供其他模块之间的数据交互和通信的枢纽\n\n### Scheduler\nscheduler负责分配调度Pod到集群内的node节点\n监听kube-apiserver，查询还未分配Node的Pod\n根据调度策略为这些Pod分配节点\n\n### Controller Manager\ncontroller-manager由一系列的控制器组成，它通过apiserver监控整个 集群的状态，并确保集群处于预期的工作状态\n\n### ETCD\n所有持久化的状态信息存储在ETCD中\n\n## 二、K8S Node节点\n### Kubelet\n1. 管理Pods以及容器、镜像、Volume等，实现对集群 对节点的管理。\n### Kube-proxy\n2. 提供网络代理以及负载均衡，实现与Service通信。\n### Docker Engine\n3. 负责节点的容器的管理工作。\n\n## 三、资源对象介绍\n\n### 3.1 Replication Controller，RC\n\n1. RC是K8s集群中最早的保证Pod高可用的API对象。通过监控运行中\n的Pod来保证集群中运行指定数目的Pod副本。\n\n2. 指定的数目可以是多个也可以是1个;少于指定数目，RC就会启动运\n行新的Pod副本;多于指定数目，RC就会杀死多余的Pod副本。\n\n3. 即使在指定数目为1的情况下，通过RC运行Pod也比直接运行Pod更 明智，因为RC也可以发挥它高可用的能力，保证永远有1个Pod在运 行。\n\n### 3.2 Replica Set，RS\n\n1. RS是新一代RC，提供同样的高可用能力，区别主要在于RS后来居上， 能支持更多中的匹配模式。副本集对象一般不单独使用，而是作为部 署的理想状态参数使用。\n\n2. 是K8S 1.2中出现的概念，是RC的升级。一般和Deployment共同使用。\n\n### 3.3 Deployment\n1. Deployment表示用户对K8s集群的一次更新操作。Deployment是 一个比RS应用模式更广的API对象，\n\n2. 可以是创建一个新的服务，更新一个新的服务，也可以是滚动升 级一个服务。滚动升级一个服务，实际是创建一个新的RS，然后 逐渐将新RS中副本数增加到理想状态，将旧RS中的副本数减小 到0的复合操作;\n\n3. 这样一个复合操作用一个RS是不太好描述的，所以用一个更通用 的Deployment来描述。\n\n### 3.4 Service\n1. RC、RS和Deployment只是保证了支撑服务的POD的数量，但是没有解 决如何访问这些服务的问题。一个Pod只是一个运行服务的实例，随时可 能在一个节点上停止，在另一个节点以一个新的IP启动一个新的Pod，因 此不能以确定的IP和端口号提供服务。\n\n2. 要稳定地提供服务需要服务发现和负载均衡能力。服务发现完成的工作， 是针对客户端访问的服务，找到对应的的后端服务实例。\n\n3. 在K8集群中，客户端需要访问的服务就是Service对象。每个Service会对 应一个集群内部有效的虚拟IP，集群内部通过虚拟IP访问一个服务。\n\n## 四、K8S的IP地址\n1. Node IP: 节点设备的IP，如物理机，虚拟机等容器宿主的实际IP。 \n\n2. Pod IP: Pod 的IP地址，是根据docker0网格IP段进行分配的。 \n\n3. Cluster IP: Service的IP，是一个虚拟IP，仅作用于service对象，由k8s\n管理和分配，需要结合service port才能使用，单独的IP没有通信功能，\n集群外访问需要一些修改。\n\n4. 在K8S集群内部，nodeip podip clusterip的通信机制是由k8s制定的路由\n规则，不是IP路由。\n"
  },
  {
    "path": "docs/Kubernetes集群环境准备.md",
    "content": "# 一、k8s集群实验环境准备\r\n\r\n  ![架构图](https://github.com/Lancger/opsfull/blob/master/images/K8S.png)\r\n\r\n<table border=\"0\">\r\n    <tr>\r\n        <td><strong>主机名</strong></td>\r\n        <td><strong>IP地址（NAT）</strong></td>\r\n        <td><strong>描述</strong></td>\r\n    </tr>\r\n     <tr>\r\n        <td><strong>linux-node1.example.com</strong></td>\r\n        <td>eth0:192.168.56.11</td>\r\n        <td>Kubernets Master节点/Etcd节点</td>\r\n    </tr>\r\n    <tr>\r\n        <td><strong>linux-node2.example.com</strong></td>\r\n        <td>eth0:192.168.56.12</td>\r\n        <td>Kubernets Node节点/ Etcd节点</td>\r\n    </tr>\r\n    <tr>\r\n        <td><strong>linux-node3.example.com</strong></td>\r\n        <td>eth0:192.168.56.13</td>\r\n        <td>Kubernets Node节点/ Etcd节点</td>\r\n    </tr>\r\n</table>\r\n\r\n# 二、准备工作\r\n  \r\n1、设置主机名\r\n```\r\nhostnamectl set-hostname linux-node1\r\nhostnamectl set-hostname linux-node2\r\nhostnamectl set-hostname linux-node3\r\n```\r\n\r\n2、绑定主机host\r\n```\r\ncat > /etc/hosts <<EOF\r\n127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\r\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\r\n192.168.56.11 linux-node1 linux-node1.example.com\r\n192.168.56.12 linux-node2 linux-node2.example.com\r\n192.168.56.13 linux-node3 linux-node3.example.com\r\nEOF\r\n```\r\n\r\n3、设置部署节点(Master)到其它所有节点的SSH免密码登(包括本机)\r\n```\r\n[root@linux-node1 ~]# ssh-keygen -t rsa\r\n[root@linux-node1 ~]# ssh-copy-id linux-node1\r\n[root@linux-node1 ~]# ssh-copy-id linux-node2\r\n[root@linux-node1 ~]# ssh-copy-id linux-node3\r\n```\r\n\r\n4、关闭防火墙和selinux\r\n```\r\n#关闭防火墙\r\nsystemctl disable firewalld\r\nsystemctl stop firewalld\r\n\r\n#关闭selinux\r\nsed -i \"s/SELINUX=enforcing/SELINUX=disabled/g\" /etc/sysconfig/selinux\r\nsed -i \"s/SELINUXTYPE=targeted/SELINUXTYPE=disabled/g\" /etc/sysconfig/selinux\r\nsetenforce 0\r\n```\r\n\r\n5、其他配置\r\n```\r\nyum install -y ntpdate wget lrzsz vim net-tools\r\n\r\n#加入crontab\r\n1 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1\r\n\r\n#设置时区\r\ncp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime\r\n\r\n#SSH登录慢\r\nsed -i \"s/#UseDNS yes/UseDNS no/\"  /etc/ssh/sshd_config\r\nsed -i \"s/GSSAPIAuthentication yes/GSSAPIAuthentication no/\"  /etc/ssh/sshd_config\r\nsystemctl restart sshd.service\r\n```\r\n\r\n6、软件包下载\r\n\r\nk8s-v1.12.0版本网盘地址: https://pan.baidu.com/s/1jU427W1f3oSDnzB3bU2s5w\r\n\r\n```\r\n#所有文件存放在/opt/kubernetes目录下\r\nmkdir -p /opt/kubernetes/{cfg,bin,ssl,log}\r\n\r\n#使用二进制方式进行部署\r\n官网下载地址: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#downloads-for-v1121\r\n\r\n#添加环境变量\r\nvim /root/.bash_profile\r\nPATH=$PATH:$HOME/bin:/opt/kubernetes/bin\r\nsource /root/.bash_profile\r\n```\r\n![官网下载链接](https://github.com/Lancger/opsfull/blob/master/images/k8s-soft.jpg)\r\n\r\n7、解压软件包\r\n```\r\ntar -zxvf kubernetes.tar.gz -C /usr/local/src/\r\ntar -zxvf kubernetes-server-linux-amd64.tar.gz -C /usr/local/src/\r\ntar -zxvf kubernetes-client-linux-amd64.tar.gz -C /usr/local/src/\r\ntar -zxvf kubernetes-node-linux-amd64.tar.gz -C /usr/local/src/\r\n```\r\n\r\n"
  },
  {
    "path": "docs/app.md",
    "content": "1.创建一个测试用的deployment\n```\n[root@linux-node1 ~]# kubectl run net-test --image=alpine --replicas=2 sleep 360000\n\n[root@linux-node1 ~]# kubectl get deployment\nNAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\nnet-test   2         2         2            2           2h\n[root@linux-node1 ~]# kubectl delete deployment net-test\n```\n\n2.查看获取IP情况\n```\n[root@linux-node1 ~]# kubectl get pod -o wide\nNAME                        READY     STATUS    RESTARTS   AGE       IP          NODE\nnet-test-5767cb94df-6smfk   1/1       Running   1          1h        10.2.69.3   192.168.56.12\nnet-test-5767cb94df-ctkhz   1/1       Running   1          1h        10.2.17.3   192.168.56.13\n```\n\n3.测试联通性\n```\n[root@linux-node1 ~]# ping -c 1 10.2.69.3\nPING 10.2.69.3 (10.2.69.3) 56(84) bytes of data.\n64 bytes from 10.2.69.3: icmp_seq=1 ttl=61 time=1.39 ms\n\n--- 10.2.69.3 ping statistics ---\n1 packets transmitted, 1 received, 0% packet loss, time 0ms\nrtt min/avg/max/mdev = 1.396/1.396/1.396/0.000 ms\n\n[root@linux-node1 ~]# ping -c 1 10.2.17.3\nPING 10.2.17.3 (10.2.17.3) 56(84) bytes of data.\n64 bytes from 10.2.17.3: icmp_seq=1 ttl=61 time=1.16 ms\n\n--- 10.2.17.3 ping statistics ---\n1 packets transmitted, 1 received, 0% packet loss, time 0ms\nrtt min/avg/max/mdev = 1.164/1.164/1.164/0.000 ms\n\n#如果要在master节点不能ping通pod的IP,则需要检查flanneld服务,下面是各节点的网卡ip情况(发现各节点的flannel0的ip网段都是不一样的)\n#node1\n[root@linux-node1 ~]# ifconfig\ndocker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500\n        inet 10.2.41.1  netmask 255.255.255.0  broadcast 10.2.41.255\n        ether 02:42:77:d9:95:e3  txqueuelen 0  (Ethernet)\n        RX packets 0  bytes 0 (0.0 B)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 0  bytes 0 (0.0 B)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\neth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500\n        inet 192.168.56.11  netmask 255.255.255.0  broadcast 192.168.56.255\n        ether 00:0c:29:e6:00:79  txqueuelen 1000  (Ethernet)\n        RX packets 75548  bytes 10771254 (10.2 MiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 74344  bytes 12700211 (12.1 MiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\nflannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472\n        inet 10.2.41.0  netmask 255.255.0.0  destination 10.2.41.0\n        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)\n        RX packets 30  bytes 2520 (2.4 KiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 30  bytes 2520 (2.4 KiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\nlo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536\n        inet 127.0.0.1  netmask 255.0.0.0\n        loop  txqueuelen 1000  (Local Loopback)\n        RX packets 34140  bytes 8049438 (7.6 MiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 34140  bytes 8049438 (7.6 MiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\n#node2\n[root@linux-node2 ~]# ifconfig\ndocker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400\n        inet 10.2.69.1  netmask 255.255.255.0  broadcast 10.2.69.255\n        ether 02:42:de:56:b5:1e  txqueuelen 0  (Ethernet)\n        RX packets 10  bytes 448 (448.0 B)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 9  bytes 546 (546.0 B)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\neth0: 
flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500\n        inet 192.168.56.12  netmask 255.255.255.0  broadcast 192.168.56.255\n        ether 00:0c:29:ee:65:40  txqueuelen 1000  (Ethernet)\n        RX packets 32893  bytes 4996885 (4.7 MiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 32877  bytes 3737878 (3.5 MiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\nflannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472\n        inet 10.2.69.0  netmask 255.255.0.0  destination 10.2.69.0\n        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)\n        RX packets 3  bytes 252 (252.0 B)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 3  bytes 252 (252.0 B)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\nlo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536\n        inet 127.0.0.1  netmask 255.0.0.0\n        loop  txqueuelen 1000  (Local Loopback)\n        RX packets 347  bytes 36887 (36.0 KiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 347  bytes 36887 (36.0 KiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\nveth09ea856c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400\n        ether c6:be:00:bd:a9:18  txqueuelen 0  (Ethernet)\n        RX packets 10  bytes 588 (588.0 B)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 9  bytes 546 (546.0 B)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\n#node3\n[root@linux-node3 ~]# ifconfig\ndocker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400\n        inet 10.2.17.1  netmask 255.255.255.0  broadcast 10.2.17.255\n        ether 02:42:ac:11:ac:3c  txqueuelen 0  (Ethernet)\n        RX packets 32  bytes 2408 (2.3 KiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 31  bytes 2814 (2.7 KiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\neth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500\n        inet 192.168.56.13  netmask 255.255.255.0  broadcast 192.168.56.255\n        ether 00:0c:29:53:f4:b1  txqueuelen 1000  (Ethernet)\n        RX packets 47504  bytes 7138550 (6.8 MiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 48402  bytes 8310935 (7.9 MiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\nflannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472\n        inet 10.2.17.0  netmask 255.255.0.0  destination 10.2.17.0\n        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)\n        RX packets 27  bytes 2268 (2.2 KiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 27  bytes 2268 (2.2 KiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\nlo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536\n        inet 127.0.0.1  netmask 255.0.0.0\n        loop  txqueuelen 1000  (Local Loopback)\n        RX packets 129  bytes 13510 (13.1 KiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 129  bytes 13510 (13.1 KiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0\n\nveth8630a55b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400\n        ether 72:e9:df:4f:f6:64  txqueuelen 0  (Ethernet)\n        RX packets 32  bytes 2856 (2.7 KiB)\n        RX errors 0  dropped 0  overruns 0  frame 0\n        TX packets 31  bytes 2814 (2.7 KiB)\n        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 
0\n```\n\n4、创建nginx服务\n```\n#创建deployment文件\n[root@linux-node1 ~]# vim  nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  labels:\n    app: nginx\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.13.12\n        ports:\n        - containerPort: 80\n\n#创建deployment\n[root@linux-node1 ~]# kubectl create -f nginx-deployment.yaml\ndeployment.apps \"nginx-deployment\" created\n\n\n#查看deployment\n[root@linux-node1 ~]# kubectl get deployment\nNAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\nnginx-deployment   3         3         3            2           48s\n\n\n#查看deployment详情\n[root@linux-node1 ~]# kubectl describe deployment nginx-deployment\nName:                   nginx-deployment\nNamespace:              default\nCreationTimestamp:      Tue, 09 Oct 2018 15:11:33 +0800\nLabels:                 app=nginx\nAnnotations:            deployment.kubernetes.io/revision=1\nSelector:               app=nginx\nReplicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType:           RollingUpdate\nMinReadySeconds:        0\nRollingUpdateStrategy:  25% max unavailable, 25% max surge\nPod Template:\n  Labels:  app=nginx\n  Containers:\n   nginx:\n    Image:        nginx:1.13.12\n    Port:         80/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nConditions:\n  Type           Status  Reason\n  ----           ------  ------\n  Available      True    MinimumReplicasAvailable\n  Progressing    True    NewReplicaSetAvailable\nOldReplicaSets:  <none>\nNewReplicaSet:   nginx-deployment-6c45fc49cb (3/3 replicas created)\nEvents:\n  Type    Reason             Age   From                   Message\n  ----    ------             ----  ----                   -------\n  Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set nginx-deployment-6c45fc49cb to 3\n\n\n#查看pod\n[root@linux-node1 ~]# kubectl get pod -o wide\nNAME                                READY     STATUS    RESTARTS   AGE       IP          NODE\nnginx-deployment-6c45fc49cb-7rwdp   1/1       Running   0          4m        10.2.76.5   192.168.56.12\nnginx-deployment-6c45fc49cb-8dgkd   1/1       Running   0          4m        10.2.76.4   192.168.56.12\nnginx-deployment-6c45fc49cb-clgkl   1/1       Running   0          4m        10.2.76.4   192.168.56.13\n\n\n#查看pod详情\n[root@linux-node1 ~]# kubectl describe pod nginx-deployment-6c45fc49cb-7rwdp\nName:           nginx-deployment-6c45fc49cb-7rwdp\nNamespace:      default\nNode:           192.168.56.12/192.168.56.12\nStart Time:     Tue, 09 Oct 2018 15:11:33 +0800\nLabels:         app=nginx\n                pod-template-hash=2701970576\nAnnotations:    <none>\nStatus:         Running\nIP:             10.2.76.5\nControlled By:  ReplicaSet/nginx-deployment-6c45fc49cb\nContainers:\n  nginx:\n    Container ID:   docker://0ab9b4f9bf3691f16e9cb6836a7375cb7f886398bfa8a81147e9a24f3634d591\n    Image:          nginx:1.13.12\n    Image ID:       docker-pullable://nginx@sha256:b1d09e9718890e6ebbbd2bc319ef1611559e30ce1b6f56b2e3b479d9da51dc35\n    Port:           80/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 09 Oct 2018 15:12:33 +0800\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4cgj8 (ro)\nConditions:\n  Type           Status\n  Initialized    True\n  Ready          True\n  PodScheduled   True\nVolumes:\n  default-token-4cgj8:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-4cgj8\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     <none>\nEvents:\n  Type    Reason                 Age   From                    Message\n  ----    ------                 ----  ----                    -------\n  Normal  Scheduled              4m    default-scheduler       Successfully assigned nginx-deployment-6c45fc49cb-7rwdp to 192.168.56.12\n  Normal  SuccessfulMountVolume  4m    kubelet, 192.168.56.12  MountVolume.SetUp succeeded for volume \"default-token-4cgj8\"\n  Normal  Pulling                4m    kubelet, 192.168.56.12  pulling image \"nginx:1.13.12\"\n  Normal  Pulled                 3m    kubelet, 192.168.56.12  Successfully pulled image \"nginx:1.13.12\"\n  Normal  Created                3m    kubelet, 192.168.56.12  Created container\n  Normal  Started                3m    kubelet, 192.168.56.12  Started container\n\n#导出资源描述\nkubectl get --export -o yaml 命令会以Yaml格式导出系统中已有资源描述\n\n比如，我们可以将系统中 nginx 的 deployment 描述导出成 Yaml 文件（注意这里要用 deployment 的名字 nginx-deployment，而不是 pod 的名字；输出文件名与源文件区分开，避免覆盖）\nkubectl get deployment nginx-deployment --export -o yaml > nginx-deployment-export.yaml\n\n\n#测试pod访问\n测试访问nginx镜像（这里在对应的节点上测试，正常情况下其他节点也可以访问）\n[root@linux-node3 ~]# curl --head http://10.2.76.4\nHTTP/1.1 200 OK\nServer: nginx/1.13.12\nDate: Tue, 09 Oct 2018 07:17:55 GMT\nContent-Type: text/html\nContent-Length: 612\nLast-Modified: Mon, 09 Apr 2018 16:01:09 GMT\nConnection: keep-alive\nETag: \"5acb8e45-264\"\nAccept-Ranges: bytes\n\n```\n\n5、更新Deployment\n```\n#--record  记录操作命令，方便以后回滚\n[root@linux-node1 ~]# kubectl set image deployment/nginx-deployment nginx=nginx:1.12.1 --record\ndeployment.apps \"nginx-deployment\" image updated\n\n```\n\n6、查看更新后的Deployment\n```\n#这里发现镜像已经更新为1.12.1版本了，CURRENT当前副本数为4个，期望值DESIRED为3个，说明正在进行滚动更新\n[root@linux-node1 ~]# kubectl get deployment -o wide\nNAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES         SELECTOR\nnginx-deployment   3         4         1            3           13m       nginx        nginx:1.12.1   app=nginx\n```\n\n7、查看历史记录\n```\n[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment\ndeployments \"nginx-deployment\"\nREVISION  CHANGE-CAUSE\n1         <none>               ---第一个没有，是因为我们创建的时候没有加上--record参数\n4         kubectl set image deployment/nginx-deployment nginx=nginx:1.12.2 --record=true\n5         kubectl set image deployment/nginx-deployment nginx=nginx:1.12.1 --record=true\n```\n\n7、查看具体某一个版本的升级历史\n```\n[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment --revision=1\ndeployments \"nginx-deployment\" with revision #1\nPod Template:\n  Labels:\tapp=nginx\n\tpod-template-hash=2701970576\n  Containers:\n   nginx:\n    Image:\tnginx:1.13.12\n    Port:\t80/TCP\n    Host Port:\t0/TCP\n    Environment:\t<none>\n    Mounts:\t<none>\n  Volumes:\t<none>\n```\n\n8、快速回滚到上一个版本\n```\n[root@linux-node1 ~]# kubectl rollout undo deployment/nginx-deployment\ndeployment.apps \"nginx-deployment\"\n[root@linux-node1 ~]#\n```\n\n9、扩容到5个副本\n```\n[root@linux-node1 ~]# kubectl get pod -o wide   ----之前是3个pod\nNAME                                READY     STATUS    RESTARTS   AGE       IP           NODE\nnginx-deployment-7498dc98f8-48lqg   1/1       Running   0          2m        
10.2.76.15   192.168.56.12\nnginx-deployment-7498dc98f8-g4zkp   1/1       Running   0          2m        10.2.76.9    192.168.56.13\nnginx-deployment-7498dc98f8-z2466   1/1       Running   0          2m        10.2.76.16   192.168.56.12\n\n[root@linux-node1 ~]# kubectl scale deployment nginx-deployment --replicas 5\ndeployment.extensions \"nginx-deployment\" scaled\n\n[root@linux-node1 ~]# kubectl get pod -o wide     ----现在扩容到了5个pod\nNAME                                READY     STATUS    RESTARTS   AGE       IP           NODE\nnginx-deployment-7498dc98f8-28894   1/1       Running   0          8s        10.2.76.10   192.168.56.13\nnginx-deployment-7498dc98f8-48lqg   1/1       Running   0          2m        10.2.76.15   192.168.56.12\nnginx-deployment-7498dc98f8-g4zkp   1/1       Running   0          2m        10.2.76.9    192.168.56.13\nnginx-deployment-7498dc98f8-tt7z5   1/1       Running   0          7s        10.2.76.17   192.168.56.12\nnginx-deployment-7498dc98f8-z2466   1/1       Running   0          2m        10.2.76.16   192.168.56.12\n```\n\n10、Pod ip 变化频繁, 引入service-ip\n```\n#创建nginx-server\n[root@linux-node1 ~]# cat nginx-service.yaml\nkind: Service\napiVersion: v1\nmetadata:\n  name: nginx-service\nspec:\n  selector:\n    app: nginx\n  ports:\n  - protocol: TCP\n    port: 80\n    targetPort: 80\n    \n    \n[root@linux-node1 ~]# kubectl create -f nginx-service.yaml\nservice \"nginx-service\" created\n\n#发现给我们创建了一个vip 10.1.46.200 并且通过lvs做了负载均衡\n[root@linux-node1 ~]# kubectl get service\nNAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE\nkubernetes      ClusterIP   10.1.0.1      <none>        443/TCP   3h\nnginx-service   ClusterIP   10.1.46.200   <none>        80/TCP    5m\n\n#在node节点使用ipvsadm -Ln查看负载均衡后端节点\n[root@linux-node2 ~]# ipvsadm -Ln\nIP Virtual Server version 1.2.1 (size=4096)\nProt LocalAddress:Port Scheduler Flags\n  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn\nTCP  10.1.46.200:80 rr\n  -> 10.2.76.11:80                Masq    1      0          0\n  -> 10.2.76.12:80                Masq    1      0          0\n  -> 10.2.76.13:80                Masq    1      0          0\n  -> 10.2.76.18:80                Masq    1      0          0\n  -> 10.2.76.19:80                Masq    1      0          0\n  \n#在master上访问vip不行，是因为没有安装kube-proxy服务，需要在node节点去测试验证\n[root@linux-node1 ~]# curl --head http://10.1.46.200\n\n[root@linux-node2 ~]# curl --head http://10.1.46.200\nHTTP/1.1 200 OK\nServer: nginx/1.10.3\nDate: Tue, 09 Oct 2018 07:55:57 GMT\nContent-Type: text/html\nContent-Length: 612\nLast-Modified: Tue, 31 Jan 2017 15:01:11 GMT\nConnection: keep-alive\nETag: \"5890a6b7-264\"\nAccept-Ranges: bytes\n\n#每执行一次curl --head http://10.1.46.200请求，后端InActConn连接数就会增加1\n[root@linux-node2 ~]# ipvsadm -Ln\nIP Virtual Server version 1.2.1 (size=4096)\nProt LocalAddress:Port Scheduler Flags\n  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn\nTCP  10.1.46.200:80 rr\n  -> 10.2.76.11:80                Masq    1      0          1\n  -> 10.2.76.12:80                Masq    1      0          1\n  -> 10.2.76.13:80                Masq    1      0          2\n  -> 10.2.76.18:80                Masq    1      0          2\n  -> 10.2.76.19:80                Masq    1      0          2\n```\n"
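补充：上面的 nginx-service 是 ClusterIP 类型，只能在装有 kube-proxy 的节点上访问；如果想从集群外访问，可以把它改成 NodePort 类型（示例清单，selector 与上面的 deployment 保持一致）：

```
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: NodePort           # 默认会在每个节点上分配一个 30000-32767 范围内的端口
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

应用后查看分配到的端口，然后通过 节点IP:NodePort 访问：

```
kubectl apply -f nginx-service.yaml
kubectl get svc nginx-service    # PORT(S) 列的 80:3xxxx/TCP 即为分配到的 NodePort
```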
  },
  {
    "path": "docs/app2.md",
    "content": "1.查询命名空间\n```\n[root@linux-node1 ~]# kubectl get namespace --all-namespaces\nNAME              STATUS   AGE\ndefault           Active   3d13h\nkube-node-lease   Active   3d13h\nkube-public       Active   3d13h\nkube-system       Active   3d13h\n```\n\n2.查询健康状况\n```\n[root@linux-node1 ~]# kubectl get cs --all-namespaces\nNAME                 STATUS    MESSAGE             ERROR\ncontroller-manager   Healthy   ok\nscheduler            Healthy   ok\netcd-0               Healthy   {\"health\":\"true\"}\netcd-2               Healthy   {\"health\":\"true\"}\netcd-1               Healthy   {\"health\":\"true\"}\n```\n\n3.查询node\n```\n[root@linux-node1 ~]# kubectl get node -o wide\nNAME         STATUS                     ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME\n10.33.35.5   Ready,SchedulingDisabled   master   3d13h   v1.15.2   10.33.35.5    <none>        CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://18.9.6\n10.33.35.6   Ready                      node     3d13h   v1.15.2   10.33.35.6    <none>        CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://18.9.6\n10.33.35.7   Ready                      node     3d13h   v1.15.2   10.33.35.7    <none>        CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://18.9.6\n```\n\n4.创建一个测试用的deployment\n```\n[root@linux-node1 ~]# kubectl run net-test --image=alpine --replicas=2 sleep 360000\n\n[root@linux-node1 ~]# kubectl get deployment\nNAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\nnet-test   2         2         2            2           2h\n[root@linux-node1 ~]# kubectl delete deployment net-test\n```\n\n5.查看获取IP情况\n```\n[root@linux-node1 ~]# kubectl get pod -o wide\nNAME                        READY     STATUS    RESTARTS   AGE       IP          NODE\nnet-test-54ddf4f6c7-qfgw9           1/1     Running   0          22s   172.20.2.131   10.33.35.7   <none>           <none>\nnet-test-54ddf4f6c7-rwgmc           1/1     Running   0          22s   172.20.1.137   10.33.35.6   <none>           <none>\n```\n\n6、创建nginx服务\n```\n#创建deployment文件\n[root@linux-node1 ~]# vim  nginx-deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  labels:\n    app: nginx\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.13.12\n        ports:\n        - containerPort: 80\n\n#创建deployment\n[root@linux-node1 ~]# kubectl create -f nginx-deployment.yaml\ndeployment.apps \"nginx-deployment\" created\n\n\n#查看deployment\n[root@linux-node1 ~]# kubectl get deployment\nNAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\nnginx-deployment   3         3         3            2           48s\n\n\n#查看deployment详情\n[root@linux-node1 ~]# kubectl describe deployment nginx-deployment\nName:                   nginx-deployment\nNamespace:              default\nCreationTimestamp:      Tue, 09 Oct 2018 15:11:33 +0800\nLabels:                 app=nginx\nAnnotations:            deployment.kubernetes.io/revision=1\nSelector:               app=nginx\nReplicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable\nStrategyType:           RollingUpdate\nMinReadySeconds:        0\nRollingUpdateStrategy:  25% max unavailable, 25% max surge\nPod Template:\n  Labels:  app=nginx\n  Containers:\n   nginx:\n    Image:        
nginx:1.13.12\n    Port:         80/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nConditions:\n  Type           Status  Reason\n  ----           ------  ------\n  Available      True    MinimumReplicasAvailable\n  Progressing    True    NewReplicaSetAvailable\nOldReplicaSets:  <none>\nNewReplicaSet:   nginx-deployment-6c45fc49cb (3/3 replicas created)\nEvents:\n  Type    Reason             Age   From                   Message\n  ----    ------             ----  ----                   -------\n  Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set nginx-deployment-6c45fc49cb to 3\n\n\n#查看pod\n[root@linux-node1 ~]# kubectl get pod -o wide\nNAME                                READY     STATUS    RESTARTS   AGE       IP          NODE\nnginx-deployment-6c45fc49cb-7rwdp   1/1       Running   0          4m        10.2.76.5   192.168.56.12\nnginx-deployment-6c45fc49cb-8dgkd   1/1       Running   0          4m        10.2.76.4   192.168.56.12\nnginx-deployment-6c45fc49cb-clgkl   1/1       Running   0          4m        10.2.76.4   192.168.56.13\n\n\n#查看pod详情\n[root@linux-node1 ~]# kubectl describe pod nginx-deployment-6c45fc49cb-7rwdp\nName:           nginx-deployment-6c45fc49cb-7rwdp\nNamespace:      default\nNode:           192.168.56.12/192.168.56.12\nStart Time:     Tue, 09 Oct 2018 15:11:33 +0800\nLabels:         app=nginx\n                pod-template-hash=2701970576\nAnnotations:    <none>\nStatus:         Running\nIP:             10.2.76.5\nControlled By:  ReplicaSet/nginx-deployment-6c45fc49cb\nContainers:\n  nginx:\n    Container ID:   docker://0ab9b4f9bf3691f16e9cb6836a7375cb7f886398bfa8a81147e9a24f3634d591\n    Image:          nginx:1.13.12\n    Image ID:       docker-pullable://nginx@sha256:b1d09e9718890e6ebbbd2bc319ef1611559e30ce1b6f56b2e3b479d9da51dc35\n    Port:           80/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 09 Oct 2018 15:12:33 +0800\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4cgj8 (ro)\nConditions:\n  Type           Status\n  Initialized    True\n  Ready          True\n  PodScheduled   True\nVolumes:\n  default-token-4cgj8:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-4cgj8\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     <none>\nEvents:\n  Type    Reason                 Age   From                    Message\n  ----    ------                 ----  ----                    -------\n  Normal  Scheduled              4m    default-scheduler       Successfully assigned nginx-deployment-6c45fc49cb-7rwdp to 192.168.56.12\n  Normal  SuccessfulMountVolume  4m    kubelet, 192.168.56.12  MountVolume.SetUp succeeded for volume \"default-token-4cgj8\"\n  Normal  Pulling                4m    kubelet, 192.168.56.12  pulling image \"nginx:1.13.12\"\n  Normal  Pulled                 3m    kubelet, 192.168.56.12  Successfully pulled image \"nginx:1.13.12\"\n  Normal  Created                3m    kubelet, 192.168.56.12  Created container\n  Normal  Started                3m    kubelet, 192.168.56.12  Started container\n\n#导出资源描述\nkubectl get   --export -o yaml 命令会以Yaml格式导出系统中已有资源描述\n\n比如，我们可以将系统中 nginx 部署的描述导成 Yaml 文件\nkubectl get deployment nginx-deployment-6c45fc49cb-7rwdp --export -o yaml > 
nginx-deployment.yaml\n\n\n#测试pod访问\n测试访问nginx镜像（在对应的节点上测试，本来是其他节点也可以正常访问的）\n[root@linux-node3 ~]# curl --head http://10.2.76.4\nHTTP/1.1 200 OK\nServer: nginx/1.13.12\nDate: Tue, 09 Oct 2018 07:17:55 GMT\nContent-Type: text/html\nContent-Length: 612\nLast-Modified: Mon, 09 Apr 2018 16:01:09 GMT\nConnection: keep-alive\nETag: \"5acb8e45-264\"\nAccept-Ranges: bytes\n\n```\n\n8、更新Deployment\n```\n#--record  记录日志，方便以后回滚\n[root@linux-node1 ~]# kubectl set image deployment/nginx-deployment nginx=nginx:1.12.1 --record\ndeployment.apps \"nginx-deployment\" image updated\n\n```\n\n9、查看更新后的Deployment\n```\n#这里发现镜像已经更新为1.12.1版本了，然后CURRENT（当前镜像数为4个，期望值DESIRED为3个，说明正在进行滚动更新）\n[root@linux-node1 ~]# kubectl get deployment -o wide\nNAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES         SELECTOR\nnginx-deployment   3         4         1            3           13m       nginx        nginx:1.12.1   app=nginx\n```\n\n10、查看历史记录\n```\n[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment\ndeployments \"nginx-deployment\"\nREVISION  CHANGE-CAUSE\n1         <none>               ---第一个没有，是因为我们创建的时候没有加上--record参数\n4         kubectl set image deployment/nginx-deployment nginx=nginx:1.12.2 --record=true\n5         kubectl set image deployment/nginx-deployment nginx=nginx:1.12.1 --record=true\n```\n\n11、查看具体某一个版本的升级历史\n```\n[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment --revision=1\ndeployments \"nginx-deployment\" with revision #1\nPod Template:\n  Labels:\tapp=nginx\n\tpod-template-hash=2701970576\n  Containers:\n   nginx:\n    Image:\tnginx:1.13.12\n    Port:\t80/TCP\n    Host Port:\t0/TCP\n    Environment:\t<none>\n    Mounts:\t<none>\n  Volumes:\t<none>\n```\n\n12、快速回滚到上一个版本\n```\n[root@linux-node1 ~]# kubectl rollout undo deployment/nginx-deployment\ndeployment.apps \"nginx-deployment\"\n[root@linux-node1 ~]#\n```\n\n13、扩容到5个节点\n```\n[root@linux-node1 ~]# kubectl get pod -o wide   ----之前是3个pod\nNAME                                READY     STATUS    RESTARTS   AGE       IP           NODE\nnginx-deployment-7498dc98f8-48lqg   1/1       Running   0          2m        10.2.76.15   192.168.56.12\nnginx-deployment-7498dc98f8-g4zkp   1/1       Running   0          2m        10.2.76.9    192.168.56.13\nnginx-deployment-7498dc98f8-z2466   1/1       Running   0          2m        10.2.76.16   192.168.56.12\n\n[root@linux-node1 ~]# kubectl scale deployment nginx-deployment --replicas 5\ndeployment.extensions \"nginx-deployment\" scaled\n\n[root@linux-node1 ~]# kubectl get pod -o wide     ----现在扩容到了5个pod\nNAME                                READY     STATUS    RESTARTS   AGE       IP           NODE\nnginx-deployment-7498dc98f8-28894   1/1       Running   0          8s        10.2.76.10   192.168.56.13\nnginx-deployment-7498dc98f8-48lqg   1/1       Running   0          2m        10.2.76.15   192.168.56.12\nnginx-deployment-7498dc98f8-g4zkp   1/1       Running   0          2m        10.2.76.9    192.168.56.13\nnginx-deployment-7498dc98f8-tt7z5   1/1       Running   0          7s        10.2.76.17   192.168.56.12\nnginx-deployment-7498dc98f8-z2466   1/1       Running   0          2m        10.2.76.16   192.168.56.12\n```\n\n14、Pod ip 变化频繁, 引入service-ip\n```\n#创建nginx-server\n[root@linux-node1 ~]# cat nginx-service.yaml\nkind: Service\napiVersion: v1\nmetadata:\n  name: nginx-service\nspec:\n  selector:\n    app: nginx\n  ports:\n  - protocol: TCP\n    port: 80\n    targetPort: 80\n    \n    \n[root@linux-node1 ~]# kubectl create -f 
nginx-service.yaml\nservice \"nginx-service\" created\n\n#发现给我们创建了一个vip 10.1.46.200 并且通过lvs做了负载均衡\n[root@linux-node1 ~]# kubectl get service --all-namespaces\nNAMESPACE     NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE\ndefault       kubernetes                ClusterIP   172.68.0.1       <none>        443/TCP                       3d13h\ndefault       my-mc-service             ClusterIP   172.68.213.121   <none>        60001/TCP,60002/TCP           23m\ndefault       php-service               ClusterIP   172.68.210.6     <none>        9898/TCP                      18h\ndefault       test-hello                ClusterIP   172.68.248.205   <none>        80/TCP                        23h\nkube-system   heapster                  ClusterIP   172.68.19.198    <none>        80/TCP                        3d13h\nkube-system   kube-dns                  ClusterIP   172.68.0.2       <none>        53/UDP,53/TCP,9153/TCP        3d13h\nkube-system   kubernetes-dashboard      NodePort    172.68.58.252    <none>        443:26400/TCP                 3d13h\nkube-system   metrics-server            ClusterIP   172.68.31.222    <none>        443/TCP                       3d13h\nkube-system   traefik-ingress-service   NodePort    172.68.221.108   <none>        80:23456/TCP,8080:31477/TCP   3d13h\n\n#删除service\n[root@linux-node1 ~]# kubectl delete service nginx-service\nservice \"nginx-service\" deleted\n\n#查看service的后端节点\n[root@linux-node1 ~]# kubectl describe svc nginx-service\nName:              nginx-service\nNamespace:         default\nLabels:            <none>\nAnnotations:       <none>\nSelector:          app=nginx\nType:              ClusterIP\nIP:                172.68.176.9\nPort:              <unset>  80/TCP\nTargetPort:        80/TCP\nEndpoints:         172.20.1.138:80,172.20.2.132:80,172.20.2.133:80   --这里发现有3个后端节点\nSession Affinity:  None\nEvents:            <none>\n```\n\n15.创建自定义Ingress\n有了ingress-controller，我们就可以创建自定义的Ingress了。这里已提前搭建好了nginx服务，我们针对nginx创建一个Ingress：\n```\n#vim nginx-ingress.yaml \n\napiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: nginx-ingress\n  namespace: default\n\nspec:\n  rules:\n  - host: myk8s.com\n    http:\n      paths:\n      - path: /\n        backend:\n          serviceName: nginx-service\n          servicePort: 80\n\t\n其中：\n\nrules中的host必须为域名，不能为IP，表示Ingress-controller的Pod所在主机域名，也就是Ingress-controller的IP对应的域名。\npaths中的path则表示映射的路径。如映射/表示若访问myk8s.com，则会将请求转发至nginx的service，端口为80。\n\nkubectl create -f nginx-ingress.yaml\n\nkubectl get ingress -o wide\n\nkubectl delete ingress nginx-ingress\n\n\n#需要找出Ingress-controller的Pod所在主机（这里发现是在node2机器）\n[root@linux-node1 ~]# kubectl get pods --all-namespaces -o wide\nNAMESPACE     NAME                                          READY   STATUS                   RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES\ndefault       busybox                                       1/1     Running                  41         41h     172.20.1.27    10.33.35.6   <none>           <none>\ndefault       my-mc-deployment-76f77494c7-kxv85             2/2     Running                  0          63m     172.20.2.130   10.33.35.7   <none>           <none>\nkube-system   traefik-ingress-controller-766dbfdddd-fzb8d   1/1     Running                  1          3d14h   172.20.1.14    10.33.35.6   <none>           <none>\n\n#然后机器绑定域名\n10.33.35.6 myk8s.com\n\n#访问测试\n[root@linux-node1 ~]# curl http://myk8s.com -I\nHTTP/1.1 200 OK\nServer: nginx/1.12.2\nDate: 
Fri, 23 Aug 2019 04:07:44 GMT\nContent-Type: text/html\nContent-Length: 3700\nLast-Modified: Fri, 10 May 2019 08:08:40 GMT\nConnection: keep-alive\nETag: \"5cd53188-e74\"\nAccept-Ranges: bytes\n```\n\n\n参考资料：\n\nhttps://www.jianshu.com/p/feeea0bbd73e\n"
  },
  {
    "path": "docs/ca.md",
    "content": "# 手动制作CA证书\n\n```\nKubernetes 系统各组件需要使用 TLS 证书对通信进行加密。\n\nCA证书管理工具:\n• easyrsa       ---openvpn比较常用\n• openssl\n• cfssl         ---使用最多，使用json文件格式，相对简单\n```\n\n## 1.安装 CFSSL\n```\n[root@linux-node1 ~]# cd /usr/local/src\n[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64\n[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64\n[root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64\n[root@linux-node1 src]# chmod +x cfssl*\n[root@linux-node1 src]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo\n[root@linux-node1 src]# mv cfssljson_linux-amd64  /opt/kubernetes/bin/cfssljson\n[root@linux-node1 src]# mv cfssl_linux-amd64  /opt/kubernetes/bin/cfssl\n\n#复制cfssl命令文件到k8s-node1和k8s-node2节点。如果实际中多个节点，就都需要同步复制。\n[root@linux-node1 ~]# scp /opt/kubernetes/bin/cfssl* 192.168.56.12:/opt/kubernetes/bin\n[root@linux-node1 ~]# scp /opt/kubernetes/bin/cfssl* 192.168.56.13:/opt/kubernetes/bin\n```\n\n## 2.初始化cfssl\n```\n[root@linux-node1 src]# mkdir ssl && cd ssl\n[root@linux-node1 ssl]# cfssl print-defaults config > config.json    --生成ca-config.json的样例(可省略)\n[root@linux-node1 ssl]# cfssl print-defaults csr > csr.json    --生成ca-csr.json的样例(可省略)\n```\n\n## 3.创建用来生成 CA 文件的 JSON 配置文件\n```\n[root@linux-node1 ssl]#\ncat > ca-config.json <<EOF\n{\n  \"signing\": {\n    \"default\": {\n      \"expiry\": \"87600h\"\n    },\n    \"profiles\": {\n      \"kubernetes\": {\n         \"expiry\": \"87600h\",\n         \"usages\": [\n            \"signing\",\n            \"key encipherment\",\n            \"server auth\",\n            \"client auth\"\n        ]\n      }\n    }\n  }\n}\nEOF\n[root@linux-node1 ssl]#\n```\n\n\n## 4.创建用来生成 CA 证书签名请求（CSR）的 JSON 配置文件\n```\n[root@linux-node1 ssl]# \ncat > ca-csr.json <<EOF\n{\n    \"CN\": \"kubernetes\",\n    \"key\": {\n        \"algo\": \"rsa\",\n        \"size\": 2048\n    },\n    \"names\": [\n        {\n            \"C\": \"CN\",\n            \"L\": \"Beijing\",\n            \"ST\": \"Beijing\",\n            \"O\": \"k8s\",\n            \"OU\": \"System\"\n        }\n    ]\n}\nEOF\n[root@linux-node1 ssl]# \n```\n\n## 5.生成CA证书（ca.pem）和密钥（ca-key.pem）\n```\n[root@ linux-node1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca\n\n#执行上面的命令后，会生成下面三个文件\nca.csr  ca-key.pem  ca.pem\n\n[root@ linux-node1 ssl]# ls -l ca*\n-rw-r--r-- 1 root root  290 Mar  4 13:45 ca-config.json\n-rw-r--r-- 1 root root 1001 Mar  4 14:09 ca.csr\n-rw-r--r-- 1 root root  208 Mar  4 13:51 ca-csr.json\n-rw------- 1 root root 1679 Mar  4 14:09 ca-key.pem\n-rw-r--r-- 1 root root 1359 Mar  4 14:09 ca.pem\n```\n\n## 6.分发证书\n```\n[root@linux-node1 ssl]# cp ca.csr ca.pem ca-key.pem ca-config.json /opt/kubernetes/ssl\n\n#scp证书到k8s-node1和k8s-node2节点\n[root@linux-node1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.56.12:/opt/kubernetes/ssl \n[root@linux-node1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.56.13:/opt/kubernetes/ssl\n```\n"
  },
  {
    "path": "docs/coredns.md",
    "content": "# Kubernetes CoreDNS\n\nk8s集群内部服务发现是通过dns来实现的，其他pod之间的域名解析服务都是靠dns来实现的，目前支持2种dns，一种kubedns,一种coredns.\n\n## 创建CoreDNS\n\n```\n#将本项目clone到/opt/目录\n\n[root@linux-node1 ~]# kubectl create -f /opt/opsfull/example/coredns/coredns.yaml\n\n[root@linux-node1 ~]# kubectl get pod -n kube-system    --k8s内部的服务默认放在kube-system单独的命名空间\nNAME                                    READY     STATUS    RESTARTS   AGE\ncoredns-77c989547b-9pj8b                1/1       Running   0          6m\ncoredns-77c989547b-kncd5                1/1       Running   0          6m\n\n\n#查看service\n[root@linux-node1 ~]# kubectl get service -n kube-system\nNAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE\ncoredns                ClusterIP   10.1.0.2     <none>        53/UDP,53/TCP   2m\n\n#在node节点使用ipvsadm -Ln查看转发的后端节点（TCP和UDP的53端口）\n[root@linux-node2 ~]# ipvsadm -Ln\nIP Virtual Server version 1.2.1 (size=4096)\nProt LocalAddress:Port Scheduler Flags\n  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn\nTCP  10.1.0.2:53 rr\n  -> 10.2.76.14:53                Masq    1      0          0\n  -> 10.2.76.20:53                Masq    1      0          0\nUDP  10.1.0.2:53 rr\n  -> 10.2.76.14:53                Masq    1      0          0\n  -> 10.2.76.20:53                Masq    1      0          0\n \n#发现是转到这2个pod容器\n[root@linux-node1 ~]# kubectl get pod -n kube-system -o wide\nNAME                                    READY     STATUS    RESTARTS   AGE       IP           NODE\ncoredns-77c989547b-4f9xz                1/1       Running   0          5m        10.2.76.20   192.168.56.12\ncoredns-77c989547b-9zm4m                1/1       Running   0          5m        10.2.76.14   192.168.56.13\n```\n\n## 测试CoreDNS\n\n```\n[root@linux-node1 coredns]# kubectl run dns-test --rm -it --image=alpine /bin/sh\nIf you don't see a command prompt, try pressing enter.\n/ # ping www.qq.com\nPING www.qq.com (121.51.142.21): 56 data bytes\n64 bytes from 121.51.142.21: seq=0 ttl=127 time=20.864 ms\n64 bytes from 121.51.142.21: seq=1 ttl=127 time=19.937 ms\n```\n"
  },
  {
    "path": "docs/dashboard.md",
    "content": "# Kubernetes Dashboard\n\n## 创建Dashboard\n```\n[root@linux-node1 ~]# kubectl create -f /srv/addons/dashboard/\n[root@linux-node1 ~]# kubectl cluster-info\nKubernetes master is running at https://192.168.56.11:6443\nkubernetes-dashboard is running at https://192.168.56.11:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n\n```\n## 查看Dashboard信息\n```\n#发现Dashboard是运行在node3节点\n[root@linux-node1 ~]# kubectl get pod -n kube-system -o wide\nNAME                                    READY     STATUS    RESTARTS   AGE       IP          NODE\nkubernetes-dashboard-66c9d98865-bqwl5   1/1       Running   0          1h        10.2.76.3   192.168.56.13\n\n#查看Dashboard运行日志\n[root@linux-node1 ~]# kubectl logs pod/kubernetes-dashboard-66c9d98865-bqwl5 -n kube-system\n\n#查看Dashboard服务IP(可以访问任意node节点的34696端口就可以访问到Dashboard页面 https://192.168.56.13:34696/#!/overview?namespace=default,如何master节点安装了kube-proxy也可以访问)\n[root@linux-node1 ~]# kubectl get service -n kube-system\nNAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE\nkubernetes-dashboard   NodePort   10.1.36.42   <none>        443:34696/TCP   1h\n\n```\nhttps://192.168.56.13:34696/#!/overview?namespace=default\n\n  ![dashboard登录](https://github.com/Lancger/opsfull/blob/master/images/Dashboard-login.jpg)\n\n\n## 访问Dashboard\n\nhttps://192.168.56.11:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy\n用户名:admin  密码：admin 选择令牌模式登录。\n\n### 获取Token\n```\n[root@linux-node1 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')\nName:         admin-user-token-c97bl\nNamespace:    kube-system\nLabels:       <none>\nAnnotations:  kubernetes.io/service-account.name=admin-user\n              kubernetes.io/service-account.uid=379208ff-cb86-11e8-9f1c-080027dc9cd8\n\nType:  kubernetes.io/service-account-token\n\nData\n====\nca.crt:     1359 bytes\nnamespace:  11 bytes\ntoken:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWM5N2JsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNzkyMDhmZi1jYjg2LTExZTgtOWYxYy0wODAwMjdkYzljZDgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.LopL7AD9feBZmhAuAUlPNjfthlJ1lJAPG6VXgBl-MZdofZpqNU9m-o-7M4hHa5AXkpeLvQrA1UKWWSR9eWEN06ugIkcH4Pk-tKrSVQUM6CDaE7eBdK91x1ltTonLz62_z_X8IvRYx1piv3wRUijoyRHCdziBnOhg67sT974CSPoRSOpl7ZR0Kn_L0LYRMOE9xfU3w4-sCpSx-jgc5oysAix95NqZgIkaZ6TRANpCnHE66fqL6yUwQxQ5yt7pw7J2iuSE3OxPU_cKArjYlWUvr72zG3SxZaR7dzQEggwmjSSeHRs0OK0968QAtCca1NTmcPaTtKhXYfXXdtusVCx7bA\n```\n  ![dashboard预览](https://github.com/Lancger/opsfull/blob/master/images/Dashboard.jpg)\n"
  },
  {
    "path": "docs/dashboard_op.md",
    "content": "# Kubernetes Dashboard\n\n```\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\n\ncd /etc/ansible/\nansible-playbook 07.cluster-addon.yml\n\nansible-playbook 90.setup.yml\n\nsystemctl restart iptables\nsystemctl restart kube-scheduler\nsystemctl restart kube-controller-manager\nsystemctl restart kube-apiserver\nsystemctl restart etcd\nsystemctl restart docker\n\nsystemctl restart iptables\nsystemctl restart kubelet\nsystemctl restart kube-proxy\nsystemctl restart etcd\nsystemctl restart docker\n```\n\n## 1、查看deployment\n```\n[root@node1 ~]# kubectl get deployment -A\nNAMESPACE     NAME                         READY   UP-TO-DATE   AVAILABLE   AGE\ndefault       my-mc-deployment             3/3     3            3           2d18h\ndefault       net                          3/3     3            3           4d15h\ndefault       net-test                     2/2     2            2           4d16h\ndefault       test-hello                   1/1     1            1           6d\ndefault       test-jrr                     1/1     1            1           43h\nkube-system   coredns                      0/2     2            0           4d15h\nkube-system   heapster                     1/1     1            1           8d\nkube-system   kubernetes-dashboard         0/1     1            0           4m42s\nkube-system   metrics-server               0/1     1            0           8d\nkube-system   traefik-ingress-controller   1/1     1            1           2d18h\n\n#查看deployment详情\n[root@node1 ~]# kubectl describe deployment kubernetes-dashboard -n kube-system\n\n#删除deployment\n[root@node1 ~]# kubectl delete deployment kubernetes-dashboard -n kube-system\ndeployment.extensions \"kubernetes-dashboard\" deleted\n```\n\n## 2、查看Service信息\n```\n[root@tw06a2753 ~]# kubectl get service -A -o wide\nNAMESPACE     NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE     SELECTOR\ndefault       kubernetes                ClusterIP   172.68.0.1       <none>        443/TCP                       8d      <none>\ndefault       my-mc-service             ClusterIP   172.68.113.166   <none>        60001/TCP,60002/TCP           3d14h   app=products,department=sales\ndefault       nginx-service             ClusterIP   172.68.176.9     <none>        80/TCP                        5d      app=nginx\ndefault       php-service               ClusterIP   172.68.210.6     <none>        9898/TCP                      5d19h   app=nginx-php\ndefault       test-hello                ClusterIP   172.68.248.205   <none>        80/TCP                        6d      run=test-hello\ndefault       test-jrr-php-service      ClusterIP   172.68.58.202    <none>        9090/TCP                      43h     app=test-jrr-nginx-php\nkube-system   heapster                  ClusterIP   172.68.19.198    <none>        80/TCP                        8d      k8s-app=heapster\nkube-system   kube-dns                  ClusterIP   172.68.0.2       <none>        53/UDP,53/TCP,9153/TCP        4d15h   k8s-app=kube-dns\nkube-system   kubernetes-dashboard      NodePort    172.68.46.171    <none>        443:29107/TCP                 6m31s   k8s-app=kubernetes-dashboard\nkube-system   metrics-server            ClusterIP   172.68.31.222    <none>        443/TCP                       8d      k8s-app=metrics-server\nkube-system   traefik-ingress-service   NodePort    172.68.124.46    <none>        80:33813/TCP,8080:21315/TCP   2d18h   
k8s-app=traefik-ingress-lb\nkube-system   traefik-web-ui            ClusterIP   172.68.226.139   <none>        80/TCP                        2d19h   k8s-app=traefik-ingress-lb\n\n#查看service详情\n[root@node1 ~]# kubectl describe svc kubernetes-dashboard -n kube-system\n\n#删除service\n[root@node1 ~]# kubectl delete svc kubernetes-dashboard -n kube-system\nservice \"kubernetes-dashboard\" deleted\n```\n\n## 3、查看Service对应的后端节点\n\n```\n#查看kubernetes-dashboard\n[root@node1 ~]# kubectl describe svc kubernetes-dashboard -n kube-system\n\n#查看服务my-mc-service\n[root@node1 ~]# kubectl describe svc my-mc-service -n default\nName:              my-mc-service\nNamespace:         default\nLabels:            <none>\nAnnotations:       kubectl.kubernetes.io/last-applied-configuration:\n                     {\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"name\":\"my-mc-service\",\"namespace\":\"default\"},\"spec\":{\"ports\":[{\"name\":\"m...\nSelector:          app=products,department=sales\nType:              ClusterIP\nIP:                172.68.113.166\nPort:              my-first-port  60001/TCP\nTargetPort:        50001/TCP\nEndpoints:         172.20.1.209:50001,172.20.2.206:50001,172.20.2.208:50001\nPort:              my-second-port  60002/TCP\nTargetPort:        50002/TCP\nEndpoints:         172.20.1.209:50002,172.20.2.206:50002,172.20.2.208:50002  ---发现这个service有这3个后端\nSession Affinity:  None\nEvents:            <none>\n```\n\n## 4、Dashboard运行在哪个节点\n```\n#发现Dashboard是运行在node3节点\n[root@linux-node1 ~]# kubectl get pod -n kube-system -o wide\nNAME                                    READY     STATUS    RESTARTS   AGE       IP          NODE\nkubernetes-dashboard-66c9d98865-bqwl5   1/1       Running   0          1h        10.2.76.3   192.168.56.13\n\n#查看Dashboard运行日志\n[root@linux-node1 ~]# kubectl logs pod/kubernetes-dashboard-66c9d98865-bqwl5 -n kube-system\n\n#查看Dashboard服务IP(可以访问任意node节点的34696端口就可以访问到Dashboard页面 https://192.168.56.13:34696/#!/overview?namespace=default,如何master节点安装了kube-proxy也可以访问)\n[root@linux-node1 ~]# kubectl get service -n kube-system\nNAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE\nkubernetes-dashboard   NodePort   10.1.36.42   <none>        443:34696/TCP   1h\n\n```\nhttps://192.168.56.13:34696/#!/overview?namespace=default\n\n  ![dashboard登录](https://github.com/Lancger/opsfull/blob/master/images/Dashboard-login.jpg)\n\n\n## 访问Dashboard\n\nhttps://192.168.56.11:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy\n用户名:admin  密码：admin 选择令牌模式登录。\n\n### 获取Token\n```\n[root@linux-node1 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')\nName:         admin-user-token-c97bl\nNamespace:    kube-system\nLabels:       <none>\nAnnotations:  kubernetes.io/service-account.name=admin-user\n              kubernetes.io/service-account.uid=379208ff-cb86-11e8-9f1c-080027dc9cd8\n\nType:  kubernetes.io/service-account-token\n\nData\n====\nca.crt:     1359 bytes\nnamespace:  11 bytes\ntoken:      
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWM5N2JsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNzkyMDhmZi1jYjg2LTExZTgtOWYxYy0wODAwMjdkYzljZDgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.LopL7AD9feBZmhAuAUlPNjfthlJ1lJAPG6VXgBl-MZdofZpqNU9m-o-7M4hHa5AXkpeLvQrA1UKWWSR9eWEN06ugIkcH4Pk-tKrSVQUM6CDaE7eBdK91x1ltTonLz62_z_X8IvRYx1piv3wRUijoyRHCdziBnOhg67sT974CSPoRSOpl7ZR0Kn_L0LYRMOE9xfU3w4-sCpSx-jgc5oysAix95NqZgIkaZ6TRANpCnHE66fqL6yUwQxQ5yt7pw7J2iuSE3OxPU_cKArjYlWUvr72zG3SxZaR7dzQEggwmjSSeHRs0OK0968QAtCca1NTmcPaTtKhXYfXXdtusVCx7bA\n```\n  ![dashboard预览](https://github.com/Lancger/opsfull/blob/master/images/Dashboard.jpg)\n"
  },
  {
    "path": "docs/delete.md",
    "content": "```\n#master\nsystemctl restart kube-scheduler\nsystemctl restart kube-controller-manager\nsystemctl restart kube-apiserver\nsystemctl restart flannel\nsystemctl restart etcd\nsystemctl restart docker\n\n\nsystemctl stop kube-scheduler\nsystemctl stop kube-controller-manager\nsystemctl stop kube-apiserver\nsystemctl stop flannel\nsystemctl stop etcd\nsystemctl stop docker\n\n#node\nsystemctl restart kubelet\nsystemctl restart kube-proxy\nsystemctl restart flannel\nsystemctl restart etcd\nsystemctl restart docker\n\n\nsystemctl stop kubelet\nsystemctl stop kube-proxy\nsystemctl stop flannel\nsystemctl stop etcd\nsystemctl stop docker\n\n```\n\n```\n# 清理k8s集群\nrm -rf /var/lib/etcd/\nrm -rf /var/lib/docker\nrm -rf /opt/containerd\nrm -rf /opt/kubernetes\nrm -rf /var/lib/kubelet\nrm -rf /var/lib/chrony\nrm -rf /var/lib/kube-proxy\nrm -rf /srv/*\n\n\nsystemctl disable kube-scheduler\nsystemctl disable kube-controller-manager\nsystemctl disable kube-apiserver\nsystemctl disable flannel\nsystemctl disable etcd\nsystemctl disable docker\n\nsystemctl disable kubelet\nsystemctl disable kube-proxy\nsystemctl disable flannel\nsystemctl disable etcd\nsystemctl disable docker\n\n```\n"
  },
  {
    "path": "docs/docker-install.md",
    "content": "# study_docker\n\n## 0.卸载旧版本\n```bash\nyum remove -y docker \\\ndocker-client \\\ndocker-client-latest \\\ndocker-common \\\ndocker-latest \\\ndocker-latest-logrotate \\\ndocker-logrotate \\\ndocker-selinux \\\ndocker-engine-selinux \\\ndocker-engine\n```\n\n## 1.安装Docker\n\n第一步：使用国内Docker源\n```\ncd /etc/yum.repos.d/\nwget -O docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo\n\n#或\nyum -y install yum-utils\nyum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo\n\nyum install -y yum-utils \\\ndevice-mapper-persistent-data \\\nlvm2\n ```\n\n第二步：Docker安装：\n```\nyum install -y docker-ce\n```\n\n第三步：启动后台进程：\n```bash\n#启动docker服务\nsystemctl restart docker\n\n#设置docker服务开启自启\nsystemctl enable docker\n\n#Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.\n\n#查看是否成功设置docker服务开启自启\nsystemctl list-unit-files|grep docker\n\ndocker.service                                enabled\n\n#关闭docker服务开启自启\nsystemctl disable docker\n\n#Removed symlink /etc/systemd/system/multi-user.target.wants/docker.service.\n```\n\n## 2.脚本安装Docker\n```bash\n#2.1、Docker官方安装脚本\ncurl -sSL https://get.docker.com/ | sh\n\n#这个脚本会添加docker.repo仓库并且安装Docker\n\n#2.2、阿里云的安装脚本\ncurl -sSL http://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/docker-engine/internet | sh -\n\n#2.3、DaoCloud 的安装脚本\ncurl -sSL https://get.daocloud.io/docker | sh\n\n```\n\n### 3.Docker服务文件\n```bash\n# Docker从1.13版本开始调整了默认的防火墙规则，禁用了iptables filter表中FOWARD链，这样会引起Kubernetes集群中跨Node的Pod无法通信，执行下面命令\n#注意，有变量的地方需要使用转义符号\n\ncat > /usr/lib/systemd/system/docker.service << EOF\n[Unit]\nDescription=Docker Application Container Engine\nDocumentation=https://docs.docker.com\nBindsTo=containerd.service\nAfter=network-online.target firewalld.service containerd.service\nWants=network-online.target\nRequires=docker.socket\n\n[Service]\nType=notify\n# the default is not to use systemd for cgroups because the delegate issues still\n# exists and systemd currently does not support the cgroup feature set required\n# for containers run by docker\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd\nExecReload=/bin/kill -s HUP \\$MAINPID\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\nTimeoutSec=0\nRestartSec=2\nRestart=always\n\n# Note that StartLimit* options were moved from \"Service\" to \"Unit\" in systemd 229.\n# Both the old, and new location are accepted by systemd 229 and up, so using the old location\n# to make them work for either version of systemd.\nStartLimitBurst=3\n\n# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.\n# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make\n# this option work for either version of systemd.\nStartLimitInterval=60s\n\n# Having non-zero Limit*s causes performance problems due to accounting overhead\n# in the kernel. 
We recommend using cgroups to do container-local accounting.\nLimitNOFILE=infinity\nLimitNPROC=infinity\nLimitCORE=infinity\n\n# Comment TasksMax if your systemd version does not support it.\n# Only systemd 226 and above support this option.\nTasksMax=infinity\n\n# set delegate yes so that systemd does not reset the cgroups of docker containers\nDelegate=yes\n\n# kill only the docker process, not all processes in the cgroup\nKillMode=process\n\n[Install]\nWantedBy=multi-user.target\nEOF\n```\n## 3.1、配置docker加速器\n```bash\nmkdir -p /data0/docker-data\n\ncat > /etc/docker/daemon.json << \\EOF\n{\n  \"exec-opts\": [\"native.cgroupdriver=systemd\"],\n  \"data-root\": \"/data0/docker-data\",\n  \"registry-mirrors\" : [\n    \"https://ot2k4d59.mirror.aliyuncs.com/\"\n  ],\n  \"insecure-registries\": [\"reg.hub.com\"]\n}\nEOF\n\n或者\ncurl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io\n```\n\n### 3.2、重新加载docker的配置文件\n```bash\nsystemctl daemon-reload\nsystemctl restart docker\n```\n### 3.3、内核参数配置\n```bash\n#编辑文件\nvim /etc/sysctl.conf\n\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\n\n#然后执行\nsysctl -p\n\n#查看docker信息是否生效\ndocker info\n```\n\n## 4.通过测试镜像运行一个容器来验证Docker是否安装正确\n```bash\ndocker run hello-world\n```\n"
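\n## 5. Verify the daemon configuration\n\nAfter the restart, it is worth confirming that daemon.json took effect; a minimal check whose expected values follow from the configuration above (Cgroup Driver: systemd, Docker Root Dir: /data0/docker-data, plus the registry mirror):\n```bash\ndocker info | grep -iE 'cgroup driver|docker root dir|registry'\n```\n"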
  },
  {
    "path": "docs/etcd-install.md",
    "content": "\n# 手动部署ETCD集群\n\n## 0.准备etcd软件包\n```\n[root@linux-node1 src]# wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz\n[root@linux-node1 src]# tar zxf etcd-v3.2.18-linux-amd64.tar.gz\n[root@linux-node1 src]# cd etcd-v3.2.18-linux-amd64\n[root@linux-node1 etcd-v3.2.18-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/ \n[root@linux-node1 etcd-v3.2.18-linux-amd64]# scp etcd etcdctl 192.168.56.12:/opt/kubernetes/bin/\n[root@linux-node1 etcd-v3.2.18-linux-amd64]# scp etcd etcdctl 192.168.56.13:/opt/kubernetes/bin/\n```\n\n## 1.创建 etcd 证书签名请求：\n```\n#约定所有证书都放在 /usr/local/src/ssl 目录中，然后同步到其他机器\n\n[root@linux-node1 ~]# cd /usr/local/src/ssl\n[root@linux-node1 ssl]# \ncat > etcd-csr.json <<EOF\n{\n  \"CN\": \"etcd\",\n  \"hosts\": [\n    \"127.0.0.1\",\n    \"192.168.56.11\",\n    \"192.168.56.12\",\n    \"192.168.56.13\"\n  ],\n  \"key\": {\n    \"algo\": \"rsa\",\n    \"size\": 2048\n  },\n  \"names\": [\n    {\n      \"C\": \"CN\",\n      \"ST\": \"BeiJing\",\n      \"L\": \"BeiJing\",\n      \"O\": \"k8s\",\n      \"OU\": \"System\"\n    }\n  ]\n}\nEOF\n[root@linux-node1 ssl]#\n```\n\n## 2.生成 etcd 证书和私钥：\n```\n[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \\\n  -ca-key=/opt/kubernetes/ssl/ca-key.pem \\\n  -config=/opt/kubernetes/ssl/ca-config.json \\\n  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd\n  \n2018/10/08 19:26:26 [INFO] generate received request\n2018/10/08 19:26:26 [INFO] received CSR\n2018/10/08 19:26:26 [INFO] generating key: rsa-2048\n2018/10/08 19:26:26 [INFO] encoded CSR\n2018/10/08 19:26:26 [INFO] signed certificate with serial number 674737706082810466537199547419623349216126693730\n2018/10/08 19:26:26 [WARNING] This certificate lacks a \"hosts\" field. This makes it unsuitable for\nwebsites. For more information see the Baseline Requirements for the Issuance and Management\nof Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);\nspecifically, section 10.2.3 (\"Information Requirements\").\n[root@linux-node1 ssl]#\n\n#会生成以下证书文件\n[root@linux-node1 ssl]# ls -l etcd*\n-rw-r--r--. 1 root root 1062 Oct  8 19:26 etcd.csr\n-rw-r--r--. 1 root root  299 Oct  8 19:24 etcd-csr.json\n-rw-------. 1 root root 1675 Oct  8 19:26 etcd-key.pem\n-rw-r--r--. 
1 root root 1436 Oct  8 19:26 etcd.pem\n```\n\n## 3.将证书移动到/opt/kubernetes/ssl目录下\n```\n[root@linux-node1 ssl]# cp etcd*.pem /opt/kubernetes/ssl\n[root@linux-node1 ssl]# scp etcd*.pem 192.168.56.12:/opt/kubernetes/ssl\n[root@linux-node1 ssl]# scp etcd*.pem 192.168.56.13:/opt/kubernetes/ssl\n[root@linux-node1 ssl]# rm -f etcd.csr etcd-csr.json\n```\n\n## 4.设置ETCD配置文件\n```\n#注意修改项（集群不同的地方）\n##################################################\nETCD_NAME=\"etcd-node1\"\nETCD_LISTEN_PEER_URLS=\"https://192.168.56.11:2380\"   --2380集群监听的端口\nETCD_LISTEN_CLIENT_URLS=\"https://192.168.56.11:2379,https://127.0.0.1:2379\"   --2379客户端监听的端口\n##################################################\n\n[root@linux-node1 ~]# vim /opt/kubernetes/cfg/etcd.conf\n#[member]\nETCD_NAME=\"etcd-node1\"\nETCD_DATA_DIR=\"/var/lib/etcd/default.etcd\"\n#ETCD_SNAPSHOT_COUNTER=\"10000\"\n#ETCD_HEARTBEAT_INTERVAL=\"100\"\n#ETCD_ELECTION_TIMEOUT=\"1000\"\nETCD_LISTEN_PEER_URLS=\"https://192.168.56.11:2380\"\nETCD_LISTEN_CLIENT_URLS=\"https://192.168.56.11:2379,https://127.0.0.1:2379\"\n#ETCD_MAX_SNAPSHOTS=\"5\"\n#ETCD_MAX_WALS=\"5\"\n#ETCD_CORS=\"\"\n#[cluster]\nETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://192.168.56.11:2380\"\n# if you use different ETCD_NAME (e.g. test),\n# set ETCD_INITIAL_CLUSTER value for this name, i.e. \"test=http://...\"\nETCD_INITIAL_CLUSTER=\"etcd-node1=https://192.168.56.11:2380,etcd-node2=https://192.168.56.12:2380,etcd-node3=https://192.168.56.13:2380\"\nETCD_INITIAL_CLUSTER_STATE=\"new\"\nETCD_INITIAL_CLUSTER_TOKEN=\"k8s-etcd-cluster\"\nETCD_ADVERTISE_CLIENT_URLS=\"https://192.168.56.11:2379\"\n#[security]\nCLIENT_CERT_AUTH=\"true\"\nETCD_CA_FILE=\"/opt/kubernetes/ssl/ca.pem\"\nETCD_CERT_FILE=\"/opt/kubernetes/ssl/etcd.pem\"\nETCD_KEY_FILE=\"/opt/kubernetes/ssl/etcd-key.pem\"\nPEER_CLIENT_CERT_AUTH=\"true\"\nETCD_PEER_CA_FILE=\"/opt/kubernetes/ssl/ca.pem\"\nETCD_PEER_CERT_FILE=\"/opt/kubernetes/ssl/etcd.pem\"\nETCD_PEER_KEY_FILE=\"/opt/kubernetes/ssl/etcd-key.pem\"\n```\n\n## 5.创建ETCD系统服务\n```\n[root@linux-node1 ~]# vim /etc/systemd/system/etcd.service\n[Unit]\nDescription=Etcd Server\nAfter=network.target\n\n[Service]\nType=simple\nWorkingDirectory=/var/lib/etcd\nEnvironmentFile=-/opt/kubernetes/cfg/etcd.conf\n# set GOMAXPROCS to number of processors\nExecStart=/bin/bash -c \"GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd\"\nType=notify\n\n[Install]\nWantedBy=multi-user.target\n```\n\n## 6.重新加载系统服务\n```\n[root@linux-node1 ~]# scp /opt/kubernetes/cfg/etcd.conf 192.168.56.12:/opt/kubernetes/cfg/\n[root@linux-node1 ~]# scp /etc/systemd/system/etcd.service 192.168.56.12:/etc/systemd/system/\n[root@linux-node1 ~]# scp /opt/kubernetes/cfg/etcd.conf 192.168.56.13:/opt/kubernetes/cfg/\n[root@linux-node1 ~]# scp /etc/systemd/system/etcd.service 192.168.56.13:/etc/systemd/system/\n\n[root@linux-node1 ~]# systemctl daemon-reload\n[root@linux-node1 ~]# systemctl enable etcd\n\n#在所有节点上创建etcd存储目录并启动etcd\nmkdir /var/lib/etcd\nsystemctl start etcd\nsystemctl status etcd\n\nsystemctl daemon-reload\nsystemctl enable etcd\n\n```\n## 7.修改集群其他节点差异配置文件\n```\n#node2配置文件\n[root@linux-node2 src]# cat 
/opt/kubernetes/cfg/etcd.conf\n#[member]\nETCD_NAME=\"etcd-node2\"\nETCD_DATA_DIR=\"/var/lib/etcd/default.etcd\"\n#ETCD_SNAPSHOT_COUNTER=\"10000\"\n#ETCD_HEARTBEAT_INTERVAL=\"100\"\n#ETCD_ELECTION_TIMEOUT=\"1000\"\nETCD_LISTEN_PEER_URLS=\"https://192.168.56.12:2380\"\nETCD_LISTEN_CLIENT_URLS=\"https://192.168.56.12:2379,https://127.0.0.1:2379\"\n#ETCD_MAX_SNAPSHOTS=\"5\"\n#ETCD_MAX_WALS=\"5\"\n#ETCD_CORS=\"\"\n#[cluster]\nETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://192.168.56.12:2380\"\n# if you use different ETCD_NAME (e.g. test),\n# set ETCD_INITIAL_CLUSTER value for this name, i.e. \"test=http://...\"\nETCD_INITIAL_CLUSTER=\"etcd-node1=https://192.168.56.11:2380,etcd-node2=https://192.168.56.12:2380,etcd-node3=https://192.168.56.13:2380\"\nETCD_INITIAL_CLUSTER_STATE=\"new\"\nETCD_INITIAL_CLUSTER_TOKEN=\"k8s-etcd-cluster\"\nETCD_ADVERTISE_CLIENT_URLS=\"https://192.168.56.12:2379\"\n#[security]\nCLIENT_CERT_AUTH=\"true\"\nETCD_CA_FILE=\"/opt/kubernetes/ssl/ca.pem\"\nETCD_CERT_FILE=\"/opt/kubernetes/ssl/etcd.pem\"\nETCD_KEY_FILE=\"/opt/kubernetes/ssl/etcd-key.pem\"\nPEER_CLIENT_CERT_AUTH=\"true\"\nETCD_PEER_CA_FILE=\"/opt/kubernetes/ssl/ca.pem\"\nETCD_PEER_CERT_FILE=\"/opt/kubernetes/ssl/etcd.pem\"\nETCD_PEER_KEY_FILE=\"/opt/kubernetes/ssl/etcd-key.pem\"\n\n#node3配置文件\n[root@linux-node3 src]# cat /opt/kubernetes/cfg/etcd.conf\n#[member]\nETCD_NAME=\"etcd-node3\"\nETCD_DATA_DIR=\"/var/lib/etcd/default.etcd\"\n#ETCD_SNAPSHOT_COUNTER=\"10000\"\n#ETCD_HEARTBEAT_INTERVAL=\"100\"\n#ETCD_ELECTION_TIMEOUT=\"1000\"\nETCD_LISTEN_PEER_URLS=\"https://192.168.56.13:2380\"\nETCD_LISTEN_CLIENT_URLS=\"https://192.168.56.13:2379,https://127.0.0.1:2379\"\n#ETCD_MAX_SNAPSHOTS=\"5\"\n#ETCD_MAX_WALS=\"5\"\n#ETCD_CORS=\"\"\n#[cluster]\nETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://192.168.56.13:2380\"\n# if you use different ETCD_NAME (e.g. test),\n# set ETCD_INITIAL_CLUSTER value for this name, i.e. \"test=http://...\"\nETCD_INITIAL_CLUSTER=\"etcd-node1=https://192.168.56.11:2380,etcd-node2=https://192.168.56.12:2380,etcd-node3=https://192.168.56.13:2380\"\nETCD_INITIAL_CLUSTER_STATE=\"new\"\nETCD_INITIAL_CLUSTER_TOKEN=\"k8s-etcd-cluster\"\nETCD_ADVERTISE_CLIENT_URLS=\"https://192.168.56.13:2379\"\n#[security]\nCLIENT_CERT_AUTH=\"true\"\nETCD_CA_FILE=\"/opt/kubernetes/ssl/ca.pem\"\nETCD_CERT_FILE=\"/opt/kubernetes/ssl/etcd.pem\"\nETCD_KEY_FILE=\"/opt/kubernetes/ssl/etcd-key.pem\"\nPEER_CLIENT_CERT_AUTH=\"true\"\nETCD_PEER_CA_FILE=\"/opt/kubernetes/ssl/ca.pem\"\nETCD_PEER_CERT_FILE=\"/opt/kubernetes/ssl/etcd.pem\"\nETCD_PEER_KEY_FILE=\"/opt/kubernetes/ssl/etcd-key.pem\"\n```\n\n下面需要大家在所有的 etcd 节点重复上面的步骤，直到所有机器的 etcd 服务都已启动。\n\n## 8.验证集群\n```\n[root@linux-node1 ~]# etcdctl --endpoints=https://192.168.56.11:2379 \\\n  --ca-file=/opt/kubernetes/ssl/ca.pem \\\n  --cert-file=/opt/kubernetes/ssl/etcd.pem \\\n  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health\nmember 435fb0a8da627a4c is healthy: got healthy result from https://192.168.56.12:2379\nmember 6566e06d7343e1bb is healthy: got healthy result from https://192.168.56.11:2379\nmember ce7b884e428b6c8c is healthy: got healthy result from https://192.168.56.13:2379\ncluster is healthy\n```\n"
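\n## 9. Health check via the v3 API (optional)\n\nThe cluster-health command above talks to the etcd v2 API. etcd 3.2 also serves the v3 API; an equivalent health check, assuming the same certificate paths, is:\n```\nETCDCTL_API=3 /opt/kubernetes/bin/etcdctl \\\n  --endpoints=https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \\\n  --cacert=/opt/kubernetes/ssl/ca.pem \\\n  --cert=/opt/kubernetes/ssl/etcd.pem \\\n  --key=/opt/kubernetes/ssl/etcd-key.pem endpoint health\n```\n"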
  },
  {
    "path": "docs/flannel.md",
    "content": "1.为Flannel生成证书\n```\n[root@linux-node1 ~]# cd /usr/local/src/ssl/\n[root@linux-node1 ssl]#\ncat > flanneld-csr.json <<EOF\n{\n  \"CN\": \"flanneld\",\n  \"hosts\": [],\n  \"key\": {\n    \"algo\": \"rsa\",\n    \"size\": 2048\n  },\n  \"names\": [\n    {\n      \"C\": \"CN\",\n      \"ST\": \"BeiJing\",\n      \"L\": \"BeiJing\",\n      \"O\": \"k8s\",\n      \"OU\": \"System\"\n    }\n  ]\n}\nEOF\n```\n\n2.生成证书\n```\n[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \\\n   -ca-key=/opt/kubernetes/ssl/ca-key.pem \\\n   -config=/opt/kubernetes/ssl/ca-config.json \\\n   -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld\n```\n\n3.分发证书\n```\n[root@linux-node1 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/\n[root@linux-node1 ssl]# scp flanneld*.pem 192.168.56.12:/opt/kubernetes/ssl/\n[root@linux-node1 ssl]# scp flanneld*.pem 192.168.56.13:/opt/kubernetes/ssl/\n```\n\n4.下载Flannel软件包\n```\n[root@linux-node1 ~]# cd /usr/local/src\n\n# wget\n https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz\n \n[root@linux-node1 src]# tar zxf flannel-v0.10.0-linux-amd64.tar.gz\n[root@linux-node1 src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/\n\n#复制到linux-node2节点\n[root@linux-node1 src]# scp flanneld mk-docker-opts.sh 192.168.56.12:/opt/kubernetes/bin/\n[root@linux-node1 src]# scp flanneld mk-docker-opts.sh 192.168.56.13:/opt/kubernetes/bin/\n\n#复制对应脚本到/opt/kubernetes/bin目录下。\n[root@linux-node1 ~]# cd /usr/local/src/kubernetes/cluster/centos/node/bin/\n[root@linux-node1 bin]# cp remove-docker0.sh /opt/kubernetes/bin/\n[root@linux-node1 bin]# scp remove-docker0.sh 192.168.56.12:/opt/kubernetes/bin/\n[root@linux-node1 bin]# scp remove-docker0.sh 192.168.56.13:/opt/kubernetes/bin/\n```\n\n5.配置Flannel\n```\n[root@linux-node1 ~]# vim /opt/kubernetes/cfg/flannel\nFLANNEL_ETCD=\"-etcd-endpoints=https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379\"\nFLANNEL_ETCD_KEY=\"-etcd-prefix=/kubernetes/network\"\nFLANNEL_ETCD_CAFILE=\"--etcd-cafile=/opt/kubernetes/ssl/ca.pem\"\nFLANNEL_ETCD_CERTFILE=\"--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem\"\nFLANNEL_ETCD_KEYFILE=\"--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem\"\n\n#复制配置到其它节点上\n[root@linux-node1 ~]# scp /opt/kubernetes/cfg/flannel 192.168.56.12:/opt/kubernetes/cfg/\n[root@linux-node1 ~]# scp /opt/kubernetes/cfg/flannel 192.168.56.13:/opt/kubernetes/cfg/\n```\n\n6.设置Flannel系统服务\n```\n[root@linux-node1 ~]# vim /usr/lib/systemd/system/flannel.service\n[Unit]\nDescription=Flanneld overlay address etcd agent\nAfter=network.target\nBefore=docker.service\n\n[Service]\nEnvironmentFile=-/opt/kubernetes/cfg/flannel\nExecStartPre=/opt/kubernetes/bin/remove-docker0.sh\nExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}\nExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker\n\nType=notify\n\n[Install]\nWantedBy=multi-user.target\nRequiredBy=docker.service\n\n#复制系统服务脚本到其它节点上\n[root@linux-node1 ~]# scp /usr/lib/systemd/system/flannel.service 192.168.56.12:/usr/lib/systemd/system/\n[root@linux-node1 ~]# scp /usr/lib/systemd/system/flannel.service 192.168.56.13:/usr/lib/systemd/system/\n```\n\n## Flannel CNI集成\n下载CNI插件\n```\nhttps://github.com/containernetworking/plugins/releases\n\n[root@linux-node1 ~]# cd /usr/local/src/\n[root@linux-node1 src]# wget 
https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz\n[root@linux-node1 src]# mkdir /opt/kubernetes/bin/cni\n[root@linux-node1 src]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni\n[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* 192.168.56.12:/opt/kubernetes/bin/cni/\n[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* 192.168.56.13:/opt/kubernetes/bin/cni/\n```\n\n创建Etcd的key\n```\n/opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \\\n      --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \\\nmk /kubernetes/network/config '{ \"Network\": \"10.2.0.0/16\", \"Backend\": { \"Type\": \"vxlan\", \"VNI\": 1 }}' >/dev/null 2>&1\n```\n\n启动flannel\n```\n[root@linux-node1 ~]# systemctl daemon-reload\n[root@linux-node1 ~]# systemctl enable flannel\n[root@linux-node1 ~]# chmod +x /opt/kubernetes/bin/*\n[root@linux-node1 ~]# systemctl start flannel\n```\n\n查看服务状态\n```\n[root@linux-node1 ~]# systemctl status flannel\n```\n\n## 配置Docker使用Flannel\n```\n[root@linux-node1 ~]# vim /usr/lib/systemd/system/docker.service\n[Unit] #在Unit下面修改After和增加Requires\nAfter=network-online.target flannel.service\nWants=network-online.target\nRequires=flannel.service\n\n[Service] #增加EnvironmentFile=-/run/flannel/docker\nType=notify\nEnvironmentFile=-/run/flannel/docker\nExecStart=/usr/bin/dockerd $DOCKER_OPTS\n\n#最终配置\ncat /usr/lib/systemd/system/docker.service\n[Unit]\nDescription=Docker Application Container Engine\nDocumentation=http://docs.docker.com\nAfter=network.target flannel.service\nRequires=flannel.service\n\n[Service]\nType=notify\nEnvironmentFile=-/run/flannel/docker\nEnvironmentFile=-/opt/kubernetes/cfg/docker\nExecStart=/usr/bin/dockerd $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPTS\nLimitNOFILE=1048576\nLimitNPROC=1048576\nExecReload=/bin/kill -s HUP $MAINPID\n# Having non-zero Limit*s causes performance problems due to accounting overhead\n# in the kernel. We recommend using cgroups to do container-local accounting.\nLimitNOFILE=infinity\nLimitNPROC=infinity\nLimitCORE=infinity\n# Uncomment TasksMax if your systemd version supports it.\n# Only systemd 226 and above support this version.\n#TasksMax=infinity\nTimeoutStartSec=0\n# set delegate yes so that systemd does not reset the cgroups of docker containers\nDelegate=yes\n# kill only the docker process, not all processes in the cgroup\nKillMode=process\n# restart the docker process if it exits prematurely\nRestart=on-failure\nStartLimitBurst=3\nStartLimitInterval=60s\n\n[Install]\nWantedBy=multi-user.target\n```\n\n将配置复制到另外两个阶段\n```\n[root@linux-node1 ~]# scp /usr/lib/systemd/system/docker.service 192.168.56.12:/usr/lib/systemd/system/\n[root@linux-node1 ~]# scp /usr/lib/systemd/system/docker.service 192.168.56.13:/usr/lib/systemd/system/\n```\n\n重启Docker\n```\nsystemctl daemon-reload\nsystemctl restart docker\n```\n"
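\nTo confirm that Docker actually picked up the flannel subnet, a quick check on any node; the flannel.1 interface name follows from the vxlan backend configured above.\n```\n#Options written by mk-docker-opts.sh; dockerd is started with this bip/mtu\ncat /run/flannel/docker\n\n#The vxlan interface created by flanneld, and docker0 now inside 10.2.0.0/16\nip -d link show flannel.1\nip addr show docker0\n```\n"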
  },
  {
    "path": "docs/k8s-error-resolution.md",
    "content": "## 报错一：flanneld 启动不了\n```\nOct 10 10:42:19 linux-node1 flanneld: E1010 10:42:19.499080    1816 main.go:349] Couldn't fetch network config: 100: Key not found (/coreos.com) [11]\n```\n## 解决办法：\n```\n#首先查看flannel使用的那种类型的网络模式是对应的etcd中的key是哪个（/kubernetes/network/config 或 /coreos.com/network ）\n[root@linux-node3 cfg]# cat /opt/kubernetes/cfg/flannel\nFLANNEL_ETCD=\"-etcd-endpoints=https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379\"\nFLANNEL_ETCD_KEY=\"-etcd-prefix=/coreos.com/network\"   ----这个参数值\nFLANNEL_ETCD_CAFILE=\"--etcd-cafile=/opt/kubernetes/ssl/ca.pem\"\nFLANNEL_ETCD_CERTFILE=\"--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem\"\nFLANNEL_ETCD_KEYFILE=\"--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem\"\n\n#etcd集群集群执行下面命令，清空etcd数据\nrm -rf /var/lib/etcd/default.etcd/\n\n#下面这条只需在一个节点执行就可以\n#如果是/coreos.com/network则执行下面的\n[root@linux-node1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \\\n    --cert-file /opt/kubernetes/ssl/flanneld.pem \\\n    --key-file /opt/kubernetes/ssl/flanneld-key.pem \\\n    --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \\\n    mk /coreos.com/network/config '{\"Network\":\"172.17.0.0/16\"}'\n\n#如果是/kubernetes/network/config则执行下面的\n[root@linux-node1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \\\n    --cert-file /opt/kubernetes/ssl/flanneld.pem \\\n    --key-file /opt/kubernetes/ssl/flanneld-key.pem \\\n    --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \\\n    mk /kubernetes/network/config '{ \"Network\": \"10.2.0.0/16\", \"Backend\": { \"Type\": \"vxlan\", \"VNI\": 1 }}'\n```\n参考文档：https://stackoverflow.com/questions/34439659/flannel-and-docker-dont-start\n\n## 报错二：flanneld 启动不了\n```\nOct 10 11:40:11 linux-node1 flanneld: E1010 11:40:11.797324   20669 main.go:349] Couldn't fetch network config: 104: Not a directory (/kubernetes/network/config) [12]\n\n问题原因：在初次配置的时候，把flannel的配置文件中的etcd-prefix-key配置成了/kubernetes/network/config，实际上应该是/kubernetes/network\n\n[root@linux-node1 ~]# cat /opt/kubernetes/cfg/flannel\nFLANNEL_ETCD=\"-etcd-endpoints=https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379\"\nFLANNEL_ETCD_KEY=\"-etcd-prefix=/kubernetes/network/config\"    --正确的应该为 /kubernetes/network/\nFLANNEL_ETCD_CAFILE=\"--etcd-cafile=/opt/kubernetes/ssl/ca.pem\"\nFLANNEL_ETCD_CERTFILE=\"--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem\"\nFLANNEL_ETCD_KEYFILE=\"--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem\"\n\n```\n参考文档：https://www.cnblogs.com/lyzw/p/6016789.html\n"
  },
  {
    "path": "docs/k8s_pv_local.md",
    "content": "参考文档：\n\nhttps://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/  \n"
  },
  {
    "path": "docs/k8s重启pod.md",
    "content": "通过kubectl delete批量删除全部Pod\n```\nkubectl delete pod --all\n```\n\n```\n在没有pod 的yaml文件时，强制重启某个pod\n\nkubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -\n\n```\n\n```\nQ:如何进入一个 pod ？\n\nkubectl  get  pod   查看pod name\n\nkubectl describe pod    name_of_pod  查看pod详细信息\n\n进入pod:\n\n[root@test001 ~]# kubectl get pod -o wide\nNAME                                READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES\nnginx-deployment-68c7f5464c-p52rl   1/1     Running   0          17m   172.20.1.22   10.33.35.6   <none>           <none>\nnginx-deployment-68c7f5464c-qfd24   1/1     Running   0          17m   172.20.2.16   10.33.35.7   <none>           <none>\n\nkubectl exec -it name-of-pod /bin/bash\n```\n参考资料：\n\nhttps://www.jianshu.com/p/baa6b11062de\n"
  },
  {
    "path": "docs/master.md",
    "content": "## 一.部署Kubernetes API服务部署\n### 0.准备软件包\n```\n[root@linux-node1 ~]# cd /usr/local/src/kubernetes\n[root@linux-node1 kubernetes]# cp server/bin/kube-apiserver /opt/kubernetes/bin/\n[root@linux-node1 kubernetes]# cp server/bin/kube-controller-manager /opt/kubernetes/bin/\n[root@linux-node1 kubernetes]# cp server/bin/kube-scheduler /opt/kubernetes/bin/\n```\n\n### 1.创建生成CSR的 JSON 配置文件\n```\n[root@linux-node1 ~]# cd /usr/local/src/ssl\n[root@linux-node1 ssl]#\ncat > kubernetes-csr.json <<EOF\n{\n  \"CN\": \"kubernetes\",\n  \"hosts\": [\n    \"127.0.0.1\",\n    \"192.168.56.11\",\n    \"10.1.0.1\",\n    \"kubernetes\",\n    \"kubernetes.default\",\n    \"kubernetes.default.svc\",\n    \"kubernetes.default.svc.cluster\",\n    \"kubernetes.default.svc.cluster.local\"\n  ],\n  \"key\": {\n    \"algo\": \"rsa\",\n    \"size\": 2048\n  },\n  \"names\": [\n    {\n      \"C\": \"CN\",\n      \"ST\": \"BeiJing\",\n      \"L\": \"BeiJing\",\n      \"O\": \"k8s\",\n      \"OU\": \"System\"\n    }\n  ]\n}\nEOF\n```\n\n### 2.生成 kubernetes 证书和私钥\n```\n[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \\\n   -ca-key=/opt/kubernetes/ssl/ca-key.pem \\\n   -config=/opt/kubernetes/ssl/ca-config.json \\\n   -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes\n[root@linux-node1 ssl]# cp kubernetes*.pem /opt/kubernetes/ssl/\n[root@linux-node1 ssl]# scp kubernetes*.pem 192.168.56.12:/opt/kubernetes/ssl/\n[root@linux-node1 ssl]# scp kubernetes*.pem 192.168.56.13:/opt/kubernetes/ssl/\n```\n\n### 3.创建 kube-apiserver 使用的客户端 token 文件\n```\n[root@linux-node1 ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '\nad6d5bb607a186796d8861557df0d17f \n[root@linux-node1 ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv\nad6d5bb607a186796d8861557df0d17f,kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"\n```\n\n### 4.创建基础用户名/密码认证配置\n```\n[root@linux-node1 ~]# vim /opt/kubernetes/ssl/basic-auth.csv\nadmin,admin,1\nreadonly,readonly,2\n```\n\n### 5.部署Kubernetes API Server\n```\n#正常日志在 /opt/kubernetes/log 目录中查看，启动异常日志在 /var/log/messages 中查看\n\n[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service\n[Unit]\nDescription=Kubernetes API Server\nDocumentation=https://github.com/GoogleCloudPlatform/kubernetes\nAfter=network.target\n\n[Service]\nExecStart=/opt/kubernetes/bin/kube-apiserver \\\n  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \\\n  --bind-address=192.168.56.11 \\\n  --insecure-bind-address=127.0.0.1 \\\n  --authorization-mode=Node,RBAC \\\n  --runtime-config=rbac.authorization.k8s.io/v1 \\\n  --kubelet-https=true \\\n  --anonymous-auth=false \\\n  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \\\n  --enable-bootstrap-token-auth \\\n  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \\\n  --service-cluster-ip-range=10.1.0.0/16 \\\n  --service-node-port-range=20000-40000 \\\n  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \\\n  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \\\n  --client-ca-file=/opt/kubernetes/ssl/ca.pem \\\n  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\\n  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \\\n  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \\\n  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \\\n  --etcd-servers=https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \\\n  --enable-swagger-ui=true \\\n  --allow-privileged=true \\\n  --audit-log-maxage=30 \\\n  
--audit-log-maxbackup=3 \\\n  --audit-log-maxsize=100 \\\n  --audit-log-path=/opt/kubernetes/log/api-audit.log \\\n  --event-ttl=1h \\\n  --v=2 \\\n  --logtostderr=false \\\n  --log-dir=/opt/kubernetes/log\nRestart=on-failure\nRestartSec=5\nType=notify\nLimitNOFILE=65536\n\n[Install]\nWantedBy=multi-user.target\n```\n\n### 6.启动API Server服务\n```\n[root@linux-node1 ~]# systemctl daemon-reload\n[root@linux-node1 ~]# systemctl enable kube-apiserver\n[root@linux-node1 ~]# systemctl start kube-apiserver\n```\n\n查看API Server服务状态\n```\n[root@linux-node1 ~]# systemctl status kube-apiserver\n\n[root@linux-node1 ~]# netstat -ntlp\nActive Internet connections (only servers)\nProto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name\ntcp        0      0 192.168.56.11:6443      0.0.0.0:*               LISTEN      1508/kube-apiserver\ntcp        0      0 192.168.56.11:2379      0.0.0.0:*               LISTEN      987/etcd\ntcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      987/etcd\ntcp        0      0 192.168.56.11:2380      0.0.0.0:*               LISTEN      987/etcd\ntcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      1508/kube-apiserver\ntcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      985/sshd\ntcp6       0      0 :::22                   :::*                    LISTEN      985/sshd\n\n#发现 kube-apiserver 会监听2个端口一个6443（需要认证），一个本地的8080(给kube-controller-manager和kube-scheduler服务使用，不需要认证，其他的服务访问apiserver就需要认证)\n```\n\n## 二.部署Controller Manager服务\n```\n[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service\n[Unit]\nDescription=Kubernetes Controller Manager\nDocumentation=https://github.com/GoogleCloudPlatform/kubernetes\n\n[Service]\nExecStart=/opt/kubernetes/bin/kube-controller-manager \\\n  --address=127.0.0.1 \\\n  --master=http://127.0.0.1:8080 \\\n  --allocate-node-cidrs=true \\\n  --service-cluster-ip-range=10.1.0.0/16 \\\n  --cluster-cidr=10.2.0.0/16 \\\n  --cluster-name=kubernetes \\\n  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\\n  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\\n  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\\n  --root-ca-file=/opt/kubernetes/ssl/ca.pem \\\n  --leader-elect=true \\\n  --v=2 \\\n  --logtostderr=false \\\n  --log-dir=/opt/kubernetes/log\n\nRestart=on-failure\nRestartSec=5\n\n[Install]\nWantedBy=multi-user.target\n```\n\n### 3.启动Controller Manager\n```\n[root@linux-node1 ~]# systemctl daemon-reload\n[root@linux-node1 scripts]# systemctl enable kube-controller-manager\n[root@linux-node1 scripts]# systemctl start kube-controller-manager\n```\n\n### 4.查看服务状态\n```\n[root@linux-node1 scripts]# systemctl status kube-controller-manager\n```\n\n\n## 三.部署Kubernetes Scheduler\n```\n[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service\n[Unit]\nDescription=Kubernetes Scheduler\nDocumentation=https://github.com/GoogleCloudPlatform/kubernetes\n\n[Service]\nExecStart=/opt/kubernetes/bin/kube-scheduler \\\n  --address=127.0.0.1 \\\n  --master=http://127.0.0.1:8080 \\\n  --leader-elect=true \\\n  --v=2 \\\n  --logtostderr=false \\\n  --log-dir=/opt/kubernetes/log\n\nRestart=on-failure\nRestartSec=5\n\n[Install]\nWantedBy=multi-user.target\n```\n\n### 2.部署服务\n```\n[root@linux-node1 ~]# systemctl daemon-reload\n[root@linux-node1 scripts]# systemctl enable kube-scheduler\n[root@linux-node1 scripts]# systemctl start kube-scheduler\n[root@linux-node1 scripts]# 
systemctl status kube-scheduler\n```\n\n## 四.部署kubectl命令行工具(管理k8s集群的工具，跟apiserver交互，通信需要认证)\n（为了安全，只在master服务器上安装）\n\n1.准备二进制命令包\n```\n[root@linux-node1 ~]# cd /usr/local/src/kubernetes/client/bin\n[root@linux-node1 bin]# cp kubectl /opt/kubernetes/bin/\n```\n\n2.创建 admin 证书签名请求\n```\n[root@linux-node1 ~]# cd /usr/local/src/ssl/\n[root@linux-node1 ssl]# \ncat > admin-csr.json <<EOF\n{\n  \"CN\": \"admin\",\n  \"hosts\": [],\n  \"key\": {\n    \"algo\": \"rsa\",\n    \"size\": 2048\n  },\n  \"names\": [\n    {\n      \"C\": \"CN\",\n      \"ST\": \"BeiJing\",\n      \"L\": \"BeiJing\",\n      \"O\": \"system:masters\",\n      \"OU\": \"System\"\n    }\n  ]\n}\nEOF\n[root@linux-node1 ssl]#\n```\n\n3.生成 admin 证书和私钥：\n```\n[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \\\n   -ca-key=/opt/kubernetes/ssl/ca-key.pem \\\n   -config=/opt/kubernetes/ssl/ca-config.json \\\n   -profile=kubernetes admin-csr.json | cfssljson -bare admin\n[root@linux-node1 ssl]# ls -l admin*\n-rw-r--r-- 1 root root 1009 Mar  5 12:29 admin.csr\n-rw-r--r-- 1 root root  229 Mar  5 12:28 admin-csr.json\n-rw------- 1 root root 1675 Mar  5 12:29 admin-key.pem\n-rw-r--r-- 1 root root 1399 Mar  5 12:29 admin.pem\n\n[root@linux-node1 src]# cp admin*.pem /opt/kubernetes/ssl/\n```\n\n4.设置集群参数\n\napiserver通过RBAC给客户端授权，RBAC预定义了一些角色，我们需要对其进行配置\n\n```\n[root@linux-node1 src]# kubectl config set-cluster kubernetes \\\n   --certificate-authority=/opt/kubernetes/ssl/ca.pem \\\n   --embed-certs=true \\\n   --server=https://192.168.56.11:6443\nCluster \"kubernetes\" set.\n```\n\n5.设置客户端认证参数\n```\n[root@linux-node1 src]# kubectl config set-credentials admin \\\n   --client-certificate=/opt/kubernetes/ssl/admin.pem \\\n   --embed-certs=true \\\n   --client-key=/opt/kubernetes/ssl/admin-key.pem\nUser \"admin\" set.\n```\n\n6.设置上下文参数\n```\n[root@linux-node1 src]# kubectl config set-context kubernetes \\\n   --cluster=kubernetes \\\n   --user=admin\nContext \"kubernetes\" created.\n```\n\n7.设置默认上下文\n```\n[root@linux-node1 src]# kubectl config use-context kubernetes\nSwitched to context \"kubernetes\".\n\n#以上步骤4-7这么多操作，就是在当前家目录下生成了一个这个文件(如果其他节点也需要正常使用kubectl命令，需要将这个文件也同步到对应的目录)\n[root@linux-node1 ~]# cat ~/.kube/config\napiVersion: v1\nclusters:\n- cluster:\n    certificate-authority-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVYmY4ODdHRDJMWDRubnJXOWwzR05hanVwcnBnd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU0TVRBd09EQTRNalV3TUZvWERUSXpNVEF3TnpBNE1qVXdNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdXdURWFLM2xXeVJ4UDBJNjlPSnYKdy9hM3lkK0dRMW5FaWFuNVZBaFc3b25NQlF6OGJiUjRYU0E0SXJrMWJ5bTJacVNnQjduVDI2TlZvby94eDVSTAppUXJLN294SzlvZ01YUHJ4Y3NPYmxocG03eHFzMnEyUWhrOWN6YW5XTm1icnlYZ1BmbXg0NDNlZVZPaGZ6aWZ6ClRXbHozSDNpdDRvUW5YRytNSkpKZ2FhU211YUJBOVlHMUNZUGJ5Ym1JZENMZlc4ZFZOUjhyZkJkQnVGV1poRzEKMVVIS1UwNGs2ZS9uNDNJYTZ3bElNVTl2Y1Z0R1g2d0N5K3ozY291c1pqUGczakdzUEU5UC9HT2FyN2FPaVM3VQpNQXJMMkZWcnBnT3BJWmhyMFdBOW1iQjRyVS9Cb0VSUW5OWXhEQktHSWpCT1pIMTlrbmhKNlllVm5JRGphZHZGCkhRSURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVVU1wY1kzS1pCYzB0a1UyNmw4OVpUSHhMRlNFd0h3WURWUjBqQkJnd0ZvQVVVTXBjWTNLWgpCYzB0a1UyNmw4OVpUSHhMRlNFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDUWNoS05uMFNxMFIxNkJ1dFlQCmNBU2lqQnZxNHM1enZmdy9Ua29HaVVrVCtsYWI5ME0wNHJMbzRYK1prZk1WT3hIb0RBc3R5Uy9JN3ZXdU16K3EKeU82UzgwVlhBd200dDhkOEhXYlZtbStnSzFJcEE5Smg3TUJTa2VBZGsxM0FTcy90S1NpT3EwMFIwRklEWGxPWgpCd3lza0orN0FJU2prZlAvZGVXTGlhL2QzaUdISnA4UkZnb09EbWxpMWxtWklsMUQySVFJU1VCTE9GbTg0VGtxCmFtZzRscWNJdzlSM0VhT3l3YkJDeGtJaTk3T1JXT3NncVpmekR2MFUwLzhTZ0dPdis5bGJMdE95QjRwY09iRTkKZGdmWXVHMXZ2clpib01yeDFxdlhNckRDTitWZGxiZ0QrYkJ1NUxOTmpIWlZkRzdwYlc5bXZuUnV2UDVIMEFXLwpBT3M9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K\n    server: https://192.168.56.11:6443\n  name: kubernetes\ncontexts:\n- context:\n    cluster: kubernetes\n    user: admin\n  name: kubernetes\ncurrent-context: kubernetes\nkind: Config\npreferences: {}\nusers:\n- name: admin\n  user:\n    client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQzVENDQXNXZ0F3SUJBZ0lVR29PYThEaE9WQTNKajNzRkR2eUErVk9WdGhVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU0TVRBd09ERXlNekF3TUZvWERUSTRNVEF3TlRFeU16QXdNRm93YXpFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbGFVcHBibWN4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RGpBTUJnTlZCQU1UCkJXRmtiV2x1TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF0NExsNmRYQjVEMFQKZkNmN2xTR05rc2paRmNsYmFvZUlnTkhLbSs1NVI1aXZIQUN6dXBIM3J5bkEwMVE5VnE1NGRHQnRmZ0dQLzQveAp4Sy9kNnduMFVnaFZUWnRnQWVtRGVPTXhVSTVsT3ZmbURvNkwraFBCMjV6WldmSEltR1NJcXR6NWExMno5VURUCjNZdUlqNlpobWFzOGhIK2tKNCszL1FMZzlKTy9KYWFQTkgvT0pYZjNiS0N3YmxpMDBBdllQV0Mxa3NtbWxLZlIKNWdKV1lySUl2NEh2Y3plWFVqWkE4K0Nnc0hSdktOWlkybjh6RHRjZkFmSEhEY0FZTjZhVlZBODZJUytic3NlLwpLVS82T1BHbXZRNktWM05qZ1FUTml6eldUU0FhTFJyRVZiVkdraC9CRDg2QlNpbzI3aHBHeGtQekErbEJQK2xrClNLQXAzYUpyRHdJREFRQUJvMzh3ZlRBT0JnTlZIUThCQWY4RUJBTUNCYUF3SFFZRFZSMGxCQll3RkFZSUt3WUIKQlFVSEF3RUdDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBd0hRWURWUjBPQkJZRUZHR0Q1ZDhyTXlhSgpEdHhUa0pQaHNTaFk1MEZQTUI4R0ExVWRJd1FZTUJhQUZGREtYR055bVFYTkxaRk51cGZQV1V4OFN4VWhNQTBHCkNTcUdTSWIzRFFFQkN3VUFBNElCQVFCc2lHL25oRjZpUEMvTGZFR1RQaEdWejBwbVNYOEU2MVR4eVdXM2drYVYKZjB4Ry94RXBzRFRXUVhpQTBhbDRVbEJlQ1RJbFA0ZldLY0tOZS9BTXZlYkF2SnB0Q1ZjWTUrUkV0d214dnBCYwptcWRwdDhGTGdJdkNuYmN3RFppWVFNTGRkWWFRWHI3STRIeUxITDhBTm5CbmI5S3BZT0VMdFNYb2ltR1MydzFpCkJqVGgrK2lDb2htVXJFT3J4K1NjeWFadWV2L0RtczN1TFZnN1lscXVWUlJVMzNkdWJQaEx2bVJHRjB5ZjRHQXgKeUlaRCtYM0J5M2VBTzhyWW9oQVk3VXhXTDVzY0d2YjVMY3M2K0xZaUZtWmc0a3Z4dWptbm9tb0g3bFp1WGpsMApXM1NLQ0Y2Nno5YWpLMHQ1SFlDc0dOQXNmRjlsVlhOYlU1SU5peDVMbkc2NgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==\n    client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBdDRMbDZkWEI1RDBUZkNmN2xTR05rc2paRmNsYmFvZUlnTkhLbSs1NVI1aXZIQUN6CnVwSDNyeW5BMDFROVZxNTRkR0J0ZmdHUC80L3h4Sy9kNnduMFVnaFZUWnRnQWVtRGVPTXhVSTVsT3ZmbURvNkwKK2hQQjI1elpXZkhJbUdTSXF0ejVhMTJ6OVVEVDNZdUlqNlpobWFzOGhIK2tKNCszL1FMZzlKTy9KYWFQTkgvTwpKWGYzYktDd2JsaTAwQXZZUFdDMWtzbW1sS2ZSNWdKV1lySUl2NEh2Y3plWFVqWkE4K0Nnc0hSdktOWlkybjh6CkR0Y2ZBZkhIRGNBWU42YVZWQTg2SVMrYnNzZS9LVS82T1BHbXZRNktWM05qZ1FUTml6eldUU0FhTFJyRVZiVkcKa2gvQkQ4NkJTaW8yN2hwR3hrUHpBK2xCUCtsa1NLQXAzYUpyRHdJREFRQUJBb0lCQUZCbVdDYkgwVWdXL2pkeQpLUVpnaWU5YWNjbmF5Mk56OS9sQWNQMDZVUVp1UGFJT0tMQkFEWDAvMU15QjV0SFlaTXZRQjRpaVZKMktTa2w3Cko4WTNPVVRMZzl3Wmk4bXFya0JEZ2JLaWdIV0NjTmZGMmt2NVpnQzZ5bnRldEIwWVJzeGRQaVd0Q3hBVGsvOUgKaDlBdi9DamdYZ1pMQ2ZlUFB2UHAwL2N6MkJZOUVPTkhoOUt0UXNFN09zeEs0bXJOQUVPVFV3TzFtRS9vTmxjMQpUdXl3c2VETXVzZy9pZkRmeGc5V2VMbTVQUG8yT29ZWFhYcWhMd09ncDl5UCs3U3NKRk4ySmNSWGlhdWUyaTR3CitSUklva1dzSDRmM3l0cVBWYXBjSnZhWHJyYjg1TndIeFlRQzRSSWZWQmpabzRwTWN3eFpiRktkR0RocitZTDgKalRvWis2RUNnWUVBd1FZakhzcXBBL3AvbGNhZU5LbWVpZEMrUnlkQ1Rpdk5Va25Ba1c0Ym5HRkJvZGZFZGtCVQpNMWpxY1BGamdzb3l4eDZsbW1aZy9vK0dvVEFOUmVLaG1xN1o0d0NsQzUySHU4SHk1ZEU0QUpwSWZJUnl4NUhyCm1DbkxRQ1l4SDU4SnJVVFVpRkFza0VSZ2FhS3lvUExIV29jNVk5c3luckg0OEdKTzBQN2VJTEVDZ1lFQTgySTkKVHkyS3drZVRBT1NKaVpoQ3JlR0VBNktLOU1EQ1ExZzI2S0NzM2tmOTYyOXdPSTh5aTdmNEVUTHlBOHZrUHp1YgpGUmlpWitpd20xSXo5THQvei9nRjhSbFg3Z1RKYXJPOE0wQnErczZCdG1DRk5QRzFpVnBDV1AzbjgwaFc4Q0dXCjVjR3poYUR4VGZvSHJnNFZPT1c2TFZXVlFYQlBiTGJBVkZPQ043OENnWUJETlVMWFB0TTRxbWp3R3BjTldSMzEKZUhRNFRDZ2ZGY3RJNHBzbFNBUmZIOUg5YXlaaDBpWS9OcTl5b2VuM0tUWWk5TDNPaytVajNZK1A0aTVNN2dzOAowN0xVQW01MUsrV043NHNHa0NHQ3ZEV08vWU1Gai81TEhncENETW8vNjEwd01tNGFCR2h2MXc4RzJQcC9aZWtaCjBVbWZSanhLMjBjRlZBV0RhYXFvRVFLQmdEUW11ZEpzaE00cWZoSno1aERJd29qMXlNN3FsbkhwbC9iTVFUL0oKcGlFZk5nYXI0MVVMUWg1ME5rQ2hOUUNoUVBCWHVseGo0ZkQ0Q0ZmUDNuZ3pjU2pFRWFuZTcxdCtSUmFMR3VtMApoUGZuSmg1SlFtSGM1VFJnVmRVeDJ2RGpjRldXTFBwZ2JqSlZFVC9QTXJRV0ttLzlzYzRqQjQ5MUhGL0VMU1FrCm5NT0xBb0dBU2lQWWQzNS8vZ0VwMnpFK2RjMEVUVDFzY0hxYyt1dXRTQ2NxWnFiYkhpK2JlMDRUclIxVnZvOEkKcTlRSUd3SkROV3lZRm05RXByZjJpRVc2VkQzVTQ5czFORjlQTC9ENHZjMGxTS0RtaE1ReldRZDhWMUZsMnJYWApaQzZjYmhiR2tqODZQWnllTU1zNnlLaWRjZnpaYXNlRndvTmI4SVJqM2pUYWNSalZjbGM9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==\n[root@linux-node1 src]# \n```\n\n8. Use the kubectl tool\n```\n[root@linux-node1 ~]# kubectl get cs   \nNAME                 STATUS    MESSAGE             ERROR\ncontroller-manager   Healthy   ok                  \nscheduler            Healthy   ok                  \netcd-1               Healthy   {\"health\":\"true\"}   \netcd-2               Healthy   {\"health\":\"true\"}   \netcd-0               Healthy   {\"health\":\"true\"}   \n\nQuestion: why is the apiserver's health not shown here?\nAnswer: running kubectl at all means talking to the apiserver; if the command succeeds, the apiserver is necessarily healthy. If it were not, there would be no output at all.\n```\n
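\nBesides kubectl get cs, the apiserver can be probed directly over TLS with the admin certificate generated in step 3; a minimal sketch:\n```\n[root@linux-node1 ~]# curl --cacert /opt/kubernetes/ssl/ca.pem \\\n   --cert /opt/kubernetes/ssl/admin.pem \\\n   --key /opt/kubernetes/ssl/admin-key.pem \\\n   https://192.168.56.11:6443/version\n```\n"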
  },
  {
    "path": "docs/node.md",
    "content": "## 部署kubelet\n\n1.二进制包准备\n将软件包从linux-node1复制到linux-node2中去。\n```\n[root@linux-node1 ~]# cd /usr/local/src/kubernetes/server/bin/\n[root@linux-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/\n[root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.12:/opt/kubernetes/bin/\n[root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.13:/opt/kubernetes/bin/\n```\n\n2.创建角色绑定\n\nkubelet启动的时候会向kube-apiserver发送tls-bootstrap的请求，所以说需要将bootstrap的token设置为对应的角色，这样kubectl才有权限去创建请求，这个请求是怎么回事呢？kubelet起来的时候会访问apiserver,来动态获取证书。\n\n```\n[root@linux-node1 ~]# cd /usr/local/src/ssl\n[root@linux-node1 ssl]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap\nclusterrolebinding \"kubelet-bootstrap\" created\n```\n\n3.创建 kubelet bootstrapping kubeconfig 文件\n设置集群参数\n```\n[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \\\n   --certificate-authority=/opt/kubernetes/ssl/ca.pem \\\n   --embed-certs=true \\\n   --server=https://192.168.56.11:6443 \\\n   --kubeconfig=bootstrap.kubeconfig\nCluster \"kubernetes\" set.\n```\n\n设置客户端认证参数\n```\n#注意这里的token需要跟之前kube-apiserver配置的一致（/usr/lib/systemd/system/kube-apiserver.service 中 /opt/kubernetes/ssl/bootstrap-token.csv 中的一致）\n\n[root@linux-node1 ssl]# kubectl config set-credentials kubelet-bootstrap \\\n   --token=ad6d5bb607a186796d8861557df0d17f \\\n   --kubeconfig=bootstrap.kubeconfig   \nUser \"kubelet-bootstrap\" set.\n```\n\n设置上下文参数\n```\n[root@linux-node1 ssl]# kubectl config set-context default \\\n   --cluster=kubernetes \\\n   --user=kubelet-bootstrap \\\n   --kubeconfig=bootstrap.kubeconfig\nContext \"default\" created.\n```\n\n选择默认上下文\n```\n[root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig\nSwitched to context \"default\".\n\n#以上操作，就是为了生成这个文件 bootstrap.kubeconfig (需要往所有节点上拷贝过去)\n[root@linux-node1 ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg\n[root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.12:/opt/kubernetes/cfg\n[root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.13:/opt/kubernetes/cfg\n```\n\n部署kubelet\n1.设置CNI支持(k8s网络接口的插件)\n```\n[root@linux-node2 ~]# mkdir -p /etc/cni/net.d\n[root@linux-node2 ~]# \ncat > /etc/cni/net.d/10-default.conf <<EOF\n{\n        \"name\": \"flannel\",\n        \"type\": \"flannel\",\n        \"delegate\": {\n            \"bridge\": \"docker0\",\n            \"isDefaultGateway\": true,\n            \"mtu\": 1400\n        }\n}\nEOF\n[root@linux-node2 ~]# \n```\n\n2.创建kubelet目录\n```\n[root@linux-node2 ~]# mkdir /var/lib/kubelet\n```\n\n3.创建kubelet服务配置\n```\n[root@k8s-node2 ~]# vim /usr/lib/systemd/system/kubelet.service\n[Unit]\nDescription=Kubernetes Kubelet\nDocumentation=https://github.com/GoogleCloudPlatform/kubernetes\nAfter=docker.service\nRequires=docker.service\n\n[Service]\nWorkingDirectory=/var/lib/kubelet\nExecStart=/opt/kubernetes/bin/kubelet \\\n  --address=192.168.56.12 \\\n  --hostname-override=192.168.56.12 \\\n  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \\\n  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\\n  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\\n  --cert-dir=/opt/kubernetes/ssl \\\n  --network-plugin=cni \\\n  --cni-conf-dir=/etc/cni/net.d \\\n  --cni-bin-dir=/opt/kubernetes/bin/cni \\\n  --cluster-dns=10.1.0.2 \\\n  --cluster-domain=cluster.local. 
\\\n  --hairpin-mode hairpin-veth \\\n  --allow-privileged=true \\\n  --fail-swap-on=false \\\n  --logtostderr=true \\\n  --v=2 \\\n  --logtostderr=false \\\n  --log-dir=/opt/kubernetes/log\n  \nRestart=on-failure\nRestartSec=5\n\n[Install]\nWantedBy=multi-user.target\n```\n\n4.启动Kubelet\n```\n[root@linux-node2 ~]# systemctl daemon-reload\n[root@linux-node2 ~]# systemctl enable kubelet\n[root@linux-node2 ~]# systemctl restart kubelet\n```\n\n5.查看服务状态\n```\n[root@linux-node2 kubernetes]# systemctl status kubelet\n```\n\n6.查看csr请求\n注意是在linux-node1上执行。\n```\n[root@linux-node1 ~]# kubectl get csr\nNAME                                                   AGE       REQUESTOR           CONDITION\nnode-csr-0_w5F1FM_la_SeGiu3Y5xELRpYUjjT2icIFk9gO9KOU   1m        kubelet-bootstrap   Pending\n```\n\n7.批准kubelet 的 TLS 证书请求\n```\n[root@linux-node1 ~]# kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve\n```\n执行完毕后，查看节点状态已经是Ready的状态了\n```\n[root@linux-node1 ~]# kubectl get node\nNAME            STATUS   ROLES    AGE    VERSION\n192.168.56.12   Ready    <none>   103s   v1.12.1\n192.168.56.13   Ready    <none>   103s   v1.12.1\n```\n## 部署Kubernetes Proxy\n1.配置kube-proxy使用LVS\n```\n[root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack\n```\n\n2.创建 kube-proxy 证书请求\n```\n[root@linux-node1 ~]# cd /usr/local/src/ssl/\n[root@linux-node1 ssl]#\ncat > kube-proxy-csr.json <<EOF\n{\n  \"CN\": \"system:kube-proxy\",\n  \"hosts\": [],\n  \"key\": {\n    \"algo\": \"rsa\",\n    \"size\": 2048\n  },\n  \"names\": [\n    {\n      \"C\": \"CN\",\n      \"ST\": \"BeiJing\",\n      \"L\": \"BeiJing\",\n      \"O\": \"k8s\",\n      \"OU\": \"System\"\n    }\n  ]\n}\nEOF\n```\n   \n3.生成证书\n```\n[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \\\n   -ca-key=/opt/kubernetes/ssl/ca-key.pem \\\n   -config=/opt/kubernetes/ssl/ca-config.json \\\n   -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy\n```\n\n4.分发证书到所有Node节点\n```\n[root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/\n[root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.12:/opt/kubernetes/ssl/\n[root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.13:/opt/kubernetes/ssl/\n```\n\n5.创建kube-proxy配置文件\n```\n[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \\\n   --certificate-authority=/opt/kubernetes/ssl/ca.pem \\\n   --embed-certs=true \\\n   --server=https://192.168.56.11:6443 \\\n   --kubeconfig=kube-proxy.kubeconfig\nCluster \"kubernetes\" set.\n\n[root@linux-node1 ssl]# kubectl config set-credentials kube-proxy \\\n   --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \\\n   --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \\\n   --embed-certs=true \\\n   --kubeconfig=kube-proxy.kubeconfig\nUser \"kube-proxy\" set.\n\n[root@linux-node1 ssl]# kubectl config set-context default \\\n   --cluster=kubernetes \\\n   --user=kube-proxy \\\n   --kubeconfig=kube-proxy.kubeconfig\nContext \"default\" created.\n\n[root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig\nSwitched to context \"default\".\n```\n\n6.分发kubeconfig配置文件\n```\n[root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/\n[root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.12:/opt/kubernetes/cfg/\n[root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.13:/opt/kubernetes/cfg/\n```\n\n7.创建kube-proxy服务配置(在node节点上配置)\n```\n[root@linux-node2 ~]# mkdir /var/lib/kube-proxy\n\n[root@linux-node2 ~]# vim 
## Deploying the Kubernetes Proxy\n1. Configure kube-proxy to use LVS\n```\n[root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack\n```\n\n2. Create the kube-proxy certificate request\n```\n[root@linux-node1 ~]# cd /usr/local/src/ssl/\n[root@linux-node1 ssl]#\ncat > kube-proxy-csr.json <<EOF\n{\n  \"CN\": \"system:kube-proxy\",\n  \"hosts\": [],\n  \"key\": {\n    \"algo\": \"rsa\",\n    \"size\": 2048\n  },\n  \"names\": [\n    {\n      \"C\": \"CN\",\n      \"ST\": \"BeiJing\",\n      \"L\": \"BeiJing\",\n      \"O\": \"k8s\",\n      \"OU\": \"System\"\n    }\n  ]\n}\nEOF\n```\n\n3. Generate the certificate\n```\n[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \\\n   -ca-key=/opt/kubernetes/ssl/ca-key.pem \\\n   -config=/opt/kubernetes/ssl/ca-config.json \\\n   -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy\n```\n\n4. Distribute the certificate to all nodes\n```\n[root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/\n[root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.12:/opt/kubernetes/ssl/\n[root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.13:/opt/kubernetes/ssl/\n```\n\n5. Create the kube-proxy kubeconfig\n```\n[root@linux-node1 ssl]# kubectl config set-cluster kubernetes \\\n   --certificate-authority=/opt/kubernetes/ssl/ca.pem \\\n   --embed-certs=true \\\n   --server=https://192.168.56.11:6443 \\\n   --kubeconfig=kube-proxy.kubeconfig\nCluster \"kubernetes\" set.\n\n[root@linux-node1 ssl]# kubectl config set-credentials kube-proxy \\\n   --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \\\n   --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \\\n   --embed-certs=true \\\n   --kubeconfig=kube-proxy.kubeconfig\nUser \"kube-proxy\" set.\n\n[root@linux-node1 ssl]# kubectl config set-context default \\\n   --cluster=kubernetes \\\n   --user=kube-proxy \\\n   --kubeconfig=kube-proxy.kubeconfig\nContext \"default\" created.\n\n[root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig\nSwitched to context \"default\".\n```\n\n6. Distribute the kubeconfig file\n```\n[root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/\n[root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.12:/opt/kubernetes/cfg/\n[root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.13:/opt/kubernetes/cfg/\n```\n\n7. Create the kube-proxy service unit (on the node machines)\n```\n[root@linux-node2 ~]# mkdir /var/lib/kube-proxy\n\n[root@linux-node2 ~]# vim /usr/lib/systemd/system/kube-proxy.service\n[Unit]\nDescription=Kubernetes Kube-Proxy Server\nDocumentation=https://github.com/GoogleCloudPlatform/kubernetes\nAfter=network.target\n\n[Service]\nWorkingDirectory=/var/lib/kube-proxy\nExecStart=/opt/kubernetes/bin/kube-proxy \\\n  --bind-address=192.168.56.12 \\\n  --hostname-override=192.168.56.12 \\\n  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \\\n  --masquerade-all \\\n  --feature-gates=SupportIPVSProxyMode=true \\\n  --proxy-mode=ipvs \\\n  --ipvs-min-sync-period=5s \\\n  --ipvs-sync-period=5s \\\n  --ipvs-scheduler=rr \\\n  --v=2 \\\n  --logtostderr=false \\\n  --log-dir=/opt/kubernetes/log\n\nRestart=on-failure\nRestartSec=5\nLimitNOFILE=65536\n\n[Install]\nWantedBy=multi-user.target\n```\n\n8. Start the Kubernetes Proxy\n```\n[root@linux-node2 ~]# systemctl daemon-reload\n[root@linux-node2 ~]# systemctl enable kube-proxy\n[root@linux-node2 ~]# systemctl start kube-proxy\n```\n\n9. Check the service status\nCheck the kube-proxy service:\n```\n[root@linux-node2 ~]# systemctl status kube-proxy\n```\nCheck the LVS state:\n```\n[root@linux-node2 ~]# ipvsadm -L -n\nIP Virtual Server version 1.2.1 (size=4096)\nProt LocalAddress:Port Scheduler Flags\n  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn\nTCP  10.1.0.1:443 rr persistent 10800\n  -> 192.168.56.11:6443           Masq    1      0          0\n```\nIf kubelet and kube-proxy are installed on both lab machines, the following command shows their status:\n```\n[root@linux-node1 ssl]#  kubectl get node\nNAME            STATUS    ROLES     AGE       VERSION\n192.168.56.12   Ready     <none>    22m       v1.12.1\n192.168.56.13   Ready     <none>    3m        v1.12.1\n```\nDeploy linux-node3 yourself following the same steps.\n
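\nAs a convenience, the per-node file distribution above can be collapsed into one loop driven from linux-node1. This is only a sketch under the assumptions of this guide (binaries and kubeconfigs already present on linux-node1, passwordless ssh to the nodes); the unit files still need each node's own IP substituted by hand:\n```\n[root@linux-node1 ~]# for node in 192.168.56.12 192.168.56.13; do\n    scp /opt/kubernetes/bin/{kubelet,kube-proxy} ${node}:/opt/kubernetes/bin/\n    scp /opt/kubernetes/cfg/{bootstrap.kubeconfig,kube-proxy.kubeconfig} ${node}:/opt/kubernetes/cfg/\n    ssh ${node} \"mkdir -p /var/lib/kubelet /var/lib/kube-proxy /etc/cni/net.d\"\ndone\n```\n"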
  },
  {
    "path": "docs/operational.md",
    "content": "\n## 一、服务重启\n```\n#master\nsystemctl restart kube-scheduler\nsystemctl restart kube-controller-manager\nsystemctl restart kube-apiserver\nsystemctl restart flannel\nsystemctl restart etcd\n\nsystemctl stop kube-scheduler\nsystemctl stop kube-controller-manager\nsystemctl stop kube-apiserver\nsystemctl stop flannel\nsystemctl stop etcd\n\nsystemctl status kube-apiserver\nsystemctl status kube-scheduler\nsystemctl status kube-controller-manager\nsystemctl status etcd\n\n#node\nsystemctl restart kubelet\nsystemctl restart kube-proxy\nsystemctl restart flannel\nsystemctl restart etcd\n\nsystemctl stop kubelet\nsystemctl stop kube-proxy\nsystemctl stop flannel\nsystemctl stop etcd\n\nsystemctl status kubelet\nsystemctl status kube-proxy\nsystemctl status flannel\nsystemctl status etcd\n```\n\n## 二、常用查询\n```\n#查询命名空间\n[root@linux-node1 ~]# kubectl get namespace --all-namespaces\nNAME              STATUS   AGE\ndefault           Active   3d13h\nkube-node-lease   Active   3d13h\nkube-public       Active   3d13h\nkube-system       Active   3d13h\n\n#查询健康状况\n[root@linux-node1 ~]# kubectl get cs --all-namespaces\nNAME                 STATUS    MESSAGE             ERROR\ncontroller-manager   Healthy   ok\nscheduler            Healthy   ok\netcd-0               Healthy   {\"health\":\"true\"}\netcd-2               Healthy   {\"health\":\"true\"}\netcd-1               Healthy   {\"health\":\"true\"}\n\n#查询node\n[root@linux-node1 ~]# kubectl get node -o wide\nNAME            STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME\n192.168.56.12   Ready     <none>    2m        v1.10.3   <none>        CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1\n192.168.56.13   Ready     <none>    2m        v1.10.3   <none>        CentOS Linux 7 (Core)   3.10.0-862.el7.x86_64   docker://18.6.1\n\n#创建测试deployment\n[root@linux-node1 ~]# kubectl run net-test --image=alpine --replicas=2 sleep 360000\n\n#查看创建的deployment\nkubectl get deployment -o wide --all-namespaces\n\n#查询pod\n[root@linux-node1 ~]# kubectl get pod -o wide --all-namespaces\nNAME                        READY     STATUS    RESTARTS   AGE       IP          NODE\nnet-test-5767cb94df-6smfk   1/1       Running   1          1h        10.2.69.3   192.168.56.12\nnet-test-5767cb94df-ctkhz   1/1       Running   1          1h        10.2.17.3   192.168.56.13\n\n#查询service\n[root@linux-node1 ~]# kubectl get service\nNAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE\nkubernetes   ClusterIP   10.1.0.1     <none>        443/TCP   4m\n\n#Etcd集群健康状况查询\n[root@linux-node1 ~]# etcdctl --endpoints=https://192.168.56.11:2379 \\\n  --ca-file=/opt/kubernetes/ssl/ca.pem \\\n  --cert-file=/opt/kubernetes/ssl/etcd.pem \\\n  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health\n```\n\n## 三、修改POD的IP地址段\n```\n#修改一\n[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service\n[Unit]\nDescription=Kubernetes Controller Manager\nDocumentation=https://github.com/GoogleCloudPlatform/kubernetes\n\n[Service]\nExecStart=/opt/kubernetes/bin/kube-controller-manager \\\n  --address=127.0.0.1 \\\n  --master=http://127.0.0.1:8080 \\\n  --allocate-node-cidrs=true \\\n  --service-cluster-ip-range=10.1.0.0/16 \\\n  --cluster-cidr=10.2.0.0/16 \\          ---POD的IP地址段\n  --cluster-name=kubernetes \\\n  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\\n  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\\n  
## 3. Changing the Pod IP range\n```\n#Change 1: kube-controller-manager\n[root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service\n[Unit]\nDescription=Kubernetes Controller Manager\nDocumentation=https://github.com/GoogleCloudPlatform/kubernetes\n\n[Service]\nExecStart=/opt/kubernetes/bin/kube-controller-manager \\\n  --address=127.0.0.1 \\\n  --master=http://127.0.0.1:8080 \\\n  --allocate-node-cidrs=true \\\n  --service-cluster-ip-range=10.1.0.0/16 \\\n  --cluster-cidr=10.2.0.0/16 \\          ---the Pod IP range\n  --cluster-name=kubernetes \\\n  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\\n  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\\n  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\\n  --root-ca-file=/opt/kubernetes/ssl/ca.pem \\\n  --leader-elect=true \\\n  --v=2 \\\n  --logtostderr=false \\\n  --log-dir=/opt/kubernetes/log\n\nRestart=on-failure\nRestartSec=5\n\n[Install]\nWantedBy=multi-user.target\n\n#Change 2: update the key in etcd that flannel reads\n\n#Create the etcd key\n/opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \\\n      --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \\\nmk /kubernetes/network/config '{ \"Network\": \"10.2.0.0/16\", \"Backend\": { \"Type\": \"vxlan\", \"VNI\": 1 }}'\n\n#Read the key back\n/opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \\\n      --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \\\nget /kubernetes/network/config\n\n#Update the key\n/opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \\\n      --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \\\nset /kubernetes/network/config '{ \"Network\": \"10.3.0.0/16\", \"Backend\": { \"Type\": \"vxlan\", \"VNI\": 1 }}'\n```\n
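\nAfter changing the key, flannel on each node must be restarted to pick up the new range; the lease it obtains is written to a local env file. A quick verification sketch (the subnet.env path is flannel's default and assumes it was not overridden):\n```\n#Restart flannel and docker so the bridge is re-derived from the new lease\nsystemctl restart flannel docker\n\n#The allocated subnet should now fall inside the new Network range\ncat /run/flannel/subnet.env\n```\n"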
  },
  {
    "path": "docs/外部访问K8s中Pod的几种方式.md",
    "content": "```\nIngress是个什么鬼，网上资料很多（推荐官方），大家自行研究。简单来讲，就是一个负载均衡的玩意，其主要用来解决使用NodePort暴露Service的端口时Node IP会漂移的问题。同时，若大量使用NodePort暴露主机端口，管理会非常混乱。\n\n好的解决方案就是让外界通过域名去访问Service，而无需关心其Node IP及Port。那为什么不直接使用Nginx？这是因为在K8S集群中，如果每加入一个服务，我们都在Nginx中添加一个配置，其实是一个重复性的体力活，只要是重复性的体力活，我们都应该通过技术将它干掉。\n\nIngress就可以解决上面的问题，其包含两个组件Ingress Controller和Ingress：\n\nIngress\n将Nginx的配置抽象成一个Ingress对象，每添加一个新的服务只需写一个新的Ingress的yaml文件即可\n\nIngress Controller\n将新加入的Ingress转化成Nginx的配置文件并使之生效\n```\n\n参考文档：\n\nhttps://blog.csdn.net/qq_23348071/article/details/87185025  从外部访问K8s中Pod的五种方式\n"
  },
  {
    "path": "docs/虚拟机环境准备.md",
    "content": "# 一、安装环境准备\n\n下载系统镜像:可以在阿里云镜像站点下载 CentOS \n\n镜像: http://mirrors.aliyun.com/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1804.iso\n\n创建虚拟机:步骤略。\n\n# 二、操作系统安装\n 为了统一环境，保证实验的通用性，将网卡名称设置为 eth*，不使用 CentOS 7 默认的网卡命名规则。所以需要在安装的时候，增加内核参数。\n \n ## 1)光标选择“Install CentOS 7”\n \n  ![](https://github.com/Lancger/opsfull/blob/master/images/install%20centos7.png)\n\n ## 2)点击 Tab，打开 kernel 启动选项后，增加 net.ifnames=0 biosdevname=0，如下图所示。\n \n  ![](https://github.com/Lancger/opsfull/blob/master/images/change%20network.png)\n\n# 三、设置网络\n\n## 1.vmware-workstation设置网络。\n\n如果你的默认 NAT 地址段不是 192.168.56.0/24 可以修改 VMware Workstation 的配置，点击编辑 -> 虚拟 网络配置，然后进行配置。\n\n  ![](https://github.com/Lancger/opsfull/blob/master/images/vmware-network.png)\n\n## 2.virtualbox设置网络。\n\n  ![eth0](https://github.com/Lancger/opsfull/blob/master/images/virtualbox-network-eth0.jpg)\n  \n  ![eth1](https://github.com/Lancger/opsfull/blob/master/images/virtualbox-network-eth1.png)\n\n# 四、系统配置\n\n## 1.设置主机名\n```\n[root@localhost ~]# vi /etc/hostname \nlinux-node1.example.com\n或\n#修改本机hostname\n[root@localhost ~]# hostnamectl set-hostname linux-node1.example.com\n\n#让主机名修改生效\n[root@localhost ~]# su -l\nLast login: Sun Sep 30 04:30:53 EDT 2018 on pts/0\n[root@linux-node1 ~]#\n```\n\n## 2.安装依赖\n```\n#为了保证各服务器间时间一致，使用ntpdate同步时间。\n# 安装ntpdate\n[root@linux-node1 ~]# yum install -y wget lrzsz vim net-tools openssh-clients ntpdate unzip xz\n\n$ 加入crontab\n1 * * * *  (/usr/sbin/ntpdate -s ntp1.aliyun.com;/usr/sbin/hwclock -w) > /dev/null 2>&1\n1 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1\n\n#设置时区\n[root@linux-node1 ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime\n```\n\n## 3.设置 IP 地址\n  \n  请配置静态 IP 地址。注意将 UUID 和 MAC 地址已经其它配置删除掉，便于进行虚 拟机克隆，请参考下面的配置。\n```\n[root@linux-node1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0 \nTYPE=Ethernet\nBOOTPROTO=static \nNAME=eth0 \nDEVICE=eth0 \nONBOOT=yes \nIPADDR=192.168.56.11 \nNETMASK=255.255.255.0 \n#GATEWAY=192.168.56.2\n\n#重启网络服务\n[root@linux-node1 ~]# systemctl restart network\n```\n\n\n\n## 4.关闭 NetworkManager 和防火墙开启自启动\n```\n[root@linux-node1 ~]# systemctl disable firewalld \n[root@linux-node1 ~]# systemctl disable NetworkManager\n```\n\n## 5.设置主机名解析\n```\n[root@linux-node1 ~]#\ncat > /etc/hosts <<EOF\n127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n192.168.56.11 linux-node1 linux-node1.example.com\nEOF\n[root@linux-node1 ~]#\n```\n\n## 6.关闭并确认 SELinux 处于关闭状态\n```\n[root@linux-node1 ~]# vim /etc/sysconfig/selinux \nSELINUX=enforcing      #修改为 disabled\n或\n#关闭selinux\n[root@linux-node1 ~]# sed -i \"s/SELINUX=enforcing/SELINUX=disabled/g\" /etc/sysconfig/selinux\n[root@linux-node1 ~]# sed -i \"s/SELINUXTYPE=targeted/SELINUXTYPE=disabled/g\" /etc/sysconfig/selinux\n\n#使配置立即生效\n[root@linux-node1 ~]# setenforce 0 \n```\n\n## 7.其他配置\n```\n#SSH登录慢\n[root@linux-node1 ~]# sed -i \"s/#UseDNS yes/UseDNS no/\"  /etc/ssh/sshd_config\n[root@linux-node1 ~]# sed -i \"s/GSSAPIAuthentication yes/GSSAPIAuthentication no/\"  /etc/ssh/sshd_config\n[root@linux-node1 ~]# systemctl restart sshd.service\n\n###Centos7禁用ipv6###\n[root@linux-node1 ~]# vim /etc/sysctl.conf \nnet.ipv6.conf.all.disable_ipv6=1\n或\n[root@linux-node1 ~]# echo \"net.ipv6.conf.all.disable_ipv6=1\" >> /etc/sysctl.conf \n\n[root@linux-node1 ~]# sysctl -p\n```\n\n## 8.重启\n```\n[root@linux-node1 ~]# reboot\n```\n\n## 9.克隆虚拟机\n\n关闭虚拟机，并克隆当前虚拟机 linux-node1 到 linux-node2 linux-node3，建议选择“创建完整克隆”，而不是“创 建链接克隆”。\n克隆完毕后请给 
## 9. Clone the virtual machine\n\nShut the VM down and clone linux-node1 to linux-node2 and linux-node3; prefer \"create a full clone\" over \"create a linked clone\".\nAfter cloning, give linux-node2 and linux-node3 their correct IP addresses and hostnames.\n\n## 10. Snapshot the virtual machines\n\nTake a snapshot of each of the three VMs so you can return to a freshly initialized system at any time. This noticeably cuts down on environment preparation time while learning, and it keeps the lab environment consistent so all the exercises can be completed smoothly.\n
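\nThe per-clone fixups can be scripted as well; a sketch to run inside each fresh clone, assuming the node2 values from this guide (adjust NODE_NAME and NODE_IP for node3):\n```\nNODE_NAME=linux-node2.example.com\nNODE_IP=192.168.56.12\n\nhostnamectl set-hostname ${NODE_NAME}\nsed -i \"s/^IPADDR=.*/IPADDR=${NODE_IP}/\" /etc/sysconfig/network-scripts/ifcfg-eth0\nsystemctl restart network\n```\n"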
  },
  {
    "path": "example/coredns/coredns.yaml",
    "content": "apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: coredns\n  namespace: kube-system\n  labels:\n      kubernetes.io/cluster-service: \"true\"\n      addonmanager.kubernetes.io/mode: Reconcile\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  labels:\n    kubernetes.io/bootstrapping: rbac-defaults\n    addonmanager.kubernetes.io/mode: Reconcile\n  name: system:coredns\nrules:\n- apiGroups:\n  - \"\"\n  resources:\n  - endpoints\n  - services\n  - pods\n  - namespaces\n  verbs:\n  - list\n  - watch\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  annotations:\n    rbac.authorization.kubernetes.io/autoupdate: \"true\"\n  labels:\n    kubernetes.io/bootstrapping: rbac-defaults\n    addonmanager.kubernetes.io/mode: EnsureExists\n  name: system:coredns\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: system:coredns\nsubjects:\n- kind: ServiceAccount\n  name: coredns\n  namespace: kube-system\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: coredns\n  namespace: kube-system\n  labels:\n      addonmanager.kubernetes.io/mode: EnsureExists\ndata:\n  Corefile: |\n    .:53 {\n        errors\n        health\n        kubernetes cluster.local. in-addr.arpa ip6.arpa {\n            pods insecure\n            upstream\n            fallthrough in-addr.arpa ip6.arpa\n        }\n        prometheus :9153\n        proxy . /etc/resolv.conf\n        cache 30\n    }\n---\napiVersion: extensions/v1beta1\nkind: Deployment\nmetadata:\n  name: coredns\n  namespace: kube-system\n  labels:\n    k8s-app: coredns\n    kubernetes.io/cluster-service: \"true\"\n    addonmanager.kubernetes.io/mode: Reconcile\n    kubernetes.io/name: \"CoreDNS\"\nspec:\n  replicas: 2\n  strategy:\n    type: RollingUpdate\n    rollingUpdate:\n      maxUnavailable: 1\n  selector:\n    matchLabels:\n      k8s-app: coredns\n  template:\n    metadata:\n      labels:\n        k8s-app: coredns\n    spec:\n      serviceAccountName: coredns\n      tolerations:\n        - key: node-role.kubernetes.io/master\n          effect: NoSchedule\n        - key: \"CriticalAddonsOnly\"\n          operator: \"Exists\"\n      containers:\n      - name: coredns\n        image: coredns/coredns:1.0.6\n        imagePullPolicy: IfNotPresent\n        resources:\n          limits:\n            memory: 170Mi\n          requests:\n            cpu: 100m\n            memory: 70Mi\n        args: [ \"-conf\", \"/etc/coredns/Corefile\" ]\n        volumeMounts:\n        - name: config-volume\n          mountPath: /etc/coredns\n        ports:\n        - containerPort: 53\n          name: dns\n          protocol: UDP\n        - containerPort: 53\n          name: dns-tcp\n          protocol: TCP\n        livenessProbe:\n          httpGet:\n            path: /health\n            port: 8080\n            scheme: HTTP\n          initialDelaySeconds: 60\n          timeoutSeconds: 5\n          successThreshold: 1\n          failureThreshold: 5\n      dnsPolicy: Default\n      volumes:\n        - name: config-volume\n          configMap:\n            name: coredns\n            items:\n            - key: Corefile\n              path: Corefile\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: coredns\n  namespace: kube-system\n  labels:\n    k8s-app: coredns\n    kubernetes.io/cluster-service: \"true\"\n    addonmanager.kubernetes.io/mode: Reconcile\n    kubernetes.io/name: \"CoreDNS\"\nspec:\n  selector:\n    k8s-app: coredns\n  clusterIP: 10.1.0.2\n  
ports:\n  - name: dns\n    port: 53\n    protocol: UDP\n  - name: dns-tcp\n    port: 53\n    protocol: TCP\n"
  },
  {
    "path": "example/nginx/nginx-daemonset.yaml",
    "content": "apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: nginx-daemonset\n  labels:\n    app: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.13.12\n        ports:\n        - containerPort: 80\n"
  },
  {
    "path": "example/nginx/nginx-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  labels:\n    app: nginx\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.13.12\n        ports:\n        - containerPort: 80\n"
  },
  {
    "path": "example/nginx/nginx-ingress.yaml",
    "content": "apiVersion: extensions/v1beta1\nkind: Ingress\nmetadata:\n  name: nginx-ingress\nspec:\n  rules:\n  - host: www.example.com\n    http:\n      paths:\n      - path: /\n        backend:\n          serviceName: nginx-service\n          servicePort: 80\n"
  },
  {
    "path": "example/nginx/nginx-pod.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: nginx-pod\n  labels:\n    app: nginx\nspec:\n  containers:\n  - name: nginx\n    image: nginx:1.13.12\n    ports:\n    - containerPort: 80\n"
  },
  {
    "path": "example/nginx/nginx-rc.yaml",
    "content": "apiVersion: v1\nkind: ReplicationController\nmetadata:\n  name: nginx-rc\nspec:\n  replicas: 3\n  selector:\n    app: nginx\n  template:\n    metadata:\n      name: nginx\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.13.12\n        ports:\n        - containerPort: 80\n"
  },
  {
    "path": "example/nginx/nginx-rs.yaml",
    "content": "apiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n  name: nginx-rs\n  labels:\n    app: nginx\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.13.12\n        ports:\n        - containerPort: 80\n"
  },
  {
    "path": "example/nginx/nginx-service-nodeport.yaml",
    "content": "kind: Service\napiVersion: v1\nmetadata:\n  name: nginx-service\nspec:\n  selector:\n    app: nginx\n  ports:\n  - protocol: TCP\n    port: 80\n    targetPort: 80\n  type: NodePort\n  \n"
  },
  {
    "path": "example/nginx/nginx-service.yaml",
    "content": "kind: Service\napiVersion: v1\nmetadata:\n  name: nginx-service\nspec:\n  selector:\n    app: nginx\n  ports:\n  - protocol: TCP\n    port: 80\n    targetPort: 80\n"
  },
  {
    "path": "helm/README.md",
    "content": "# 一、Helm - K8S的包管理器\n\n类似Centos的yum\n\n## 1、Helm架构\n```bash\nhelm包括chart和release.\nhelm包含2个组件,Helm客户端和Tiller服务器.\n```\n\n## 2、Helm客户端安装\n\n1、脚本安装\n```bash\n#安装\ncurl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get |bash\n\n#查看\nwhich helm\n\n#因服务器端还没安装,这里会报无法连接\nhelm version \n\n#添加命令补全\nhelm completion bash > .helmrc\necho \"source .helmrc\" >> .bashrc\n```\n\n2、源码安装\n```bash\n#源码安装\n#curl -O https://get.helm.sh/helm-v2.16.0-linux-amd64.tar.gz\n\nwget -O helm-v2.16.0-linux-amd64.tar.gz https://get.helm.sh/helm-v2.16.0-linux-amd64.tar.gz\ntar -zxvf helm-v2.16.0-linux-amd64.tar.gz\ncd linux-amd64 #若采用容器化部署到kubernetes中，则可以不用管tiller，只需将helm复制到/usr/bin目录即可\ncp helm /usr/bin/\necho \"source <(helm completion bash)\" >> /root/.bashrc # 命令自动补全\n```\n\n## 3、Tiller服务器端安装\n\n1、安装\n\n```bash\nhelm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts\n\n#查看\nkubectl get --namespace=kube-system service tiller-deploy\nkubectl get --namespace=kube-system deployments. tiller-deploy\nkubectl get --namespace=kube-system pods|grep tiller-deploy\n\n#能够看到服务器版本信息\nhelm version\n\n#添加新的repo\nhelm repo add stable http://mirror.azure.cn/kubernetes/charts/\n```\n\n2、创建helm-rbac.yaml文件\n```bash\ncat >helm-rbac.yaml<<\\EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: tiller\n  namespace: kube-system\n---\napiVersion: rbac.authorization.k8s.io/v1beta1\nkind: ClusterRoleBinding\nmetadata:\n  name: tiller\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: cluster-admin\nsubjects:\n  - kind: ServiceAccount\n    name: tiller\n    namespace: kube-system\nEOF\n\nkubectl apply -f helm-rbac.yaml\n```\n\n## 4、Helm使用\n```bash\n#搜索 \nhelm search\n\n#执行命名添加权限\nkubectl create serviceaccount --namespace kube-system tiller\nkubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller\nkubectl patch deploy --namespace kube-system tiller-deploy -p '{\"spec\":{\"template\":{\"spec\":{\"serviceAccount\":\"tiller\"}}}}'\n\n#安装chart的mysql应用\nhelm install stable/mysql\n\n会自动部署 Service,Deployment,Secret 和 PersistentVolumeClaim,并给与很多提示信息,比如mysql密码获取,连接端口等.\n\n#查看release各个对象\nkubectl get service doltish-beetle-mysql\nkubectl get deployments. 
## 4. Using Helm\n```bash\n#Search for charts\nhelm search\n\n#Grant Tiller the permissions it needs\nkubectl create serviceaccount --namespace kube-system tiller\nkubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller\nkubectl patch deploy --namespace kube-system tiller-deploy -p '{\"spec\":{\"template\":{\"spec\":{\"serviceAccount\":\"tiller\"}}}}'\n\n#Install the mysql chart\nhelm install stable/mysql\n\nThis automatically deploys a Service, Deployment, Secret and PersistentVolumeClaim, and prints plenty of hints, e.g. how to fetch the mysql password and which port to connect to.\n\n#Inspect the objects belonging to the release\nkubectl get service doltish-beetle-mysql\nkubectl get deployment doltish-beetle-mysql\nkubectl get pods doltish-beetle-mysql-75fbddbd9d-f64j4\nkubectl get pvc doltish-beetle-mysql\nhelm list # list deployed releases\n\n#Delete the release\nhelm delete doltish-beetle\nkubectl get pods\nkubectl get service\nkubectl get deployment\nkubectl get pvc\n```\n\n# 2. Deploying Nginx Ingress with Helm\n\n## 1. Label the edge node\n\nWe designate k8s-master-01 (192.168.56.11) as the edge node and label it accordingly\n\n```bash\n#Show node labels\nkubectl get nodes --show-labels\n\nkubectl label node k8s-master-01 node-role.kubernetes.io/edge=\n\n$ kubectl get node\nNAME            STATUS   ROLES         AGE   VERSION\nk8s-master-01   Ready    edge,master   59m   v1.16.2\nk8s-master-02   Ready    <none>        58m   v1.16.2\nk8s-master-03   Ready    <none>        58m   v1.16.2\n```\n\n## 2. Write the chart values file ingress-nginx.yaml\n\n```bash\ncat >ingress-nginx.yaml<<\\EOF\ncontroller:\n  hostNetwork: true\n  daemonset:\n    useHostPort: false\n    hostPorts:\n      http: 80\n      https: 443\n  service:\n    type: ClusterIP\n  tolerations:\n    - operator: \"Exists\"\n  nodeSelector:\n    node-role.kubernetes.io/edge: ''\n\ndefaultBackend:\n  tolerations:\n    - operator: \"Exists\"\n  nodeSelector:\n    node-role.kubernetes.io/edge: ''\nEOF\n```\n\n## 3. Install nginx-ingress\n\n```bash\nhelm del --purge nginx-ingress\n\nhelm repo update\n\nhelm install stable/nginx-ingress \\\n--name nginx-ingress \\\n--namespace kube-system  \\\n-f ingress-nginx.yaml\n\nIf http://192.168.56.11 returns default backend, the deployment is complete.\n\n#nginx-ingress image (pull from the aliyun mirror, retag to the quay.io name the chart expects)\ndocker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1\ndocker tag registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1\ndocker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1\n\n#defaultbackend\ndocker pull googlecontainer/defaultbackend-amd64:1.5\ndocker tag googlecontainer/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5\ndocker rmi googlecontainer/defaultbackend-amd64:1.5\n```\n\n## 4. Check the nginx-ingress Pods\n\n```bash\nkubectl get pods -n kube-system | grep nginx-ingress\n```\n
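\nA quick functional test from any machine that can reach the edge node; both responses coming back through the controller indicates the chain works (IP as labeled above, expectations based on the standard defaultbackend behavior):\n```bash\n#Expect 404 from the default backend for an unknown host/path...\ncurl -s -o /dev/null -w '%{http_code}\\n' http://192.168.56.11/\n\n#...and 200 from its health endpoint\ncurl -s -o /dev/null -w '%{http_code}\\n' http://192.168.56.11/healthz\n```\n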
# 3. Installing the Kubernetes dashboard with Helm\n\n## 1. Create a tls certificate\n\n```bash\nopenssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout ./tls.key -out ./tls.crt -subj \"/CN=k8s.test.com\"\n```\n\n## 2. Install the tls secret\n```bash\nkubectl delete secret dashboard-tls-secret -n kube-system\n\nkubectl -n kube-system  create secret tls dashboard-tls-secret --key ./tls.key --cert ./tls.crt\n\nkubectl get secret -n kube-system |grep dashboard\n```\n\n## 3. Write the values file\n\n```bash\ncat >kubernetes-dashboard.yaml<<\\EOF\nimage:\n  repository: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64\n  tag: v1.10.1\ningress:\n  enabled: true\n  hosts:\n    - k8s.test.com\n  annotations:\n    nginx.ingress.kubernetes.io/ssl-redirect: \"false\"\n    nginx.ingress.kubernetes.io/backend-protocol: \"HTTPS\"\n  tls:\n    - secretName: dashboard-tls-secret\n      hosts:\n      - k8s.test.com\nnodeSelector:\n    node-role.kubernetes.io/edge: ''\ntolerations:\n    - key: node-role.kubernetes.io/master\n      operator: Exists\n      effect: NoSchedule\n    - key: node-role.kubernetes.io/master\n      operator: Exists\n      effect: PreferNoSchedule\nrbac:\n  clusterAdminRole: true\nEOF\n```\n\nCompared with the chart defaults, the following options are changed:\n\n  ingress.enabled - set to true to expose the Kubernetes Dashboard service through an Ingress so a browser can reach it\n\n  ingress.annotations - the Kubernetes Dashboard backend listens on https, while the Nginx Ingress Controller forwards to backends over HTTP by default; the backend-protocol annotation tells the controller to forward requests over HTTPS instead\n\n  ingress.hosts - replace with the domain the certificate was issued for\n\n  ingress.tls - secretName points at the dashboard-tls-secret created above; hosts is the certificate's domain\n\n  rbac.clusterAdminRole - set to true to give the dashboard broad permissions, which makes working across multiple namespaces convenient\n\n## 4. Install and verify\n\n1、Install\n```bash\n#Remove any previous release\nhelm delete kubernetes-dashboard\nhelm del --purge kubernetes-dashboard\n\n#Install\nhelm install stable/kubernetes-dashboard \\\n-n kubernetes-dashboard \\\n--namespace kube-system  \\\n-f kubernetes-dashboard.yaml\n```\n\n2、Check the pod\n```bash\nkubectl get pods -n kube-system -o wide\n```\n\n3、Inspect the details\n```bash\nkubectl describe pod `kubectl get pod -A|grep dashboard|awk '{print $2}'` -n kube-system\n```\n\n4、Access\n```bash\n#Fetch the login token\nkubectl describe -n kube-system secret/`kubectl -n kube-system get secret | grep kubernetes-dashboard-token|awk '{print $1}'`\n\n#Access\nhttps://k8s.test.com\n```\n\nReferences:\n\nhttps://www.cnblogs.com/hongdada/p/11395200.html  image mirror issues\n\nhttps://www.qikqiak.com/post/install-nginx-ingress/\n\nhttps://www.cnblogs.com/bugutian/p/11366556.html  installing kubernetes-dashboard with helm from within China\n\nhttps://www.cnblogs.com/hongdada/p/11284534.html  deploying the Kubernetes dashboard with Helm\n\nhttps://www.cnblogs.com/chanix/p/11731388.html  Helm - the K8S package manager\n\nhttps://www.cnblogs.com/peitianwang/p/11649621.html\n
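\nOne last convenience: since k8s.test.com is a made-up domain, curl's --resolve mapping lets you test from a shell without touching DNS (edge node IP as labeled earlier; -k because the certificate is self-signed):\n```bash\ncurl -k --resolve k8s.test.com:443:192.168.56.11 https://k8s.test.com/\n```\n"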
  },
  {
    "path": "kubeadm/K8S-HA-V1.13.4-关闭防火墙版.md",
    "content": "# 环境介绍：\n```bash\nCentOS： 7.6\nDocker： 18.06.1-ce\nKubernetes： 1.13.4\nKuberadm： 1.13.4\nKuberlet： 1.13.4\nKuberctl： 1.13.4\n```  \n# 部署介绍：\n\n&#8195;创建高可用首先先有一个 Master 节点，然后再让其他服务器加入组成三个 Master 节点高可用，然后再将工作节点 Node 加入。下面将描述每个节点要执行的步骤：\n```bash\nMaster01： 二、三、四、五、六、七、八、九、十一\nMaster02、Master03： 二、三、五、六、四、九\nnode01、node02、node03： 二、五、六、九\n```\n# 集群架构：\n\n  ![kubeadm高可用架构图](https://github.com/Lancger/opsfull/blob/master/images/kubeadm-ha.jpg)\n \n# 一、kuberadm 简介\n\n### 1、Kuberadm 作用\n\n&#8195;Kubeadm 是一个工具，它提供了 kubeadm init 以及 kubeadm join 这两个命令作为快速创建 kubernetes 集群的最佳实践。\n\n&#8195;kubeadm 通过执行必要的操作来启动和运行一个最小可用的集群。它被故意设计为只关心启动集群，而不是之前的节点准备工作。同样的，诸如安装各种各样值得拥有的插件，例如 Kubernetes Dashboard、监控解决方案以及特定云提供商的插件，这些都不在它负责的范围。\n\n&#8195;相反，我们期望由一个基于 kubeadm 从更高层设计的更加合适的工具来做这些事情；并且，理想情况下，使用 kubeadm 作为所有部署的基础将会使得创建一个符合期望的集群变得容易。\n\n### 2、Kuberadm 功能\n```bash\nkubeadm init： 启动一个 Kubernetes 主节点\nkubeadm join： 启动一个 Kubernetes 工作节点并且将其加入到集群\nkubeadm upgrade： 更新一个 Kubernetes 集群到新版本\nkubeadm config： 如果使用 v1.7.x 或者更低版本的 kubeadm 初始化集群，您需要对集群做一些配置以便使用 kubeadm upgrade 命令\nkubeadm token： 管理 kubeadm join 使用的令牌\nkubeadm reset： 还原 kubeadm init 或者 kubeadm join 对主机所做的任何更改\nkubeadm version： 打印 kubeadm 版本\nkubeadm alpha： 预览一组可用的新功能以便从社区搜集反馈\n```\n### 3、功能版本\n\n<table border=\"0\">\n    <tr>\n        <td><strong>Area<strong></td>\n        <td><strong>Maturity Level<strong></td>\n    </tr>\n    <tr>\n        <td>Command line UX</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>Implementation</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>Config file API</td>\n        <td>beta</td>\n    </tr>\n    <tr>\n        <td>CoreDNS</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>kubeadm alpha subcommands</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>High availability</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>DynamicKubeletConfig</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>Self-hosting</td>\n        <td>alpha</td>\n    </tr>\n</table>\n            \n# 二、前期准备\n\n### 1、虚拟机分配说明\n\n<table border=\"0\">\n    <tr>\n        <td><strong>地址<strong></td>\n        <td><strong>主机名</td>\n        <td><strong>内存&CPU</td>\n        <td><strong>角色</td>\n    </tr>\n    <tr>\n        <td>10.19.2.200</td>\n        <td>-</td>\n        <td>-</td>\n        <td>vip</td>\n    </tr>\n    <tr>\n        <td>10.19.2.56</td>\n        <td>k8s-master-01</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>10.19.2.57</td>\n        <td>k8s-master-02</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>10.19.2.58</td>\n        <td>k8s-master-03</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>10.19.2.246</td>\n        <td>k8s-node-01</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n    <tr>\n        <td>10.19.2.247</td>\n        <td>k8s-node-02</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n    <tr>\n        <td>10.19.2.248</td>\n        <td>k8s-node-03</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n</table>\n\n### 2、各个节点端口占用\n\n- Master 节点\n\n<table border=\"0\">\n    <tr>\n        <td><strong>规则<strong></td>\n        <td><strong>方向</td>\n        <td><strong>端口范围</td>\n        <td><strong>作用</td>\n        <td><strong>使用者</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>6443*</td>\n        <td>Kubernetes API</td>\n        <td>server All</td>\n    </tr>\n    
### 3. Base environment setup\n\n&#8195;Kubernetes needs a sane base environment to run properly: synchronized clocks on all nodes, hostname resolution, the firewall disabled, and so on.\n\n1、Hostname resolution\n\n&#8195;Hosts in a distributed system usually talk to each other by hostname, which gives every host a stable entry point even when IP addresses may change, so normally a dedicated DNS service resolves the node names. Since this is a test cluster, we keep the complexity down and use hosts-file based resolution instead.\n\n2、Configure hosts and passwordless login\n\n```bash\n#Edit /etc/hosts on every server\n\ncat > /etc/hosts << \\EOF\n127.0.0.1     localhost  localhost.localdomain localhost4 localhost4.localdomain4\n::1           localhost  localhost.localdomain localhost6 localhost6.localdomain6\n10.19.2.200   k8s-vip         master      master.k8s.io\n10.19.2.56    k8s-master-01   master01    master01.k8s.io\n10.19.2.57    k8s-master-02   master02    master02.k8s.io\n10.19.2.58    k8s-master-03   master03    master03.k8s.io\n10.19.2.246   k8s-node-01     node01      node01.k8s.io\n10.19.2.247   k8s-node-02     node02      node02.k8s.io\n10.19.2.248   k8s-node-03     node03      node03.k8s.io\nEOF\n\n#Passwordless root login\nmkdir -p /root/.ssh/\nchmod 700 /root/.ssh/\necho 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7bRm20od1b3rzW3ZPLB5NZn3jQesvfiz2p0WlfcYJrFHfF5Ap0ubIBUSQpVNLn94u8ABGBLboZL8Pjo+rXQPkIcObJxoKS8gz6ZOxcxJhldudbadabdanKAAKAKKKKKKKKKKKKKKKKKKKKKKK root@k8s-master-01' > /root/.ssh/authorized_keys\nchmod 400 /root/.ssh/authorized_keys\n```\n\n3、Set the hostnames\n\n```bash\n#On each server, set its hostname\n\n# on 10.19.2.56\nhostnamectl  set-hostname  k8s-master-01\n\n# on 10.19.2.57\nhostnamectl  set-hostname  k8s-master-02\n\n# on 10.19.2.58\nhostnamectl  set-hostname  k8s-master-03\n\n# on 10.19.2.246\nhostnamectl  set-hostname  k8s-node-01\n\n# on 10.19.2.247\nhostnamectl  set-hostname  k8s-node-02\n\n# on 10.19.2.248\nhostnamectl  set-hostname  k8s-node-03\n```\n\n4、Time synchronization\n\n```bash\n#Synchronize the clocks on all servers and enable the time service at boot\n\nsystemctl start chronyd.service\nsystemctl enable chronyd.service\n```\n\n5、Disable the firewall\n```bash\nsystemctl stop firewalld\nsystemctl disable firewalld\n```\n\n6、Disable SELinux\n```bash\n# If SELinux is currently enforcing, switch it to permissive for this boot\nsetenforce 0\n\n# Edit /etc/selinux/config to disable SELinux permanently\nsed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config\n\n# Check the selinux state\ngetenforce\n\nIf it reports permissive, a reboot completes the change\n\n```\n\n7、Disable swap\n\n&#8195;kubeadm checks up front whether swap is disabled on the host and refuses to continue while it is enabled, so with enough memory available, disable all swap devices:\n\n```\n# Turn off all currently enabled swap devices\nswapoff -a && sysctl -w vm.swappiness=0\n\nsed -ri 's/.*swap.*/#&/' /etc/fstab\nor\n# Edit fstab and comment out every line describing a swap device\nvi /etc/fstab\n\nUUID=9be41058-76a6-4588-8e3f-5b44604d8de1 /                       xfs     defaults,noatime        0 0\nUUID=4489cc8f-1885-4e17-bfe7-8652fd1d3feb /boot                   xfs     defaults,noatime        0 0\n#UUID=0f5ae5f1-4872-471f-9f3a-f172a43fc1ff swap                    swap    defaults,noatime        0 0\n```\n
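\nA quick check that the two settings kubeadm is pickiest about really took effect (run on every node):\n```bash\ngetenforce               # expect: Permissive or Disabled\ncat /proc/swaps          # expect: header line only, no devices\nfree -m | grep -i swap   # expect: zeros in the Swap row\n```\n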
8、Kernel parameters\n\n&#8195;Allow IP routing/forwarding and make bridged traffic visible to iptables:\n\n```bash\n#Create /etc/sysctl.d/k8s.conf\n\ncat > /etc/sysctl.d/k8s.conf << \\EOF\nvm.swappiness = 0\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nEOF\n\n#Load br_netfilter\nmodprobe br_netfilter\n\n#Apply the file\nsysctl -p /etc/sysctl.d/k8s.conf\n\n#Confirm the bridge entries exist\nls /proc/sys/net/bridge\n```\n\n9、Resource limits\n\n`/etc/security/limits.conf` is the Linux resource limit configuration, restricting how much of the system users may consume\n\n```bash\necho \"* soft nofile 65536\" >> /etc/security/limits.conf\necho \"* hard nofile 65536\" >> /etc/security/limits.conf\necho \"* soft nproc 65536\"  >> /etc/security/limits.conf\necho \"* hard nproc 65536\"  >> /etc/security/limits.conf\necho \"* soft memlock unlimited\"  >> /etc/security/limits.conf\necho \"* hard memlock unlimited\"  >> /etc/security/limits.conf\n```\n\n10、Install dependencies and tools\n\n```bash\nyum install -y epel-release\n\nyum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl\n```\n\n# 3. Installing Keepalived\n\n- About keepalived: cluster management software that keeps a cluster highly available, similar in purpose to heartbeat; it guards against single points of failure\n- Its role here: provide the VIP (10.19.2.200) for haproxy and arbitrate master/backup among the three haproxy instances, so losing one haproxy barely affects the service.\n\n### 1. Install Keepalived with yum\n```bash\n# Install keepalived\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\nyum install -y keepalived\n```\n\n### 2. Configure Keepalived\n```bash\ncat <<EOF > /etc/keepalived/keepalived.conf\n! Configuration File for keepalived\n\n# Notification targets and machine identity used when a failure occurs.\nglobal_defs {\n   # String identifying this node; usually the hostname, though it does not have to be. Used in failure notification mails.\n   router_id LVS_k8s\n}\n\n# Health check; when it fails, the vrrp_instance priority is reduced by the configured weight.\nvrrp_script check_haproxy {\n    script \"killall -0 haproxy\"   #check by process name whether haproxy is alive\n    interval 3\n    weight -2\n    fall 10\n    rise 2\n}\n\n# vrrp_instance defines the VIP served to the outside world and its properties.\nvrrp_instance VI_1 {\n    state MASTER   #MASTER on this node, BACKUP on the other two\n    interface eth0 #change to your own NIC\n    virtual_router_id 51\n    priority 250\n    advert_int 1\n    authentication {\n        auth_type PASS\n        auth_pass 35f18af7190d51c9f7f78f37300a0cbd\n    }\n    virtual_ipaddress {\n        10.19.2.200   #the virtual ip, i.e. the VIP\n    }\n    track_script {\n        check_haproxy\n    }\n\n}\nEOF\n```\n
On this node state is set to MASTER; set it to BACKUP on the other two nodes\n\n```bash\nConfiguration notes:\n\n    virtual_ipaddress: the vip\n    track_script: runs the health-check script defined above\n    interface: the NIC carrying the node's own IP (not the VIP), used to send VRRP packets.\n    virtual_router_id: 0-255, distinguishes the VRRP multicast groups of different instances\n    advert_int: interval between VRRP packets, i.e. how often a master election happens (effectively the health-check period).\n    authentication: authentication block; the types are PASS and AH (IPSEC); PASS is recommended (only the first 8 characters of the password are significant).\n    state: MASTER or BACKUP; in practice, when the other keepaliveds start, the node with the higher priority is elected MASTER, so this field has little real effect.\n    priority: used for the master election; to become master this value should ideally be 50 points above the other machines; valid range is 1-255 (values outside it fall back to the default 100).\n\n# 1. The firewall must allow the vrrp protocol (otherwise you get split brain and all three hosts hold the VIP)\n#-A INPUT -p vrrp -j ACCEPT\n-A RH-Firewall-1-INPUT -p vrrp -j ACCEPT\n\n#2. The script \"killall -0 haproxy\" probe above runs every second and its results are logged to /var/log/messages\n# tail -100f /var/log/messages\n\nSep 27 10:54:16 tw19410s1 Keepalived_vrrp[9113]: /usr/bin/killall -0 haproxy exited with status 1\n```\n\n### 3. Start Keepalived\n```bash\n# Enable at boot\nsystemctl enable keepalived\n\n# Start keepalived\nsystemctl start keepalived\n\n# Check the status\nsystemctl status keepalived\n```\n### 4. Check the network state\n\nAfter keepalived starts on the node whose config has state MASTER, the virtual IP shows up on the bound NIC\n\n```bash\n[root@k8s-master-01 ~]# ip address show eth0\n2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000\n    link/ether 00:50:56:be:86:af brd ff:ff:ff:ff:ff:ff\n    inet 10.19.2.56/22 brd 10.19.3.255 scope global eth0\n       valid_lft forever preferred_lft forever\n    inet 10.19.2.200/32 scope global eth0\n       valid_lft forever preferred_lft forever\n\nWhen keepalived on the current node is stopped, the virtual IP moves: one of the nodes whose state is BACKUP is elected the new MASTER, and the VIP can be seen on that node's NIC\n```\n
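\nA simple way to validate that the VIP really fails over, under the three-master setup configured above (run the stop on the current MASTER, watch from a BACKUP):\n```bash\n#On the current MASTER: simulate a failure\nsystemctl stop keepalived\n\n#On a BACKUP node: the VIP 10.19.2.200 should appear within a few seconds\nip address show eth0 | grep 10.19.2.200\n\n#Restore the stopped node afterwards\nsystemctl start keepalived\n```\n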
# 4. Installing haproxy\n\n&#8195;haproxy reverse-proxies the apiserver here, round-robining every request across the master nodes. Compared with a keepalived-only active/standby setup where a single master carries all the traffic, this is more balanced and more robust.\n\n### 1. Install haproxy with yum\n```bash\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\n\nyum install -y haproxy\n```\n\n### 2. Configure haproxy\n```bash\ncat > /etc/haproxy/haproxy.cfg << EOF\n#---------------------------------------------------------------------\n# Global settings\n#---------------------------------------------------------------------\nglobal\n    # to have these messages end up in /var/log/haproxy.log you will\n    # need to:\n    # 1) configure syslog to accept network log events.  This is done\n    #    by adding the '-r' option to the SYSLOGD_OPTIONS in\n    #    /etc/sysconfig/syslog\n    # 2) configure local2 events to go to the /var/log/haproxy.log\n    #   file. A line like the following can be added to\n    #   /etc/sysconfig/syslog\n    #\n    #    local2.*                       /var/log/haproxy.log\n    #\n    log         127.0.0.1 local2\n    \n    chroot      /var/lib/haproxy\n    pidfile     /var/run/haproxy.pid\n    maxconn     4000\n    user        haproxy\n    group       haproxy\n    daemon \n       \n    # turn on stats unix socket\n    stats socket /var/lib/haproxy/stats\n#---------------------------------------------------------------------\n# common defaults that all the 'listen' and 'backend' sections will\n# use if not designated in their block\n#---------------------------------------------------------------------  \ndefaults\n    mode                    http\n    log                     global\n    option                  httplog\n    option                  dontlognull\n    option http-server-close\n    option forwardfor       except 127.0.0.0/8\n    option                  redispatch\n    retries                 3\n    timeout http-request    10s\n    timeout queue           1m\n    timeout connect         10s\n    timeout client          1m\n    timeout server          1m\n    timeout http-keep-alive 10s\n    timeout check           10s\n    maxconn                 3000\n#---------------------------------------------------------------------\n# kubernetes apiserver frontend which proxys to the backends\n#--------------------------------------------------------------------- \nfrontend kubernetes-apiserver\n    mode                 tcp\n    bind                 *:16443\n    option               tcplog\n    default_backend      kubernetes-apiserver    \n#---------------------------------------------------------------------\n# round robin balancing between the various backends\n#---------------------------------------------------------------------\nbackend kubernetes-apiserver\n    mode        tcp\n    balance     roundrobin\n    server      master01.k8s.io   10.19.2.56:6443 check\n    server      master02.k8s.io   10.19.2.57:6443 check\n    server      master03.k8s.io   10.19.2.58:6443 check\n#---------------------------------------------------------------------\n# collection haproxy statistics message\n#---------------------------------------------------------------------\nlisten stats\n    bind                 *:1080\n    stats auth           admin:awesomePassword\n    stats refresh        5s\n    stats realm          HAProxy\\ Statistics\n    stats uri            /admin?stats\nEOF\n```\nThe haproxy configuration is identical on the other master nodes (10.19.2.57 and 10.19.2.58)\n\n### 3. Start and verify haproxy\n```bash\n# Enable at boot\nsystemctl enable haproxy\n\n# Start haproxy\nsystemctl start haproxy\n\n# Check the status\nsystemctl status haproxy\n```\n\n### 4. Check the haproxy ports\n```bash\nss -lnt | grep -E \"16443|1080\"\n```\n
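\nUntil an apiserver is initialized (section 7), the backends show as DOWN in the stats page, which is expected. A couple of checks of the balanced path, using the credentials and ports from the configuration above:\n```bash\n#HAProxy stats page (credentials from the listen stats block above)\ncurl -u admin:awesomePassword http://10.19.2.200:1080/admin?stats\n\n#After kubeadm init: the apiserver healthz endpoint through the load balancer\ncurl -k https://10.19.2.200:16443/healthz\n```\n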
# 5. Installing Docker (all nodes)\n\n### 1. Remove any previously installed Docker\n```bash\nsudo yum remove -y docker \\\n                  docker-client \\\n                  docker-client-latest \\\n                  docker-common \\\n                  docker-latest \\\n                  docker-latest-logrotate \\\n                  docker-logrotate \\\n                  docker-selinux \\\n                  docker-engine-selinux \\\n                  docker-ce-cli \\\n                  docker-engine\n                  \n# Check for leftover docker packages\nrpm -qa|grep docker\n\n# Remove anything left with yum -y remove XXX, for example:\nyum remove docker-ce-cli\n```\n\n### 2. Configure the docker yum repository\n\nPick either of the two mirrors below; the official one is slow from China, so the Aliyun mirror is recommended\n\n- Aliyun mirror\n\n```bash\nsudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo\n```\n\n- Official Docker repository\n```bash\nsudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo\n```\n\n### 3. Install Docker:\n\n```\n# List all installable docker-ce versions:\nyum list docker-ce --showduplicates | sort -r\n\n# Install the pinned docker version\nsudo yum install docker-ce-18.06.1.ce-3.el7 -y\n\n# Start docker and enable it at boot\nsystemctl enable docker\nsystemctl start docker\n\n# Check iptables\nConfirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT.\n\niptables -nvL\n\nChain FORWARD (policy ACCEPT 0 packets, 0 bytes)\n pkts bytes target     prot opt in     out     source               destination         \n    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           \n    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED\n    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0      \n    \nStarting with version 1.13, Docker changed its default firewall rules and sets the FORWARD chain of the iptables filter table to DROP, which breaks cross-node Pod traffic in a Kubernetes cluster. With docker 18.06 the default appears to be ACCEPT again (it is unclear in which version this changed back); the 17.06 we run in production still needs the policy adjusted manually.\n\n# Force the policy\niptables -P FORWARD ACCEPT\n\n# Adjust the docker unit\nvim /usr/lib/systemd/system/docker.service\n\n# Add the following line\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\n\n# Configure docker registry mirrors\ncat > /etc/docker/daemon.json << \\EOF\n{\n  \"registry-mirrors\": [\n    \"https://dockerhub.azk8s.cn\",\n    \"https://i37dz0y4.mirror.aliyuncs.com\"\n  ],\n  \"insecure-registries\": [\"reg.hub.com\"]\n}\nEOF\n\n# Restart Docker\nsystemctl daemon-reload\nsystemctl restart docker\n```\n### 4. The final docker service file\n```\n#Note: variables inside the heredoc must stay escaped\n\ncat > /usr/lib/systemd/system/docker.service << EOF\n[Unit]\nDescription=Docker Application Container Engine\nDocumentation=https://docs.docker.com\nBindsTo=containerd.service\nAfter=network-online.target firewalld.service containerd.service\nWants=network-online.target\nRequires=docker.socket\n\n[Service]\nType=notify\n# the default is not to use systemd for cgroups because the delegate issues still\n# exists and systemd currently does not support the cgroup feature set required\n# for containers run by docker\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd\nExecReload=/bin/kill -s HUP \\$MAINPID\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\nTimeoutSec=0\nRestartSec=2\nRestart=always\n\n# Note that StartLimit* options were moved from \"Service\" to \"Unit\" in systemd 229.\n# Both the old, and new location are accepted by systemd 229 and up, so using the old location\n# to make them work for either version of systemd.\nStartLimitBurst=3\n\n# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.\n# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make\n# this option work for either version of systemd.\nStartLimitInterval=60s\n\n# Having non-zero Limit*s causes performance problems due to accounting overhead\n# in the kernel. We recommend using cgroups to do container-local accounting.\nLimitNOFILE=infinity\nLimitNPROC=infinity\nLimitCORE=infinity\n\n# Comment TasksMax if your systemd version does not support it.\n# Only systemd 226 and above support this option.\nTasksMax=infinity\n\n# set delegate yes so that systemd does not reset the cgroups of docker containers\nDelegate=yes\n\n# kill only the docker process, not all processes in the cgroup\nKillMode=process\n\n[Install]\nWantedBy=multi-user.target\nEOF\n```\n
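\nWith the service file in place, it is worth confirming the daemon actually runs with the intended settings:\n```bash\n#Cgroup driver should be systemd (set via --exec-opt above) and the mirrors should be listed\ndocker info | grep -i cgroup\ndocker info | grep -i -A3 'registry mirrors'\n\n#FORWARD policy should be ACCEPT\niptables -nL FORWARD | head -1\n```\n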
# 6. Installing kubeadm and kubelet\n\n### 1. Configure a usable domestic yum repository:\n```\ncat <<EOF > /etc/yum.repos.d/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\ngpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg\nEOF\n```\n\n### 2. Install kubelet\n\n```\n# Every machine needs the following packages:\n     kubeadm: the command that bootstraps the cluster.\n     kubelet: runs on every node of the cluster and starts pods and containers.\n     kubectl: the command line tool for talking to the cluster.\n\n# List kubelet versions\nyum list kubelet --showduplicates | sort -r \n\n# Install kubelet\nyum install -y kubelet-1.13.4-0\n\n# Start kubelet and enable it at boot\nsystemctl enable kubelet \nsystemctl start kubelet\n\n# Check the status\nThe unit shows failed at this point, which is expected: kubelet restarts every 10 seconds and becomes healthy once the master node is initialized\nsystemctl status kubelet\n```\n\n### 3. Install kubeadm\n\n```\n# kubeadm drives the cluster initialization\n# 1. List kubeadm versions\nyum list kubeadm --showduplicates | sort -r \n\n# 2. Install kubeadm\nyum install -y kubeadm-1.13.4-0\n# Installing kubeadm pulls in kubectl by default, so kubectl needs no separate install\n\n# 3. Reboot the server\nTo rule out lingering state before the next steps, reboot here\nreboot\n```\n\n# 7. Initializing the first kubernetes master\n\n```\n# Because the virtual IP must be bound locally, first check which of the masters currently holds it\n\n[root@k8s-master-01 ~]# ip address show eth0\n2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000\n    link/ether 00:50:56:be:86:af brd ff:ff:ff:ff:ff:ff\n    inet 10.19.2.56/22 brd 10.19.3.255 scope global eth0\n       valid_lft forever preferred_lft forever\n    inet 10.19.2.200/32 scope global eth0\n       valid_lft forever preferred_lft forever\n\nThe virtual IP 10.19.2.200 sits on the same machine as 10.19.2.56, so the first kubernetes master must be initialized on master01\n```\n\n### 1. Create the kubeadm configuration yaml\n```\n# 1. Create the kubeadm config file\n\ncat > kubeadm-config.yaml << EOF\napiServer:\n  certSANs:\n    - k8s-master-01\n    - k8s-master-02\n    - k8s-master-03\n    - master.k8s.io\n    - 10.19.2.56\n    - 10.19.2.57\n    - 10.19.2.58\n    - 10.19.2.200\n    - 127.0.0.1\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: \"master.k8s.io:16443\"\ncontrollerManager: {}\ndns: \n  type: CoreDNS\netcd:\n  local:    \n    dataDir: /var/lib/etcd\nimageRepository: registry.aliyuncs.com/google_containers\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking: \n  dnsDomain: cluster.local  \n  podSubnet: 10.20.0.0/16\n  serviceSubnet: 10.10.0.0/16\nscheduler: {}\nEOF\n\nTwo fields deserve attention:\n- certSANs: the virtual IP address (to be safe, list every cluster address)\n- controlPlaneEndpoint: virtual IP name : load-balancer listen port\n\nConfiguration notes:\n\n    imageRepository: registry.aliyuncs.com/google_containers (use the Aliyun image registry)\n    podSubnet: 10.20.0.0/16 (#pod address pool)\n    serviceSubnet: 10.10.0.0/16 (#service address pool)\n```\n
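\nOptionally, the control-plane images can be pre-pulled so the init step itself goes quickly; this subcommand is available on the kubeadm 1.13 used here:\n```bash\nkubeadm config images pull --config kubeadm-config.yaml\n```\n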
### 2. Initialize the first master\n```\nkubeadm init --config kubeadm-config.yaml \n```\nLog\n```\nYour Kubernetes master has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n  kubeadm join master.k8s.io:16443 --token i77yg1.1eype0c53jsanoge --discovery-token-ca-cert-hash sha256:8f0a817012ab333a057b6a7410e65971be20b95c1b75fc4015f8f3b6785f626f\n```\nAs the log shows, nodes join the cluster with\n```\nkubeadm join master.k8s.io:16443 --token i77yg1.1eype0c53jsanoge --discovery-token-ca-cert-hash sha256:8f0a817012ab333a057b6a7410e65971be20b95c1b75fc4015f8f3b6785f626f\n```\n\n### 3. Configure the kubectl environment\n```bash\n# Environment variables\n\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n# Command completion\n\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n\n### 4. Check component status\n```bash\nkubectl get cs\n\nNAME                 STATUS    MESSAGE              ERROR\ncontroller-manager   Healthy   ok                   \nscheduler            Healthy   ok                   \netcd-0               Healthy   {\"health\": \"true\"}   \n\n# Check pod status\n[root@k8s-master-01 ~]# kubectl get pods --namespace=kube-system\nNAME                                    READY   STATUS    RESTARTS   AGE\ncoredns-78d4cf999f-5zt5z                0/1     Pending   0          7m32s    ---coredns not started yet\ncoredns-78d4cf999f-mkgsx                0/1     Pending   0          7m32s    ---coredns not started yet\netcd-k8s-master-01                      1/1     Running   0          6m39s\nkube-apiserver-k8s-master-01            1/1     Running   0          6m43s\nkube-controller-manager-k8s-master-01   1/1     Running   0          6m32s\nkube-proxy-88s74                        1/1     Running   0          7m32s\nkube-scheduler-k8s-master-01            1/1     Running   0          6m45s\n\ncoredns stays Pending because no network plugin has been configured yet; install one next and check again\n```\n
   \"Network\": \"10.20.0.0/16\",\n      \"Backend\": {\n        \"Type\": \"vxlan\"\n      }\n    }\n---\napiVersion: extensions/v1beta1\nkind: DaemonSet\nmetadata:\n  name: kube-flannel-ds-amd64\n  namespace: kube-system\n  labels:\n    tier: node\n    app: flannel\nspec:\n  template:\n    metadata:\n      labels:\n        tier: node\n        app: flannel\n    spec:\n      hostNetwork: true\n      nodeSelector:\n        beta.kubernetes.io/arch: amd64\n      tolerations:\n      - operator: Exists\n        effect: NoSchedule\n      serviceAccountName: flannel\n      initContainers:\n      - name: install-cni\n        image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64\n        command:\n        - cp\n        args:\n        - -f\n        - /etc/kube-flannel/cni-conf.json\n        - /etc/cni/net.d/10-flannel.conflist\n        volumeMounts:\n        - name: cni\n          mountPath: /etc/cni/net.d\n        - name: flannel-cfg\n          mountPath: /etc/kube-flannel/\n      containers:\n      - name: kube-flannel\n        image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64\n        command:\n        - /opt/bin/flanneld\n        args:\n        - --ip-masq\n        - --kube-subnet-mgr\n        resources:\n          requests:\n            cpu: \"100m\"\n            memory: \"50Mi\"\n          limits:\n            cpu: \"100m\"\n            memory: \"50Mi\"\n        securityContext:\n          privileged: true\n        env:\n        - name: POD_NAME\n          valueFrom:\n            fieldRef:\n              fieldPath: metadata.name\n        - name: POD_NAMESPACE\n          valueFrom:\n            fieldRef:\n              fieldPath: metadata.namespace\n        volumeMounts:\n        - name: run\n          mountPath: /run\n        - name: flannel-cfg\n          mountPath: /etc/kube-flannel/\n      volumes:\n        - name: run\n          hostPath:\n            path: /run\n        - name: cni\n          hostPath:\n            path: /etc/cni/net.d\n        - name: flannel-cfg\n          configMap:\n            name: kube-flannel-cfg\nEOF\n\n“Network”: “10.20.0.0/16”要和kubeadm-config.yaml配置文件中podSubnet: 10.20.0.0/16相同\n```\n\n### 2、创建flanner相关role和pod\n\n```\n# 应用生效\n[root@k8s-master-01 ~]# kubectl apply -f kube-flannel.yaml\nclusterrole.rbac.authorization.k8s.io/flannel created\nclusterrolebinding.rbac.authorization.k8s.io/flannel created\nserviceaccount/flannel created\nconfigmap/kube-flannel-cfg created\ndaemonset.extensions/kube-flannel-ds-amd64 created\n\n# 等待一会时间，再次查看各个pods的状态\n[root@k8s-master-01 ~]# kubectl get pods --namespace=kube-system\nNAME                                    READY   STATUS    RESTARTS   AGE\ncoredns-78d4cf999f-5zt5z                1/1     Running   0          12m    ---coredns启动成功\ncoredns-78d4cf999f-mkgsx                1/1     Running   0          12m    ---coredns启动成功\netcd-k8s-master-01                      1/1     Running   0          11m\nkube-apiserver-k8s-master-01            1/1     Running   0          12m\nkube-controller-manager-k8s-master-01   1/1     Running   0          11m\nkube-flannel-ds-amd64-7lj6m             1/1     Running   0          13s\nkube-proxy-88s74                        1/1     Running   0          12m\nkube-scheduler-k8s-master-01            1/1     Running   0          12m\n```\n\n\n# 九、加入集群\n\n### 1、Master加入集群构成高可用\n```\n复制秘钥到各个节点\n\n在master01 服务器上执行下面命令，将kubernetes相关文件复制到 master02、master03\n\n如果其他节点为初始化第一个master节点，则将该节点的配置文件复制到其余两个主节点，例如master03为第一个master节点，则将它的k8s配置复制到master02和master01。\n```\n- 复制文件到 
# 9. Joining the cluster\n\n### 1. Masters join to form the HA control plane\n```\nCopy the keys and certificates to the other masters\n\nRun the following on master01 to copy the kubernetes files to master02 and master03\n\nIf a different node was used to initialize the first master, copy that node's files to the other two masters instead; e.g. if master03 was the first master, copy its k8s configuration to master02 and master01.\n```\n- Copy the files to master02 (a loop version of these two copy blocks follows below)\n```\nssh root@master02.k8s.io mkdir -p /etc/kubernetes/pki/etcd\nscp /etc/kubernetes/admin.conf root@master02.k8s.io:/etc/kubernetes\nscp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master02.k8s.io:/etc/kubernetes/pki\nscp /etc/kubernetes/pki/etcd/ca.* root@master02.k8s.io:/etc/kubernetes/pki/etcd\n```\n- Copy the files to master03\n\n```\nssh root@master03.k8s.io mkdir -p /etc/kubernetes/pki/etcd\nscp /etc/kubernetes/admin.conf root@master03.k8s.io:/etc/kubernetes\nscp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master03.k8s.io:/etc/kubernetes/pki\nscp /etc/kubernetes/pki/etcd/ca.* root@master03.k8s.io:/etc/kubernetes/pki/etcd\n```\n- Join the masters to the cluster\n\n&#8195;Run the join on both master02 and master03\n\n```bash\nkubeadm join master.k8s.io:16443 --token i77yg1.1eype0c53jsanoge --discovery-token-ca-cert-hash sha256:8f0a817012ab333a057b6a7410e65971be20b95c1b75fc4015f8f3b6785f626f --experimental-control-plane\n```\n&#8195;If the join fails and you want to retry, run kubeadm reset to clear the previous state, then redo the \"copy keys\" and \"join cluster\" steps\n\n&#8195;When a master joins, append the --experimental-control-plane flag at the end\n\n```bash\n# Install log:\n\nThis node has joined the cluster and a new control plane instance was created:\n\n* Certificate signing request was sent to apiserver and approval was received.\n* The Kubelet was informed of the new secure connection details.\n* Master label and taint were applied to the new node.\n* The Kubernetes control plane instances scaled up.\n* A new etcd member was added to the local/stacked etcd cluster.\n\nTo start administering your cluster from this node, you need to run the following as a regular user:\n\n        mkdir -p $HOME/.kube\n        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n        sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nRun 'kubectl get nodes' to see this node join the cluster.\n```\n- Configure the kubectl environment\n```bash\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n# Command completion\n\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n
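\nAs promised above, the two copy blocks differ only in the host name, so they collapse naturally into a loop; a sketch under the same assumptions (root ssh access, hosts entries from section 2):\n```bash\nfor host in master02.k8s.io master03.k8s.io; do\n  ssh root@${host} mkdir -p /etc/kubernetes/pki/etcd\n  scp /etc/kubernetes/admin.conf root@${host}:/etc/kubernetes\n  scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@${host}:/etc/kubernetes/pki\n  scp /etc/kubernetes/pki/etcd/ca.* root@${host}:/etc/kubernetes/pki/etcd\ndone\n```\n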
\n参考资料：\n\nhttp://www.mydlq.club/article/4/\n"
  },
  {
    "path": "kubeadm/K8S-HA-V1.16.x-云环境-Calico.md",
    "content": "# 环境介绍：\n```bash\nCentOS： 7.6\nDocker： docker-ce-18.09.9\nKubernetes： 1.16.2\n- calico 3.8.2\n- Kubeadm： 1.16.2\n- nginx-ingress 1.5.3\n- Kubelet： 1.16.2\n```  \n# 部署介绍：\n```\n三个 master 组成主节点集群，通过内网 loader balancer 实现负载均衡；至少需要三个 master 节点才可组成高可用集群，否则会出现 脑裂 现象\n\n多个 worker 组成工作节点集群，通过外网 loader balancer 实现负载均衡\n```\n\n# 集群架构：\n\n  ![kubeadm高可用架构图](https://github.com/Lancger/opsfull/blob/master/images/kubeadm-ha.jpg)\n \n# 一、kuberadm 简介\n\n### 1、Kuberadm 作用\n\n&#8195;Kubeadm 是一个工具，它提供了 kubeadm init 以及 kubeadm join 这两个命令作为快速创建 kubernetes 集群的最佳实践。\n\n&#8195;kubeadm 通过执行必要的操作来启动和运行一个最小可用的集群。它被故意设计为只关心启动集群，而不是之前的节点准备工作。同样的，诸如安装各种各样值得拥有的插件，例如 Kubernetes Dashboard、监控解决方案以及特定云提供商的插件，这些都不在它负责的范围。\n\n&#8195;相反，我们期望由一个基于 kubeadm 从更高层设计的更加合适的工具来做这些事情；并且，理想情况下，使用 kubeadm 作为所有部署的基础将会使得创建一个符合期望的集群变得容易。\n\n### 2、Kuberadm 功能\n```bash\nkubeadm init： 启动一个 Kubernetes 主节点\nkubeadm join： 启动一个 Kubernetes 工作节点并且将其加入到集群\nkubeadm upgrade： 更新一个 Kubernetes 集群到新版本\nkubeadm config： 如果使用 v1.7.x 或者更低版本的 kubeadm 初始化集群，您需要对集群做一些配置以便使用 kubeadm upgrade 命令\nkubeadm token： 管理 kubeadm join 使用的令牌\nkubeadm reset： 还原 kubeadm init 或者 kubeadm join 对主机所做的任何更改\nkubeadm version： 打印 kubeadm 版本\nkubeadm alpha： 预览一组可用的新功能以便从社区搜集反馈\n```\n### 3、功能版本\n\n<table border=\"0\">\n    <tr>\n        <td><strong>Area<strong></td>\n        <td><strong>Maturity Level<strong></td>\n    </tr>\n    <tr>\n        <td>Command line UX</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>Implementation</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>Config file API</td>\n        <td>beta</td>\n    </tr>\n    <tr>\n        <td>CoreDNS</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>kubeadm alpha subcommands</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>High availability</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>DynamicKubeletConfig</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>Self-hosting</td>\n        <td>alpha</td>\n    </tr>\n</table>\n            \n# 二、前期准备\n\n### 1、虚拟机分配说明\n\n<table border=\"0\">\n    <tr>\n        <td><strong>地址<strong></td>\n        <td><strong>主机名</td>\n        <td><strong>内存&CPU</td>\n        <td><strong>角色</td>\n    </tr>\n    <tr>\n        <td>10.10.1.100</td>\n        <td>-</td>\n        <td>-</td>\n        <td>vip</td>\n    </tr>\n    <tr>\n        <td>10.10.0.24</td>\n        <td>k8s-master-01</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>10.10.0.32</td>\n        <td>k8s-master-02</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>10.10.0.23</td>\n        <td>k8s-master-03</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>10.10.0.25</td>\n        <td>k8s-node-01</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n    <tr>\n        <td>10.10.0.29</td>\n        <td>k8s-node-02</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n    <tr>\n        <td>10.10.0.12</td>\n        <td>k8s-node-03</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n</table>\n\n### 2、各个节点端口占用\n\n- Master 节点\n\n<table border=\"0\">\n    <tr>\n        <td><strong>规则<strong></td>\n        <td><strong>方向</td>\n        <td><strong>端口范围</td>\n        <td><strong>作用</td>\n        <td><strong>使用者</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>6443*</td>\n        <td>Kubernetes API</td>\n        <td>server All</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n    
    <td>Inbound 入口</td>\n        <td>2379-2380</td>\n        <td>etcd server</td>\n        <td>client API kube-apiserver, etcd</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10250</td>\n        <td>Kubernetes API</td>\n        <td>Self, Control plane</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10251</td>\n        <td>kube-scheduler</td>\n        <td>Self</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10252</td>\n        <td>kube-controller-manager</td>\n        <td>Self</td>\n    </tr>\n</table>\n\n- node 节点\n\n<table border=\"0\">\n    <tr>\n        <td><strong>规则<strong></td>\n        <td><strong>方向</td>\n        <td><strong>端口范围</td>\n        <td><strong>作用</td>\n        <td><strong>使用者</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10250</td>\n        <td>Kubernetes API</td>\n        <td>Self, Control plane</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>30000-32767</td>\n        <td>NodePort Services**</td>\n        <td>All</td>\n    </tr>\n</table>\n    \n### 3、基础环境设置\n\n&#8195;Kubernetes 需要一定的环境来保证正常运行，如各个节点时间同步，主机名称解析，关闭防火墙等等。\n\n1、主机名称解析\n\n&#8195;分布式系统环境中的多主机通信通常基于主机名称进行，这在 IP 地址存在变化的可能性时为主机提供了固定的访问人口，因此一般需要有专用的 DNS 服务负责解决各节点主机 不过，考虑到此处部署的是测试集群，因此为了降低系复杂度，这里将基于 hosts 的文件进行主机名称解析。\n\n2、修改hosts和免key登录\n\n```bash\n#分别进入不同服务器，进入 /etc/hosts 进行编辑\n\ncat > /etc/hosts << \\EOF\n127.0.0.1     localhost  localhost.localdomain localhost4 localhost4.localdomain4\n::1           localhost  localhost.localdomain localhost6 localhost6.localdomain6\n10.10.1.100   k8s-vip         master      master.k8s.io\n10.10.0.24    k8s-master-01   master01    master01.k8s.io\n10.10.0.32    k8s-master-02   master02    master02.k8s.io\n10.10.0.23    k8s-master-03   master03    master03.k8s.io\n10.10.0.25    k8s-node-01     node01      node01.k8s.io\n10.10.0.29    k8s-node-02     node02      node02.k8s.io\n10.10.0.12    k8s-node-03     node03      node03.k8s.io\nEOF\n\n#root用户免密登录\nmkdir -p /root/.ssh/\nchmod 700 /root/.ssh/\necho 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7bRm20od1b3rzW3ZPLB5NZn3jQesvfiz2p0WlfcYJrFHfF5Ap0ubIBUSQpVNLn94u8ABGBLboZL8Pjo+rXQPkIcObJxoKS8gz6ZOxcxJhldudbadabdanKAAKAKKKKKKKKKKKKKKKKKKKKKKK root@k8s-master-01' > /root/.ssh/authorized_keys\nchmod 400 /root/.ssh/authorized_keys\n```\n\n3、修改hostname\n\n```bash\n#分别进入不同的服务器修改 hostname 名称\n\n# 修改 10.10.0.24 服务器\nhostnamectl  set-hostname  k8s-master-01\n\n# 修改 10.10.0.32 服务器\nhostnamectl  set-hostname  k8s-master-02\n\n# 修改 10.10.0.23 服务器\nhostnamectl  set-hostname  k8s-master-03\n\n# 修改 10.10.0.25 服务器\nhostnamectl  set-hostname  k8s-node-01\n\n# 修改 10.10.0.29 服务器\nhostnamectl  set-hostname  k8s-node-02\n\n# 修改 10.10.0.12 服务器\nhostnamectl  set-hostname  k8s-node-03\n```\n\n4、主机时间同步\n\n```bash\n#将各个服务器的时间同步，并设置开机启动同步时间服务\n\nyum install chrony -y\nsystemctl restart chronyd.service\nsystemctl enable chronyd.service\n```\n\n5、关闭防火墙服务\n```bash\nsystemctl stop firewalld\nsystemctl disable firewalld\n```\n\n6、关闭并禁用SELinux\n```bash\n# 若当前启用了 SELinux 则需要临时设置其当前状态为 permissive\nsetenforce 0\n\n# 编辑／etc/sysconfig selinux 文件，以彻底禁用 SELinux\nsed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config\n\n# 查看selinux状态\ngetenforce \n\n如果为permissive，则执行reboot重新启动即可\n\n```\n\n7、禁用 Swap 设备\n\n&#8195;kubeadm 默认会预先检当前主机是否禁用了 Swap 设备，并在未用时强制止部署 过程因此，在主机内存资惊充裕的条件下，需要禁用所有的 Swap 设备\n\n```\n# 关闭当前已启用的所有 Swap 设备\nswapoff -a && sysctl -w 
vm.swappiness=0\n\nsed -ri 's/.*swap.*/#&/' /etc/fstab\ncat /etc/fstab\n或\n# 编辑 fstab 配置文件，注释掉标识为 Swap 设备的所有行\nvi /etc/fstab\n\nUUID=9be41058-76a6-4588-8e3f-5b44604d8de1 /                       xfs     defaults,noatime        0 0\nUUID=4489cc8f-1885-4e17-bfe7-8652fd1d3feb /boot                   xfs     defaults,noatime        0 0\n#UUID=0f5ae5f1-4872-471f-9f3a-f172a43fc1ff swap                    swap    defaults,noatime        0 0\n```\n\n8、设置系统参数\n\n&#8195;设置允许路由转发，不对bridge的数据进行处理\n\n```bash\n#创建 /etc/sysctl.d/k8s.conf 文件\n\ncat > /etc/sysctl.d/k8s.conf << \\EOF\nvm.swappiness = 0\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nEOF\n\n#挂载br_netfilter\nmodprobe br_netfilter\n\n#生效配置文件\nsysctl -p /etc/sysctl.d/k8s.conf\n\n#查看是否生成相关文件\nls /proc/sys/net/bridge\n```\n\n9、资源配置文件\n\n`/etc/security/limits.conf` 是 Linux 资源使用配置文件，用来限制用户对系统资源的使用\n\n```bash\necho \"* soft nofile 65536\" >> /etc/security/limits.conf\necho \"* hard nofile 65536\" >> /etc/security/limits.conf\necho \"* soft nproc 65536\"  >> /etc/security/limits.conf\necho \"* hard nproc 65536\"  >> /etc/security/limits.conf\necho \"* soft memlock unlimited\"  >> /etc/security/limits.conf\necho \"* hard memlock unlimited\"  >> /etc/security/limits.conf\n```\n\n10、安装依赖包以及相关工具\n\n```bash\nyum install -y epel-release\n\nyum install -y yum-utils nfs-utils expect device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl\n```\n\n# 五、安装Docker (所有节点)\n\n### 1、移除之前安装过的Docker\n```bash\nsudo yum remove -y docker \\\n                  docker-client \\\n                  docker-client-latest \\\n                  docker-common \\\n                  docker-latest \\\n                  docker-latest-logrotate \\\n                  docker-logrotate \\\n                  docker-selinux \\\n                  docker-engine-selinux \\\n                  docker-ce-cli \\\n                  docker-engine\n                  \n# 查看还有没有存在的docker组件\nrpm -qa|grep docker\n\n# 有则通过命令 yum -y remove XXX 来删除,比如：\nyum remove docker-ce-cli\n```\n\n### 2、配置docker的yum源\n\n下面两个镜像源选择其一即可，由于官方下载速度比较慢，推荐用阿里镜像源\n\n- 阿里镜像源\n\n```bash\nyum install -y yum-utils \\\ndevice-mapper-persistent-data \\\nlvm2\nyum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo\n```\n\n- Docker官方镜像源\n```bash\nyum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo\n```\n\n### 3、安装Docker：\n\n```\n# 显示docker-ce所有可安装版本：\nyum list docker-ce --showduplicates | sort -r\n\n# 安装指定docker版本\nyum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io\n\n# 启动docker并设置docker开机启动\nsystemctl enable docker\nsystemctl start docker\n\n# 确认一下iptables\n确认一下iptables filter表中FOWARD链的默认策略(pllicy)为ACCEPT。\n\niptables -nvL\n\nChain FORWARD (policy ACCEPT 0 packets, 0 bytes)\n pkts bytes target     prot opt in     out     source               destination         \n    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           \n    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED\n    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0      \n    
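\n\n# 补充示例：也可以只查看 FORWARD 链的默认策略，第一行输出为 -P FORWARD ACCEPT 即表示放行\niptables -S FORWARD | head -1\n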
\nDocker从1.13版本开始调整了默认的防火墙规则，禁用了iptables filter表中FOWARD链，这样会引起Kubernetes集群中跨Node的Pod无法通信。但这里通过安装docker 1806，发现默认策略又改回了ACCEPT，这个不知道是从哪个版本改回的，因为我们线上版本使用的1706还是需要手动调整这个策略的。\n\n# 执行下面命令\niptables -P FORWARD ACCEPT\n\n# 修改docker的配置\nvim /usr/lib/systemd/system/docker.service\n\n# 增加下面命令(ExecReload后面新增ExecStartPost=...)\n...\nExecReload=/bin/kill -s HUP $MAINPID\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\n...\n\n# 修改docker Cgroup Driver为systemd\n# sed -i \"s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g\" /usr/lib/systemd/system/docker.service\n\n# 设置 docker 镜像，提高 docker 镜像下载速度和稳定性\ncurl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io\n\n# 或者直接配置文件docker加速器\ncat > /etc/docker/daemon.json << \\EOF\n{\n  \"exec-opts\": [\"native.cgroupdriver=systemd\"],\n  \"registry-mirrors\": [\n    \"https://dockerhub.azk8s.cn\",\n    \"https://i37dz0y4.mirror.aliyuncs.com\"\n  ],\n  \"insecure-registries\": [\"reg.hub.com\"]\n}\nEOF\n\n# 重启Docker\nsystemctl daemon-reload\nsystemctl restart docker\n\ndocker info|grep -i Cgroup\n```\n### 4、docker最终的服务文件\n```\n#注意，有变量的地方需要使用转义符号\n\ncat > /usr/lib/systemd/system/docker.service << EOF\n[Unit]\nDescription=Docker Application Container Engine\nDocumentation=https://docs.docker.com\nBindsTo=containerd.service\nAfter=network-online.target firewalld.service containerd.service\nWants=network-online.target\nRequires=docker.socket\n\n[Service]\nType=notify\n# the default is not to use systemd for cgroups because the delegate issues still\n# exists and systemd currently does not support the cgroup feature set required\n# for containers run by docker\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd\nExecReload=/bin/kill -s HUP \\$MAINPID\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\nTimeoutSec=0\nRestartSec=2\nRestart=always\n\n# Note that StartLimit* options were moved from \"Service\" to \"Unit\" in systemd 229.\n# Both the old, and new location are accepted by systemd 229 and up, so using the old location\n# to make them work for either version of systemd.\nStartLimitBurst=3\n\n# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.\n# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make\n# this option work for either version of systemd.\nStartLimitInterval=60s\n\n# Having non-zero Limit*s causes performance problems due to accounting overhead\n# in the kernel. 
We recommend using cgroups to do container-local accounting.\nLimitNOFILE=infinity\nLimitNPROC=infinity\nLimitCORE=infinity\n\n# Comment TasksMax if your systemd version does not support it.\n# Only systemd 226 and above support this option.\nTasksMax=infinity\n\n# set delegate yes so that systemd does not reset the cgroups of docker containers\nDelegate=yes\n\n# kill only the docker process, not all processes in the cgroup\nKillMode=process\n\n[Install]\nWantedBy=multi-user.target\nEOF\n\n# 重启Docker\nsystemctl daemon-reload\nsystemctl restart docker\nsystemctl enable docker\n```\n\n# 六、安装kubeadm、kubelet\n\n### 1、配置yum源用于安装：\n\n- 1、配置国内yum源\n```\ncat <<EOF > /etc/yum.repos.d/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\ngpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg\nEOF\n\n# 安装kubelet、kubeadm、kubectl\nyum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes\n\nsystemctl daemon-reload\nsystemctl restart kubelet.service\nsystemctl enable kubelet.service\n```\n- 2、kubeadm 官方镜像源\n```\ncat <<EOF > /etc/yum.repos.d/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64\nenabled=1\ngpgcheck=1\nrepo_gpgcheck=1\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg\nEOF\n\n# 安装kubelet、kubeadm、kubectl\nyum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes\n\nsystemctl daemon-reload\nsystemctl restart kubelet.service\nsystemctl enable kubelet.service\n```\n\n\n### 2、安装kubelet\n\n```\n# 需要在每台机器上都安装以下的软件包：\n     kubeadm: 用来初始化集群的指令。\n     kubelet: 在集群中的每个节点上用来启动 pod 和 container 等。\n     kubectl: 用来与集群通信的命令行工具。\n\n# 查看kubelet版本列表\nyum list kubelet --showduplicates | sort -r \n\n# 安装kubelet\nyum install -y kubelet-1.16.2\n\n# 启动kubelet并设置开机启动\nsystemctl daemon-reload\nsystemctl enable kubelet \nsystemctl restart kubelet\n\n# 检查状态\n检查状态,发现是failed状态，正常，kubelet会10秒重启一次，需等下面完成初始化master节点后即可正常\nsystemctl status kubelet\n\n# 查看kubelet日志\njournalctl -u kubelet --no-pager\n```\n\n### 3、安装kubeadm\n\n```\n# 负责初始化集群\n# 1、查看kubeadm版本列表\nyum list kubeadm --showduplicates | sort -r \n\n# 2、安装kubeadm\nyum install -y kubeadm-1.16.2\n\n# 安装 kubeadm 时候会默认安装 kubectl ，所以不需要单独安装kubectl\n\n# 3、重启服务器\n为了防止发生某些未知错误，这里我们重启下服务器，方便进行后续操作\nreboot\n```\n\n# 七、初始化第一个kubernetes master节点\n\n以 `root` 身份在 `k8s-master-01` 机器上执行\n\n初始化 `master` 节点时，如果因为中间某些步骤的配置出错，想要重新初始化 `master` 节点，请先执行 `yes | kubeadm reset` 操作\n\n```bash\n#查看初始化配置文件\n\nkubeadm config view\n```\n\n1、精简配置文件初始化\n\n```\n# 替换 apiserver.demo 为 您想要的 dnsName\nexport APISERVER_NAME=master.k8s.io\n\n# Kubernetes 容器组所在的网段，该网段安装完成后，由 kubernetes 创建，事先并不存在于您的物理网络中\nexport VER=v1.16.2\nexport POD_SUBNET=10.244.0.0/16\nexport SVC_SUBNET=10.96.0.0/12\n\nrm -f ./kubeadm-config.yaml\ncat <<EOF > ./kubeadm-config.yaml\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nkubernetesVersion: ${VER}\n#imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers\ncontrolPlaneEndpoint: \"${APISERVER_NAME}:6443\"\nnetworking:\n  serviceSubnet: \"${SVC_SUBNET}\"\n  podSubnet: \"${POD_SUBNET}\"\n  dnsDomain: \"cluster.local\"\nEOF\n\n# kubeadm init\n# 根据您服务器网速的情况，您需要等候 3 - 10 分钟\nkubeadm init --config=kubeadm-config.yaml --upload-certs\n\n# 配置 kubectl\nrm -rf 
/root/.kube/\nmkdir /root/.kube/\nyes | cp -i /etc/kubernetes/admin.conf /root/.kube/config\n```\n\n2、详细配置文件初始化\n\n```\n# 1、创建kubeadm配置的yaml文件\n\nrm -f ./kubeadm-config.yaml\n\nexport VER=v1.16.2\nexport MASTER_NODE1=10.10.0.24\nexport APISERVER_NAME=master.k8s.io\nexport POD_SUBNET=10.244.0.0/16\nexport SVC_SUBNET=10.96.0.0/12\n\ncat <<EOF > ./kubeadm-config.yaml\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- groups:\n  - system:bootstrappers:kubeadm:default-node-token\n  token: abcdef.0123456789abcdef\n  ttl: 24h0m0s\n  usages:\n  - signing\n  - authentication\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: ${MASTER_NODE1}  #这里填写第一个初始化的master的ip\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  name: k8s-master-01 #注意这里需要调整为自己的节点\n  taints:\n  - effect: NoSchedule\n    key: node-role.kubernetes.io/master\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nclusterName: kubernetes\nkubernetesVersion: ${VER}\ncertificatesDir: /etc/kubernetes/pki\ncontrollerManager: {}\ncontrolPlaneEndpoint: \"${APISERVER_NAME}:16443\" # 这里写vip的地址或域名加上端口\nimageRepository: k8s.gcr.io\n#imageRepository: registry.aliyuncs.com/google_containers # 使用阿里云镜像\napiServer:\n  timeoutForControlPlane: 4m0s\n  certSANs:\n    - k8s-master-01\n    - k8s-master-02\n    - k8s-master-03\n    - master.k8s.io\n    - 10.10.1.100\n    - 10.10.0.24\n    - 10.10.0.32\n    - 10.10.0.23\n    - 127.0.0.1\ndns:\n  type: CoreDNS\netcd:\n  local:\n    dataDir: /var/lib/etcd\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: ${POD_SUBNET}\n  serviceSubnet: ${SVC_SUBNET}\nscheduler: {}\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: ipvs # kube-proxy 模式\nEOF\n\nkubeadm init --config=kubeadm-config.yaml --upload-certs\n\n以下两个地方设置： \n- certSANs： 虚拟ip地址（为了安全起见，把所有集群地址都加上） \n- controlPlaneEndpoint： VIP:端口号\n\n配置说明：\n    imageRepository： registry.aliyuncs.com/google_containers (使用阿里云镜像仓库)\n    podSubnet： 10.244.0.0/16 (#pod地址池)\n    serviceSubnet： 10.96.0.0/12 (#service地址池)\n```\n\n3、查看初始化配置文件\n```\n# 查看kubeadm配置文件\nroot># kubeadm config view\napiServer:\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: master.k8s.io:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  local:\n    dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.2\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n```\n\n### 2、初始化第一个master节点\n```\nkubeadm init --config=kubeadm-config.yaml --upload-certs   #使用这个就不用做拷贝证书的操作\n```\n日志\n```\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of the control-plane node running the following command on each as root:\n\n  kubeadm join master.k8s.io:16443 --token wf0eoe.liqcp0nhtlov4ioi \\\n    --discovery-token-ca-cert-hash 
sha256:e43bbb08bb5decae1ce0001f2988ff79095e6be5a3dea77a7c6af180562c7e56 \\\n    --control-plane --certificate-key 6054323448a1aeb661b78763262db5c30e12026c54341400d48401a853194ec2\n\nPlease note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use \n\"kubeadm init phase upload-certs --upload-certs\" to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join master.k8s.io:16443 --token wf0eoe.liqcp0nhtlov4ioi \\\n    --discovery-token-ca-cert-hash sha256:e43bbb08bb5decae1ce0001f2988ff79095e6be5a3dea77a7c6af180562c7e56\n```\n### 执行结果中\n\n用于初始化第二、三个 master 节点\n\n```\n#初始化第二个master节点\nexport MASTER_NODE2=10.10.0.32\nkubeadm join master.k8s.io:16443 --apiserver-advertise-address ${MASTER_NODE2} --token abcdef.0123456789abcdef \\\n    --discovery-token-ca-cert-hash sha256:ab6da874166785bfe75acc4d6fd622bf821a7451837332e3a21a6106e346c8d5 \\\n    --control-plane --certificate-key 13284467f0141778898ffa33d340c0598cb757c6aa016f00da2165cd3eab4523\n\n#初始化第三个master节点    \nexport MASTER_NODE3=10.10.0.23\nkubeadm join master.k8s.io:16443 --apiserver-advertise-address ${MASTER_NODE3} --token abcdef.0123456789abcdef \\\n    --discovery-token-ca-cert-hash sha256:ab6da874166785bfe75acc4d6fd622bf821a7451837332e3a21a6106e346c8d5 \\\n    --control-plane --certificate-key 13284467f0141778898ffa33d340c0598cb757c6aa016f00da2165cd3eab4523\n```\n\n用于初始化 worker 节点\n```\nkubeadm join master.k8s.io:16443 --token abcdef.0123456789abcdef \\\n    --discovery-token-ca-cert-hash sha256:ab6da874166785bfe75acc4d6fd622bf821a7451837332e3a21a6106e346c8d5\n```\n\n### 3、配置kubectl环境变量\n```bash\n# 配置环境变量\n\nrm -rf $HOME/.kube\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n# 指令补全\n\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n\n### 4、查看组件状态\n```bash\nkubectl get cs\n\nNAME                 STATUS    MESSAGE              ERROR\ncontroller-manager   Healthy   ok                   \nscheduler            Healthy   ok                   \netcd-0               Healthy   {\"health\": \"true\"}   \n\n# 查看pod状态\n[root@k8s-master-01 ~]# kubectl get pods --namespace=kube-system\nNAME                                    READY   STATUS    RESTARTS   AGE\ncoredns-78d4cf999f-5zt5z                0/1     Pending   0          7m32s    ---coredns没有启动\ncoredns-78d4cf999f-mkgsx                0/1     Pending   0          7m32s    ---coredns没有启动\netcd-k8s-master-01                      1/1     Running   0          6m39s\nkube-apiserver-k8s-master-01            1/1     Running   0          6m43s\nkube-controller-manager-k8s-master-01   1/1     Running   0          6m32s\nkube-proxy-88s74                        1/1     Running   0          7m32s\nkube-scheduler-k8s-master-01            1/1     Running   0          6m45s\n\n可以看到coredns没有启动，这是由于还没有配置网络插件，接下来配置下后再重新查看启动状态\n\n#检查ETCD服务\ndocker exec -it $(docker ps |grep etcd_etcd|awk '{print $1}') sh\netcdctl --endpoints=https://192.168.56.11:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.crt --cert-file=/etc/kubernetes/pki/etcd/server.crt --key-file=/etc/kubernetes/pki/etcd/server.key member list\n\netcdctl --endpoints=https://192.168.56.11:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.crt --cert-file=/etc/kubernetes/pki/etcd/server.crt 
--key-file=/etc/kubernetes/pki/etcd/server.key cluster-health\n```\n# 八、安装网络插件\n\n### 1、安装 calico 网络插件\n```\n# 安装 calico 网络插件\n# 参考文档 https://docs.projectcalico.org/v3.9/getting-started/kubernetes/\n\nexport POD_SUBNET=10.244.0.0/16\nrm -f calico.yaml\nwget https://docs.projectcalico.org/v3.9/manifests/calico.yaml\nsed -i \"s#192\\.168\\.0\\.0/16#${POD_SUBNET}#\" calico.yaml\nkubectl apply -f calico.yaml\n```\n\n### 2、等待一会时间，再次查看各个pods的状态\n```\n[root@k8s-master-01 ~]# kubectl get pods --namespace=kube-system\nNAME                                    READY   STATUS    RESTARTS   AGE\ncoredns-78d4cf999f-5zt5z                1/1     Running   0          12m    ---coredns启动成功\ncoredns-78d4cf999f-mkgsx                1/1     Running   0          12m    ---coredns启动成功\netcd-k8s-master-01                      1/1     Running   0          11m\nkube-apiserver-k8s-master-01            1/1     Running   0          12m\nkube-controller-manager-k8s-master-01   1/1     Running   0          11m\nkube-flannel-ds-amd64-7lj6m             1/1     Running   0          13s\nkube-proxy-88s74                        1/1     Running   0          12m\nkube-scheduler-k8s-master-01            1/1     Running   0          12m\n```\n\n# 九、加入集群\n\n### 1、Master加入集群构成高可用\n```\n复制秘钥到各个节点\n\n在master01 服务器上执行下面命令，将kubernetes相关文件复制到 master02、master03\n\n如果其他节点为初始化第一个master节点，则将该节点的配置文件复制到其余两个主节点，例如master03为第一个master节点，则将它的k8s配置复制到master02和master01。\n```\n- 复制文件到 master02\n```\nssh root@master02.k8s.io mkdir -p /etc/kubernetes/pki/etcd\nscp /etc/kubernetes/admin.conf root@master02.k8s.io:/etc/kubernetes\nscp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master02.k8s.io:/etc/kubernetes/pki\nscp /etc/kubernetes/pki/etcd/ca.* root@master02.k8s.io:/etc/kubernetes/pki/etcd\n```\n- 复制文件到 master03\n\n```\nssh root@master03.k8s.io mkdir -p /etc/kubernetes/pki/etcd\nscp /etc/kubernetes/admin.conf root@master03.k8s.io:/etc/kubernetes\nscp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master03.k8s.io:/etc/kubernetes/pki\nscp /etc/kubernetes/pki/etcd/ca.* root@master03.k8s.io:/etc/kubernetes/pki/etcd\n```\n- master节点加入集群\n\n&#8195;master02 和 master03 服务器上都执行加入集群操作\n\n```bash\nkubeadm join master.k8s.io:16443 --token i77yg1.1eype0c53jsanoge --discovery-token-ca-cert-hash sha256:8f0a817012ab333a057b6a7410e65971be20b95c1b75fc4015f8f3b6785f626f --experimental-control-plane\n```\n&#8195;如果加入失败想重新尝试，请输入 kubeadm reset 命令清除之前的设置，重新执行从“复制秘钥”和“加入集群”这两步\n\n&#8195;如果是master加入，请在最后面加上 –experimental-control-plane 这个参数\n\n```bash\n# 显示安装过程:\n\nThis node has joined the cluster and a new control plane instance was created:\n\n* Certificate signing request was sent to apiserver and approval was received.\n* The Kubelet was informed of the new secure connection details.\n* Master label and taint were applied to the new node.\n* The Kubernetes control plane instances scaled up.\n* A new etcd member was added to the local/stacked etcd cluster.\n\nTo start administering your cluster from this node, you need to run the following as a regular user:\n\n        mkdir -p $HOME/.kube\n        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n        sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nRun 'kubectl get nodes' to see this node join the cluster.\n```\n- 配置kubectl环境变量\n```bash\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n# 指令补全\n\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> 
~/.bashrc\n```\n\n### 2、node节点加入集群\n\n&#8195;除了让master节点加入集群组成高可用外，slave节点也要加入集群，承担工作负载。\n\n&#8195;这里将k8s-node-01、k8s-node-02、k8s-node-03加入集群\n\n&#8195;输入初始化k8s master时提示的加入命令，如下：\n\n```\nkubeadm join master.k8s.io:16443 --token i77yg1.1eype0c53jsanoge --discovery-token-ca-cert-hash sha256:8f0a817012ab333a057b6a7410e65971be20b95c1b75fc4015f8f3b6785f626f\n```\n&#8195;node节点加入时，不需要加上 --experimental-control-plane 这个参数\n\n### 3、如果忘记加入集群的token和sha256 (如正常则跳过)\n\n- 显示获取token列表\n\n```\nkubeadm token list\n```\n\n默认情况下 Token 的过期时间是24小时，Token 过期以后，可以输入以下命令生成新的 Token\n\n```\nkubeadm token create\n```\n\n- 获取ca证书sha256编码hash值\n\n```\nopenssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'\n```\n\n- 拼接加入命令\n\n```\nkubeadm join master.k8s.io:16443 --token 882ik4.9ib2kb0eftvuhb58 --discovery-token-ca-cert-hash sha256:0b1a836894d930c8558b350feeac8210c85c9d35b6d91fde202b870f3244016a\n\n# 如果是master加入，请在最后面加上 --experimental-control-plane 这个参数\n```\n\n### 4、查看各个节点加入集群情况\n```\nkubectl get nodes -o wide\n```\n\n# 十、从集群中删除 Node\n\n- Master节点：\n\n```\nkubectl drain <node name> --delete-local-data --force --ignore-daemonsets\nkubectl delete node <node name>\n```\n\n- Slave节点：\n\n```\nkubeadm reset\n```\n\n## 初始化失败\n```bash\nyes | kubeadm reset\nifconfig cni0 down\nip link delete cni0\nifconfig flannel.1 down\nip link delete flannel.1\nrm -rf /var/lib/cni/\nrm -rf /var/lib/etcd/*\n```\n\n# 十一、安装Kubernetes Dashboard 2.0\n```\n#安装\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml\n\n#卸载\nkubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml\n```\n\n参考资料：\n\nhttp://www.mydlq.club/article/4/\n\nhttps://kuboard.cn/install/install-kubernetes.html#%E5%88%9D%E5%A7%8B%E5%8C%96%E7%AC%AC%E4%B8%80%E4%B8%AAmaster%E8%8A%82%E7%82%B9\n\nhttps://blog.51cto.com/fengwan/2426528?source=dra  kubeadm搭建高可用kubernetes 1.15.1\n\nhttps://segmentfault.com/a/1190000018741112?utm_source=tag-newest  Kubernetes的几种主流部署方式02-kubeadm部署高可用集群\n"
  },
  {
    "path": "kubeadm/K8S-V1.16.2-开启防火墙-Flannel.md",
    "content": "Table of Contents\n=================\n\n   * [一、防火墙配置](#一防火墙配置)\n   * [二、初始化](#二初始化)\n   * [三、初始化集群](#三初始化集群)\n      * [1、命令行初始化](#1命令行初始化)\n      * [2、通过配置文件进行初始化](#2通过配置文件进行初始化)\n      * [3、初始化进行的操作](#3初始化进行的操作)\n      * [4、单独部署coredns（选择操作）](#4单独部署coredns选择操作)\n      * [5、集群移除节点](#5集群移除节点)\n      * [6、kube-proxy开启ipvs](#6kube-proxy开启ipvs)\n   * [四、Master操作](#四master操作)\n   * [五、Node操作](#五node操作)\n   * [六、集群操作](#六集群操作)\n   * [七、网络插件部署](#七网络插件部署)\n      * [1、master上部署flannel插件](#1master上部署flannel插件)\n      * [2、master上部署calico插件](#2master上部署calico插件)\n      * [3、性能对比](#3性能对比)\n   * [八、安装 Dashboard](#八安装-dashboard)\n      * [1、下载yaml文件](#1下载yaml文件)\n      * [2、修改配置](#2修改配置)\n      * [3、查看dashboard](#3查看dashboard)\n      * [4、然后创建一个具有全局所有权限的用户来登录Dashboard：(admin.yaml)](#4然后创建一个具有全局所有权限的用户来登录dashboardadminyaml)\n   * [九、问题排查](#九问题排查)\n      * [1、coredns异常问题](#1coredns异常问题)\n         * [1.1、解决办法](#11解决办法)\n      * [2、kubelet异常问题1](#2kubelet异常问题1)\n      * [3、kubelet异常问题2](#3kubelet异常问题2)\n      \n# 一、防火墙配置\n\n```bash\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\n\nyum install iptables iptables-services -y\n\ncat > /etc/sysconfig/iptables << \\EOF\n# Generated by iptables-save v1.4.21 on Thu Aug  1 01:26:09 2019\n*filter\n:INPUT ACCEPT [0:0]\n:FORWARD ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:RH-Firewall-1-INPUT - [0:0]\n-A INPUT -j RH-Firewall-1-INPUT\n-A FORWARD -j RH-Firewall-1-INPUT\n-A RH-Firewall-1-INPUT -i lo -j ACCEPT\n-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.0/24 -p tcp -m tcp --dport 22 -j ACCEPT\n-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 22 -j DROP\n### k8s ###\n-A RH-Firewall-1-INPUT -s 192.168.56.11/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.12/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.13/32 -j ACCEPT\n# serviceSubnet rules\n-A RH-Firewall-1-INPUT -s 10.96.0.0/12 -j ACCEPT\n# podSubnet rules\n-A RH-Firewall-1-INPUT -s 10.244.0.0/16 -j ACCEPT\n# keepalived rules\n-A RH-Firewall-1-INPUT -p vrrp -j ACCEPT\n# port rules\n-A RH-Firewall-1-INPUT -s 192.168.56.1/32 -p tcp -m multiport --dports 80,443,1080,6443,16443,30000:32767 -j ACCEPT\n### k8s ###\n-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT\n-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited\nCOMMIT\n# Completed on Thu Aug  1 01:26:09 2019\nEOF\n\nsystemctl restart iptables.service\nsystemctl enable iptables.service\n\niptables -nvL\n```\n\n# 二、初始化\n\n```bash\ncat > /etc/hosts << \\EOF\n127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n192.168.56.11 linux-node1 linux-node1.example.com\n192.168.56.12 linux-node2 linux-node2.example.com\n192.168.56.13 linux-node3 linux-node3.example.com\nEOF\n\nsystemctl stop firewalld\nsystemctl disable firewalld\n\nsetenforce 0\nsed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config\nsed -i 's/SELINUXTYPE=.*/SELINUXTYPE=disabled/g' /etc/selinux/config\n\n# 关闭 swap\nswapoff -a\n#sed -ir 's/.*swap.*/#&/' /etc/fstab\n#或\nyes | cp /etc/fstab /etc/fstab_bak\ncat /etc/fstab_bak |grep -v swap > /etc/fstab\n\n#export Time=`date \"+%Y%m%d%H%M%S\"`\n#cp /etc/fstab /etc/fstab_$Time\n\ncat > /etc/sysctl.d/k8s.conf << \\EOF\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nvm.swappiness = 0\nEOF\n\n#加载 br_netfilter 模块\nmodprobe br_netfilter\nsysctl -p 
/etc/sysctl.d/k8s.conf\n\n#创建/etc/sysconfig/modules/ipvs.modules文件,保证在节点重启后能自动加载所需模块\ncat > /etc/sysconfig/modules/ipvs.modules <<EOF\n#!/bin/bash\nmodprobe -- ip_vs\nmodprobe -- ip_vs_rr\nmodprobe -- ip_vs_wrr\nmodprobe -- ip_vs_sh\nmodprobe -- nf_conntrack_ipv4\nEOF\n\nchmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4\n\nyum install -y ipset ipvsadm\n\nyum install chrony -y\nsystemctl enable chronyd\nsystemctl restart chronyd\nchronyc sources\n\nyum install -y yum-utils \\\n  device-mapper-persistent-data \\\n  lvm2\n  \nyum-config-manager \\\n    --add-repo \\\n    https://download.docker.com/linux/centos/docker-ce.repo\n    \n#yum list docker-ce --showduplicates | sort -r\n\nyum install -y docker-ce-18.09.9-3.el7.x86_64\nsystemctl start docker\nsystemctl enable docker\n\nmkdir -p /data0/docker-data\n\ncat > /etc/docker/daemon.json << \\EOF\n{\n  \"exec-opts\": [\"native.cgroupdriver=systemd\"],\n  \"data-root\": \"/data0/docker-data\",\n  \"registry-mirrors\" : [\n    \"https://ot2k4d59.mirror.aliyuncs.com/\"\n  ],\n  \"insecure-registries\": [\"reg.hub.com\"]\n}\nEOF\n\nsystemctl daemon-reload\nsystemctl restart docker\n\ncat <<EOF > /etc/yum.repos.d/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\ngpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg\n        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg\nEOF\n\nyum install -y kubelet-1.16.2-0 kubeadm-1.16.2-0 kubectl-1.16.2-0 --disableexcludes=kubernetes\n\nkubeadm version\nsystemctl daemon-reload\nsystemctl restart kubelet.service\nsystemctl enable kubelet.service\nsystemctl status kubelet\n\n#查看kubelet日志\njournalctl -f -u kubelet\n\n#kubelet.service服务位置\nls -l /lib/systemd/system/kubelet.service\n```\n\n# 三、初始化集群\n\n## 1、命令行初始化\n\n```bash\n#master节点初始化指令\nkubeadm init \\\n  --apiserver-advertise-address=192.168.56.11 \\\n  --image-repository registry.aliyuncs.com/google_containers \\\n  --kubernetes-version v1.16.2 \\\n  --apiserver-bind-port=6443 \\\n  --service-cidr=10.96.0.0/12 \\\n  --pod-network-cidr=10.244.0.0/16  #这里使用这个是因为官方flannel使用的这个段地址，不然的话,kube-flannel.yml那里需要调整\n\n#其他节点可以先指定image源，先下载需要的镜像\nkubeadm config images pull --image-repository registry.aliyuncs.com/google_containers\n\n#查看集群初始化配置\nkubeadm config view\n\n#获取加入集群的指令\nkubeadm token create --print-join-command\n\nkubeadm join 192.168.56.11:6443 --token 5avfk1.fwui1smk5utcu7m9     --discovery-token-ca-cert-hash sha256:6730e91a516d8bf3e26d8f5eddd6409a224f8703b94f6ecde2b1fd7481bbbd25\n\n#集群初始化如果遇到问题，可以使用下面的命令进行清理\nyes | kubeadm reset\nifconfig cni0 down\nip link delete cni0\nifconfig flannel.1 down\nip link delete flannel.1\nrm -rf /var/lib/cni/\nrm -f $HOME/.kube/config\n\nsystemctl restart kubelet\nsystemctl status kubelet\njournalctl -f -u kubelet\n```\n\n## 2、通过配置文件进行初始化\n\n```bash\n#在 master 节点配置 kubeadm 初始化文件，可以通过如下命令导出默认的初始化配置：\nroot># kubeadm config print init-defaults > kubeadm.yaml\n```\n\n```bash\n#然后根据我们自己的需求修改配置，比如修改 imageRepository 的值，kube-proxy 的模式为 ipvs\n\n如果是 flannel 网络插件的，需要将 networking.podSubnet 设置为默认的 10.244.0.0/16\n\n如果是 Calico 网络插件的，配置成 Calico 的默认网段 podSubnet: 192.168.0.0/16，这个也可以修改Calico的配置文件调整\n\nrm -f kubeadm.yaml\n\ncat > kubeadm.yaml << \\EOF\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- groups:\n  - system:bootstrappers:kubeadm:default-node-token\n  token: abcdef.0123456789abcdef\n  ttl: 24h0m0s\n  
usages:\n  - signing\n  - authentication\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 192.168.56.11 #修改为主节点 IP\n  bindPort: 6443\n  #controlPlaneEndpoint: 1.1.1.100 #如果前面配置了负载均衡，此处填写vip地址\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  name: linux-node1.example.com\n  taints:\n  - effect: NoSchedule\n    key: node-role.kubernetes.io/master\n---\napiServer:\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrollerManager: {}\ndns:\n  type: CoreDNS #dns 类型\netcd:\n  local:\n    dataDir: /var/lib/etcd\n#imageRepository: k8s.gcr.io\nimageRepository: registry.aliyuncs.com/google_containers #国内不能访问 Google，修改为阿里云\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.2 # 修改版本号\nnetworking:\n  dnsDomain: cluster.local\n  # 配置成 flannel 的默认网段\n  serviceSubnet: 10.96.0.0/12\n  podSubnet: 10.244.0.0/16\nscheduler: {}\n---\n# 开启 IPVS 模式\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: ipvs # kube-proxy 模式\nEOF\n\nkubeadm init --config kubeadm.yaml\n```\n\n## 3、初始化进行的操作\n\n```bash\n初始化操作主要经历了下面15个步骤，每个阶段均输出均使用[步骤名称]作为开头：\n\n    1、[init]：指定版本进行初始化操作\n    2、[preflight] ：初始化前的检查和下载所需要的Docker镜像文件。\n    3、[kubelet-start] ：生成kubelet的配置文件”/var/lib/kubelet/config.yaml”，没有这个文件kubelet无法启动，所以初始化之前的kubelet实际上启动失败。\n    4、[certificates]：生成Kubernetes使用的证书，存放在/etc/kubernetes/pki目录中。\n    5、[kubeconfig] ：生成 KubeConfig 文件，存放在/etc/kubernetes目录中，组件之间通信需要使用对应文件。\n    6、[control-plane]：使用/etc/kubernetes/manifest目录下的YAML文件，安装 Master 组件。\n    7、[etcd]：使用/etc/kubernetes/manifest/etcd.yaml安装Etcd服务。\n    8、[wait-control-plane]：等待control-plan部署的Master组件启动。\n    9、[apiclient]：检查Master组件服务状态。\n    10、[uploadconfig]：更新配置\n    11、[kubelet]：使用configMap配置kubelet。\n    12、[patchnode]：更新CNI信息到Node上，通过注释的方式记录。\n    13、[mark-control-plane]：为当前节点打标签，打了角色Master，和不可调度标签，这样默认就不会使用Master节点来运行Pod。\n    14、[bootstrap-token]：生成token记录下来，后边使用kubeadm join往集群中添加节点时会用到\n    15、[addons]：安装附加组件CoreDNS和kube-proxy\n    \nkubectl默认会在执行的用户家目录下面的.kube目录下寻找config文件。这里是将在初始化时[kubeconfig]步骤生成的admin.conf拷贝到.kube/config。\n```\n\n## 4、单独部署coredns（选择操作）\n\n```bash\n# 不依赖kubeadm的方式，适用于不是使用kubeadm创建的k8s集群，或者kubeadm初始化集群之后，删除了dns相关部署\n# 在calico网络中也配置一个coredns # 10.96.0.10 为k8s官方指定的kube-dns地址\nrm -f coredns.yaml.sed deploy.sh coredns.yml\nwget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed\nwget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh\nchmod +x deploy.sh\n./deploy.sh -i 10.10.0.10 > coredns.yml  #这里从--service-cidr=10.10.0.0/16中选用10.10.0.10作为coredns地址\nkubectl apply -f coredns.yml\n\n# 查看\nkubectl get pods --namespace kube-system\nkubectl get svc --namespace kube-system\n\n#删除coredns\nkubectl delete deployment coredns -n kube-system\nkubectl delete svc kube-dns -n kube-system\nkubectl delete cm coredns -n kube-system\n```\n\n## 5、集群移除节点\n\n```bash\n1、#移除work节点\n在准备移除的 worker 节点上执行\nkubeadm reset\n\n2、在第一个 master 节点 demo-master-a-1 上执行\nkubectl delete node demo-worker-x-x\n#worker 节点的名字可以通过在第一个 master 节点 demo-master-a-1 上执行 kubectl get nodes 命令获得\n```\n\n## 6、kube-proxy开启ipvs\n\n```bash\n1、#修改ConfigMap的kube-system/kube-proxy中的config.conf，把 mode: \"\" 改为mode: “ipvs\" 保存退出即可\n\nroot># kubectl edit cm kube-proxy -n kube-system\nconfigmap/kube-proxy edited\n\n2、#删除之前的proxy pod\nroot># kubectl get pod -n kube-system |grep kube-proxy |awk '{system(\"kubectl delete pod \"$1\" -n kube-system\")}'\n\n3、#查看proxy运行状态\nroot># kubectl get pod -n 
kube-system | grep kube-proxy\n\n4、#查看日志,如果有 `Using ipvs Proxier.` 说明kube-proxy的ipvs 开启成功!\nroot># kubectl logs kube-proxy-54qnw -n kube-system\nI0518 20:24:09.319160       1 server_others.go:176] Using ipvs Proxier.\nW0518 20:24:09.319751       1 proxier.go:386] IPVS scheduler not specified, use rr by default\nI0518 20:24:09.320035       1 server.go:562] Version: v1.14.2\nI0518 20:24:09.334372       1 conntrack.go:52] Setting nf_conntrack_max to 131072\nI0518 20:24:09.334853       1 config.go:102] Starting endpoints config controller\nI0518 20:24:09.334916       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller\nI0518 20:24:09.334945       1 config.go:202] Starting service config controller\nI0518 20:24:09.334976       1 controller_utils.go:1027] Waiting for caches to sync for service config controller\nI0518 20:24:09.435153       1 controller_utils.go:1034] Caches are synced for service config controller\nI0518 20:24:09.435271       1 controller_utils.go:1034] Caches are synced for endpoints config controller\n```\n\n# 四、Master操作\n\n```bash\n#将 master 节点上面的 $HOME/.kube/config 文件拷贝到 node 节点对应的文件中\nmkdir -p $HOME/.kube\nyes | cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nchown $(id -u):$(id -g) $HOME/.kube/config\n\nscp $HOME/.kube/config root@linux-node2:$HOME/.kube/config\nscp $HOME/.kube/config root@linux-node3:$HOME/.kube/config\n\n#指令补全\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n\n# 五、Node操作\n\n```bash\n#node节点操作\nmkdir -p $HOME/.kube\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n#加入集群\nkubeadm join 192.168.56.11:6443 --token 5avfk1.fwui1smk5utcu7m9     --discovery-token-ca-cert-hash sha256:6730e91a516d8bf3e26d8f5eddd6409a224f8703b94f6ecde2b1fd7481bbbd25\n```\n\n# 六、集群操作\n\n```bash\n#批量重启docker\ndocker restart `docker ps -a -q` \n\nroot># kubectl get nodes\nNAME                      STATUS     ROLES    AGE     VERSION\nlinux-node1.example.com   NotReady   master   11m     v1.15.3\nlinux-node2.example.com   NotReady   <none>   5m9s    v1.15.3\nlinux-node3.example.com   NotReady   <none>   4m58s   v1.15.3\n\n可以看到是 NotReady 状态，这是因为还没有安装网络插件，接下来安装网络插件，可以在文档 https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ 中选择我们自己的网络插件，这里我们安装 flannel:\n\niptables -I RH-Firewall-1-INPUT -s 10.96.0.0/16 -j ACCEPT\nservice iptables save\n\nroot># kubectl get pods -n kube-system\nNAME                                              READY   STATUS    RESTARTS   AGE\ncoredns-5c98db65d4-mk254                          1/1     Running   0          14m\ncoredns-5c98db65d4-ntz98                          1/1     Running   0          14m\netcd-linux-node1.example.com                      1/1     Running   0          13m\nkube-apiserver-linux-node1.example.com            1/1     Running   0          13m\nkube-controller-manager-linux-node1.example.com   1/1     Running   0          13m\nkube-flannel-ds-amd64-6kx7m                       1/1     Running   0          11m\nkube-flannel-ds-amd64-cqfnb                       1/1     Running   0          11m\nkube-flannel-ds-amd64-thxx2                       1/1     Running   0          11m\nkube-proxy-gdtjg                                  1/1     Running   0          12m\nkube-proxy-lcscl                                  1/1     Running   0          14m\nkube-proxy-sb7d8                                  1/1     Running   0          12m\nkube-scheduler-linux-node1.example.com            1/1     
Running   0          13m\nkubernetes-dashboard-fcfb4cbc-dqbq9               1/1     Running   0          4m43s\n\nkubectl describe pod/coredns-5c98db65d4-mk254 -n kube-system\n\n#创建Deployment\nkubectl run --image=nginx nginx-web-1 --image-pull-policy='IfNotPresent' --replicas=3\n\n#以不同方式暴露出去\nkubectl expose deployment nginx-web-1 --port=80 --target-port=80\nkubectl expose deployment nginx-web-1 --port=80 --target-port=80 --type=NodePort\n\nroot># kubectl exec -it nginx-web-1-5cc49f46bc-kn46r -- \\\n               sh -c \"echo hello>/usr/share/nginx/html/index.html\"\n\nroot># kubectl get svc -A\ndefault       nginx-web-1   NodePort    10.10.43.53   <none>        80:30163/TCP             101s\n\nroot># kubectl get endpoints\nnginx-web-1   10.244.154.193:80,10.244.44.193:80,10.244.89.129:80   5m27s\n\nroot># curl 10.10.43.53   \nhello\n\n#显示iptables规则(注意这里kube-proxy需要使用ipvs模式，上面主机预设的iptables策略才生效)\niptables -nvL --line-number\n\n#删除规则\niptables -D RH-Firewall-1-INPUT 4\n```\n\n# 七、网络插件部署\n\n## 1、master上部署flannel插件\n\n```bash\n#插件镜像 network: flannel image（因墙的问题，需要从国内源下载）\ndocker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64\ndocker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64  quay.io/coreos/flannel:v0.11.0-amd64\n\nhttps://www.cnblogs.com/horizonli/p/10855666.html\n\n#部署flannel\nrm -f kube-flannel.yml\nwget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml\nsed -i 's#image: quay.io/coreos/flannel:v0.11.0-amd64#image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64#g' kube-flannel.yml\nkubectl apply -f kube-flannel.yml\n\n#另外需要注意的是如果你的节点有多个网卡的话，需要在 kube-flannel.yml 中使用--iface参数指定集群主机内网网卡的名称，否则可能会出现 dns 无法解析。flanneld 启动参数加上--iface=<iface-name>\nargs:\n- --ip-masq\n- --kube-subnet-mgr\n- --iface=eth0\n```\n\n## 2、master上部署calico插件\n\n```bash\nexport POD_SUBNET=10.244.0.0/16\nrm -f calico.yaml\nwget https://docs.projectcalico.org/v3.8/manifests/calico.yaml\nsed -i \"s#192\\.168\\.0\\.0/16#${POD_SUBNET}#\" calico.yaml\nkubectl apply -f calico.yaml\n\nhttps://www.cnblogs.com/goldsunshine/p/10701242.html  k8s网络之Calico网络\n```\n\n## 3、性能对比\n\n```bash\nhttps://www.2cto.com/net/201701/591629.html  kubernetes flannel neutron calico三种网络方案性能测试分析\n```\n\n# 八、安装 Dashboard\n\n使用 dashboard 最好把浏览器的默认语言设置为英文，不然在进入容器操作的时候会有bug，会出现重影,然后k8s v1.16.x之后，需要使用Dashboard v2.0以上的版本，不然出现在error_outline 未知服务器错误 (404)\n\n## 1、下载yaml文件\n\n```bash\n#下载\nwget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml\n```\n\n## 2、修改配置\n```bash\n1、#热更新打补丁的方式修改svc\nkubectl apply -f recommended.yaml\nkubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{\"spec\":{\"type\":\"NodePort\"}}'\nkubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{\"spec\": {\"ports\": [{\"port\":443, \"nodePort\": 30001}]}}'\nkubectl get svc -A|grep kubernetes-dashboard\n\nhttps://www.jianshu.com/p/f38e1767bf19  使用 kubectl patch 更新 API 对象\n\n2、#手动修改recommended.yaml文件，为了方便访问，修改kubernetes-dashboard的Service定义，指定Service的type类型为NodeType，指定nodePort端口\nkubectl delete -f recommended.yaml \n\n---\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\nspec:\n  type: NodePort  # 新增这一行，指定为NodePort方式\n  ports:\n    - port: 443\n      targetPort: 8443\n      nodePort: 30001  # 指定端口为30001\n  selector:\n    k8s-app: kubernetes-dashboard\n---\n\nkubectl apply -f recommended.yaml \n\n#注：dashboard-metrics-scraper的Service不需要修改\n\nKubernetes 
Dashboard 默认部署时，只配置了最低权限的 RBAC\n\n参考文档：https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md\n```\n\n## 3、查看dashboard\n\n```bash\nroot># kubectl  get pod,deploy,svc -n kubernetes-dashboard\nNAME                                             READY   STATUS    RESTARTS   AGE\npod/dashboard-metrics-scraper-76585494d8-ws57d   1/1     Running   0          2m18s\npod/kubernetes-dashboard-6b86b44f87-q26w6        1/1     Running   0          2m18s\n\nNAME                                        READY   UP-TO-DATE   AVAILABLE   AGE\ndeployment.apps/dashboard-metrics-scraper   1/1     1            1           2m18s\ndeployment.apps/kubernetes-dashboard        1/1     1            1           2m18s\n\nNAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE\nservice/dashboard-metrics-scraper   ClusterIP   10.102.114.143   <none>        8000/TCP        2m18s\nservice/kubernetes-dashboard        NodePort    10.111.191.70    <none>        443:30001/TCP   2m19s\n\nroot># curl https://10.111.191.70:443 -k -I\nHTTP/1.1 200 OK\nAccept-Ranges: bytes\nCache-Control: no-store\nContent-Length: 1262\nContent-Type: text/html; charset=utf-8\nLast-Modified: Mon, 14 Oct 2019 16:39:02 GMT\nDate: Wed, 13 Nov 2019 02:25:52 GMT\n\n# 我们可以看到官方的dashboard帮我们启动了web-ui，并且帮我们启动了一个Metric服务\n# 但是dashboard默认使用的https的443端口\n\n然后可以通过上面的 https://NodeIP:30001 端口去访问 Dashboard，要记住使用 https，Chrome不生效可以使用Firefox测试：\n```\n\n## 4、然后创建一个具有全局所有权限的用户来登录Dashboard：(admin.yaml)\n\n```bash\ncat > admin.yaml << \\EOF\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: admin\n  annotations:\n    rbac.authorization.kubernetes.io/autoupdate: \"true\"\nroleRef:\n  kind: ClusterRole\n  name: cluster-admin\n  apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n  name: admin\n  namespace: kube-system\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: admin\n  namespace: kube-system\n  labels:\n    kubernetes.io/cluster-service: \"true\"\n    addonmanager.kubernetes.io/mode: Reconcile\nEOF\n\nkubectl apply -f admin.yaml\n\nkubectl delete -f admin.yaml\n\n#获取token\nkubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin | awk '{print $1}')\n```\n\nhttps://192.168.56.12:31513\n\n然后用上面的base64解码后的字符串作为token登录Dashboard即可： k8s dashboard\n\n最终我们就完成了使用 kubeadm 搭建 v1.15.3 版本的 kubernetes 集群、coredns、ipvs、flannel。 \n\n# 九、问题排查\n\n## 1、coredns异常问题\n\n  ![coredns异常问题](https://github.com/Lancger/opsfull/blob/master/images/coredns-01.png)\n\n```\nE1006 12:30:53.935744       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.10.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.10.0.1:443: connect: no route to host\nE1006 12:30:53.935744       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.10.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.10.0.1:443: connect: no route to host\nlog: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-bccdc95cf-vlqxk.unknownuser.log.ERROR.20191006-123053.1: no such file or directory\n```\n\n### 1.1、解决办法\n\n```\n实际上是主机防火墙的问题，需要添加\niptables -A RH-Firewall-1-INPUT -s 10.10.0.0/16 -j ACCEPT\n\n其他参考\nhttps://medium.com/@cminion/quicknote-kubernetes-networking-issues-78f1e0d06e12\nhttps://github.com/coredns/coredns/issues/2325  \n```\n\n## 
2、kubelet异常问题1\n\n```\n问题现象：\n\nkubelet fails to get cgroup stats for docker and kubelet services\n\n解决办法:\n\ncat > /etc/sysconfig/kubelet <<\\EOF\nKUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice\nEOF\n\nsystemctl daemon-reload\nsystemctl restart kubelet\nsystemctl status kubelet\n\n#查看kubelet日志\njournalctl -f -u kubelet\n\nhttps://stackoverflow.com/questions/46726216/kubelet-fails-to-get-cgroup-stats-for-docker-and-kubelet-services  \n\nhttps://www.twblogs.net/a/5cc87d63bd9eee1ac2ed736b\n```\n\n## 3、kubelet异常问题2\n\n```\nfailed to create kubelet: misconfiguration: kubelet cgroup driver: \"cgroupfs\" is different from docker cgroup driver: \"systemd\"\n\n#解决办法\n添加如下内容--cgroup-driver=systemd\n\n[root@tw19336 ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf\n# Note: This dropin only works with kubeadm and kubelet v1.11+\n[Service]\nEnvironment=\"KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd\"\nEnvironment=\"KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml\"\n# This is a file that \"kubeadm init\" and \"kubeadm join\" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically\nEnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env\n# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use\n# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.\nEnvironmentFile=-/etc/sysconfig/kubelet\nExecStart=\nExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS\n\n\nsystemctl daemon-reload\nsystemctl restart kubelet\nsystemctl status kubelet\nhttps://www.cnblogs.com/hongdada/p/9771857.html\n```\n\n参考文档：\n\nhttps://www.cnblogs.com/liyongjian5179/p/11417794.html   使用kubeadm安装Kubernetes 1.15.3 并开启 ipvs\n\nhttps://www.jianshu.com/p/8bc61078bded \n\nhttps://www.cnblogs.com/lovesKey/p/10888006.html  centos7下用kubeadm安装k8s集群并使用ipvs做高可用方案\n\nhttps://github.com/kubernetes/dashboard/wiki/Creating-sample-user\n\nhttps://www.qikqiak.com/post/use-kubeadm-install-kubernetes-1.15.3/ \n\nhttps://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/  官方文档 \n\nhttps://www.jianshu.com/p/d0933d6ae162 kubeadm 1.15 安装\n\nhttps://yq.aliyun.com/articles/680080/  单独部署coredns\n\nhttps://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology  etcd-stacked-cluster\n\nhttps://www.kubernetes.org.cn/5021.html  etcd 集群运维实践\n"
  },
  {
    "path": "kubeadm/Kubernetes 集群变更IP地址.md",
    "content": "参考资料：\n\nhttps://blog.csdn.net/whywhy0716/article/details/92658111   Kubernetes 集群变更IP地址\n"
  },
  {
    "path": "kubeadm/README.md",
    "content": "# 一、防火墙配置\n\n```\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\n\nyum install iptables iptables-services -y\n\ncat > /etc/sysconfig/iptables << \\EOF\n# Generated by iptables-save v1.4.21 on Thu Aug  1 01:26:09 2019\n*filter\n:INPUT ACCEPT [0:0]\n:FORWARD ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:RH-Firewall-1-INPUT - [0:0]\n-A INPUT -j RH-Firewall-1-INPUT\n-A FORWARD -j RH-Firewall-1-INPUT\n-A RH-Firewall-1-INPUT -i lo -j ACCEPT\n-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.0/24 -p tcp -m tcp --dport 22 -j ACCEPT\n-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 22 -j DROP\n### k8s ###\n-A RH-Firewall-1-INPUT -s 192.168.56.11/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.12/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.13/32 -j ACCEPT\n# serviceSubnet rules\n-A RH-Firewall-1-INPUT -s 10.96.0.0/12 -j ACCEPT\n# podSubnet rules\n-A RH-Firewall-1-INPUT -s 10.244.0.0/16 -j ACCEPT\n# keepalived rules\n-A RH-Firewall-1-INPUT -p vrrp -j ACCEPT\n# port rules\n-A RH-Firewall-1-INPUT -s 192.168.56.1/32 -p tcp -m multiport --dports 80,443,1080,6443,16443,30000:32767 -j ACCEPT\n### k8s ###\n-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT\n-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited\nCOMMIT\n# Completed on Thu Aug  1 01:26:09 2019\nEOF\nsystemctl restart iptables.service\nsystemctl enable iptables.service\n\niptables -nvL\n```\n\n# 二、初始化\n\n```bash\ncat > /etc/hosts << \\EOF\n127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n192.168.56.11 linux-node1 linux-node1.example.com\n192.168.56.12 linux-node2 linux-node2.example.com\n192.168.56.13 linux-node3 linux-node3.example.com\nEOF\n\nsystemctl stop firewalld\nsystemctl disable firewalld\n\nsetenforce 0\nsed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config\nsed -i 's/SELINUXTYPE=.*/SELINUXTYPE=disabled/g' /etc/selinux/config\n\n# 关闭 swap\nswapoff -a\n#sed -ir 's/.*swap.*/#&/' /etc/fstab\n#或\nyes | cp /etc/fstab /etc/fstab_bak\ncat /etc/fstab_bak |grep -v swap > /etc/fstab\n\n#export Time=`date \"+%Y%m%d%H%M%S\"`\n#cp /etc/fstab /etc/fstab_$Time\n\ncat > /etc/sysctl.d/k8s.conf << \\EOF\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nvm.swappiness = 0\nEOF\n\n#加载 br_netfilter 模块\nmodprobe br_netfilter\nsysctl -p /etc/sysctl.d/k8s.conf\n\n#创建/etc/sysconfig/modules/ipvs.modules文件,保证在节点重启后能自动加载所需模块\ncat > /etc/sysconfig/modules/ipvs.modules <<EOF\n#!/bin/bash\nmodprobe -- ip_vs\nmodprobe -- ip_vs_rr\nmodprobe -- ip_vs_wrr\nmodprobe -- ip_vs_sh\nmodprobe -- nf_conntrack_ipv4\nEOF\n\nchmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4\n\nyum install -y ipset ipvsadm\n\nyum install chrony -y\nsystemctl enable chronyd\nsystemctl restart chronyd\nchronyc sources\n\nyum install -y yum-utils \\\n  device-mapper-persistent-data \\\n  lvm2\n  \nyum-config-manager \\\n    --add-repo \\\n    https://download.docker.com/linux/centos/docker-ce.repo\n    \n#yum list docker-ce --showduplicates | sort -r\n\nyum install -y docker-ce-18.09.9-3.el7.x86_64\nsystemctl start docker\nsystemctl enable docker\n\ncat > /etc/docker/daemon.json << \\EOF\n{\n  \"exec-opts\": [\"native.cgroupdriver=systemd\"],\n  \"registry-mirrors\" : [\n    
\"https://ot2k4d59.mirror.aliyuncs.com/\"\n  ]\n}\nEOF\nsystemctl daemon-reload\nsystemctl restart docker\n\ncat <<EOF > /etc/yum.repos.d/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\ngpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg\n        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg\nEOF\n\nyum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes\nsystemctl daemon-reload\nsystemctl restart kubelet.service\nkubeadm version\nsystemctl enable kubelet.service\nsystemctl status kubelet\n\n#查看kubelet日志\njournalctl -f -u kubelet\n\n#kubelet.service服务位置\n/lib/systemd/system/kubelet.service\n```\n\n# 三、初始化集群\n\n1、命令行初始化\n\n```bash\nkubeadm init \\\n  --apiserver-advertise-address=192.168.56.11 \\\n  --image-repository registry.aliyuncs.com/google_containers \\\n  --kubernetes-version v1.16.2 \\\n  --apiserver-bind-port=6443 \\\n  --service-cidr=10.96.0.0/12 \\\n  --pod-network-cidr=10.244.0.0/16    #这里使用这个是因为官方flannel使用的这个段地址，不然的话,kube-flannel.yml那里需要调整\n\n#其他节点可以先指定image源，先下载需要的镜像\nkubeadm config images pull --image-repository registry.aliyuncs.com/google_containers\n\n#获取加入集群的指令\nkubeadm token create --print-join-command\n\nkubeadm join 192.168.56.11:6443 --token 5avfk1.fwui1smk5utcu7m9     --discovery-token-ca-cert-hash sha256:6730e91a516d8bf3e26d8f5eddd6409a224f8703b94f6ecde2b1fd7481bbbd25\n\n#集群初始化如果遇到问题，可以使用下面的命令进行清理\nyes | kubeadm reset\nifconfig cni0 down\nip link delete cni0\nifconfig flannel.1 down\nip link delete flannel.1\nrm -rf /var/lib/cni/\nrm -f $HOME/.kube/config\n\nsystemctl restart kubelet\nsystemctl status kubelet\njournalctl -f -u kubelet\n```\n\n2、通过配置文件进行初始化\n\n```bash\n#在 master 节点配置 kubeadm 初始化文件，可以通过如下命令导出默认的初始化配置：\nroot># kubeadm config print init-defaults > kubeadm.yaml\n```\n\n```\n#然后根据我们自己的需求修改配置，比如修改 imageRepository 的值，kube-proxy 的模式为 ipvs\n\n如果是 flannel 网络插件的，需要将 networking.podSubnet 设置为默认的 10.244.0.0/16\n\n如果是 Calico 网络插件的，配置成 Calico 的默认网段 podSubnet: 192.168.0.0/16，这个也可以修改Calico的配置文件调整\n\nrm -f kubeadm.yaml\n\ncat > kubeadm.yaml << \\EOF\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- groups:\n  - system:bootstrappers:kubeadm:default-node-token\n  token: abcdef.0123456789abcdef\n  ttl: 24h0m0s\n  usages:\n  - signing\n  - authentication\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 192.168.56.11 #修改为主节点 IP\n  bindPort: 6443\n  #controlPlaneEndpoint: 1.1.1.100 #如果前面配置了负载均衡，此处填写vip地址\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  name: linux-node1.example.com\n  taints:\n  - effect: NoSchedule\n    key: node-role.kubernetes.io/master\n---\napiServer:\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrollerManager: {}\ndns:\n  type: CoreDNS #dns 类型\netcd:\n  local:\n    dataDir: /var/lib/etcd\n#imageRepository: k8s.gcr.io\nimageRepository: registry.aliyuncs.com/google_containers #国内不能访问 Google，修改为阿里云\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.2 # 修改版本号\nnetworking:\n  dnsDomain: cluster.local\n  # 配置成 flannel 的默认网段\n  serviceSubnet: 10.96.0.0/12\n  podSubnet: 10.244.0.0/16\nscheduler: {}\n---\n# 开启 IPVS 模式\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: ipvs # kube-proxy 模式\nEOF\n\nkubeadm init --config kubeadm.yaml\n```\n\n3、初始化进行的操作\n\n```bash\n初始化操作主要经历了下面15个步骤，每个阶段均输出均使用[步骤名称]作为开头：\n\n    
\n3. What the initialization does\n\n```\nInitialization runs through the following 15 steps; each phase prefixes its output with the [step name]:\n\n    1. [init]: initialize for the requested version.\n    2. [preflight]: pre-flight checks and pulling the required Docker images.\n    3. [kubelet-start]: generate the kubelet configuration file \"/var/lib/kubelet/config.yaml\"; without it the kubelet cannot start, which is why the kubelet kept failing before initialization.\n    4. [certificates]: generate the certificates Kubernetes uses, stored under /etc/kubernetes/pki.\n    5. [kubeconfig]: generate the KubeConfig files under /etc/kubernetes; components need them to talk to each other.\n    6. [control-plane]: install the master components from the YAML files under /etc/kubernetes/manifests.\n    7. [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.\n    8. [wait-control-plane]: wait for the master components deployed by control-plane to start.\n    9. [apiclient]: check the health of the master components.\n    10. [uploadconfig]: upload the configuration.\n    11. [kubelet]: configure the kubelet via a ConfigMap.\n    12. [patchnode]: record CNI information on the Node, via annotations.\n    13. [mark-control-plane]: label the current node with the master role and the unschedulable taint, so Pods are not scheduled onto the master by default.\n    14. [bootstrap-token]: generate the token used later by kubeadm join when adding nodes.\n    15. [addons]: install the CoreDNS and kube-proxy add-ons.\n    \nkubectl looks for a config file under the invoking user's ~/.kube directory; copy the admin.conf generated by the [kubeconfig] step to .kube/config.\n```\n\n4. Deploying CoreDNS separately (optional)\n\n```\n# A kubeadm-independent method: for clusters not created with kubeadm, or clusters where the kubeadm-deployed DNS was removed\n# Also works on a Calico network. The .10 address of the service CIDR is the conventional kube-dns IP (10.96.0.10 for the 10.96.0.0/12 cluster above)\nrm -f coredns.yaml.sed deploy.sh coredns.yml\nwget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed\nwget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh\nchmod +x deploy.sh\n./deploy.sh -i 10.10.0.10 > coredns.yml  # this example cluster used --service-cidr=10.10.0.0/16, hence 10.10.0.10; match your own service CIDR\nkubectl apply -f coredns.yml\n\n# Verify\nkubectl get pods --namespace kube-system\nkubectl get svc --namespace kube-system\n\n# Remove coredns\nkubectl delete deployment coredns -n kube-system\nkubectl delete svc kube-dns -n kube-system\nkubectl delete cm coredns -n kube-system\n```\n\n5. Removing nodes from the cluster\n\n```\n# 1. Reset the worker: run on the node about to be removed\nkubeadm reset\n\n# 2. On the first master (demo-master-a-1) delete the node object\nkubectl delete node demo-worker-x-x\n# the worker's name comes from running 'kubectl get nodes' on that master\n```\n\n6. Enabling IPVS in kube-proxy\n\n```\n# 1. Edit config.conf in the kube-system/kube-proxy ConfigMap, changing mode: \"\" to mode: \"ipvs\", then save and exit\nroot># kubectl edit cm kube-proxy -n kube-system\nconfigmap/kube-proxy edited\n\n# 2. Delete the old proxy pods so they restart with the new mode\nroot># kubectl get pod -n kube-system |grep kube-proxy |awk '{system(\"kubectl delete pod \"$1\" -n kube-system\")}'\n\n# 3. Check that the proxy pods are running again\nroot># kubectl get pod -n kube-system | grep kube-proxy\n\n# 4. Check the logs; a `Using ipvs Proxier.` line means IPVS was enabled successfully\nroot># kubectl logs kube-proxy-54qnw -n kube-system\nI0518 20:24:09.319160       1 server_others.go:176] Using ipvs Proxier.\nW0518 20:24:09.319751       1 proxier.go:386] IPVS scheduler not specified, use rr by default\nI0518 20:24:09.320035       1 server.go:562] Version: v1.14.2\nI0518 20:24:09.334372       1 conntrack.go:52] Setting nf_conntrack_max to 131072\nI0518 20:24:09.334853       1 config.go:102] Starting endpoints config controller\nI0518 20:24:09.334916       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller\nI0518 20:24:09.334945       1 config.go:202] Starting service config controller\nI0518 20:24:09.334976       1 controller_utils.go:1027] Waiting for caches to sync for service config controller\nI0518 20:24:09.435153       1 controller_utils.go:1034] Caches are synced for service config controller\nI0518 20:24:09.435271       1 controller_utils.go:1034] Caches are synced for endpoints config controller\n```\n
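\nWith kube-proxy in IPVS mode, the virtual-server table can be inspected directly on any node (ipvsadm was installed during host initialization above):\n\n```bash\nipvsadm -Ln          # list virtual servers and their real-server backends\nipvsadm -Ln --stats  # the same, with traffic counters\n```\n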
\n# 4. Master-node steps\n\n```\n# Copy $HOME/.kube/config on the master into the corresponding file on the worker nodes\nmkdir -p $HOME/.kube\nyes | cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nchown $(id -u):$(id -g) $HOME/.kube/config\n\nscp $HOME/.kube/config root@linux-node2:$HOME/.kube/config\nscp $HOME/.kube/config root@linux-node3:$HOME/.kube/config\n\n# kubectl shell completion\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n\n# 5. Worker-node steps\n\n```\n# On each worker node (the config itself was pushed over from the master via scp above)\nmkdir -p $HOME/.kube\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n# Join the cluster\nkubeadm join 192.168.56.11:6443 --token 5avfk1.fwui1smk5utcu7m9     --discovery-token-ca-cert-hash sha256:6730e91a516d8bf3e26d8f5eddd6409a224f8703b94f6ecde2b1fd7481bbbd25\n```\n\n# 6. Cluster operations\n\n```\n# Restart all containers in bulk (use with care)\ndocker restart `docker ps -a -q` \n\nroot># kubectl get nodes\nNAME                      STATUS     ROLES    AGE     VERSION\nlinux-node1.example.com   NotReady   master   11m     v1.15.3\nlinux-node2.example.com   NotReady   <none>   5m9s    v1.15.3\nlinux-node3.example.com   NotReady   <none>   4m58s   v1.15.3\n\nThe nodes are NotReady because no network plugin has been installed yet. Pick a plugin from https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/ ; flannel is installed here:\n\n# allow the service subnet through the host firewall (it must match the serviceSubnet, 10.96.0.0/12 above)\niptables -I RH-Firewall-1-INPUT -s 10.96.0.0/12 -j ACCEPT\nservice iptables save\n\nroot># kubectl get pods -n kube-system\nNAME                                              READY   STATUS    RESTARTS   AGE\ncoredns-5c98db65d4-mk254                          1/1     Running   0          14m\ncoredns-5c98db65d4-ntz98                          1/1     Running   0          14m\netcd-linux-node1.example.com                      1/1     Running   0          13m\nkube-apiserver-linux-node1.example.com            1/1     Running   0          13m\nkube-controller-manager-linux-node1.example.com   1/1     Running   0          13m\nkube-flannel-ds-amd64-6kx7m                       1/1     Running   0          11m\nkube-flannel-ds-amd64-cqfnb                       1/1     Running   0          11m\nkube-flannel-ds-amd64-thxx2                       1/1     Running   0          11m\nkube-proxy-gdtjg                                  1/1     Running   0          12m\nkube-proxy-lcscl                                  1/1     Running   0          14m\nkube-proxy-sb7d8                                  1/1     Running   0          12m\nkube-scheduler-linux-node1.example.com            1/1     Running   0          13m\nkubernetes-dashboard-fcfb4cbc-dqbq9               1/1     Running   0          4m43s\n\nkubectl describe pod/coredns-5c98db65d4-mk254 -n kube-system\n\n# Create a test Deployment\nkubectl run --image=nginx nginx-web-1 --image-pull-policy='IfNotPresent' --replicas=3\n\n# Expose it, either inside the cluster only or via a NodePort\nkubectl expose deployment nginx-web-1 --port=80 --target-port=80\nkubectl expose deployment nginx-web-1 --port=80 --target-port=80 --type=NodePort\n\nroot># kubectl exec -it nginx-web-1-5cc49f46bc-kn46r -- \\\n               sh -c \"echo hello>/usr/share/nginx/html/index.html\"\n\nroot># kubectl get svc -A\ndefault       nginx-web-1   NodePort    10.10.43.53   <none>        80:30163/TCP             101s\n\nroot># kubectl get endpoints\nnginx-web-1   10.244.154.193:80,10.244.44.193:80,10.244.89.129:80   5m27s\n\nroot># curl 10.10.43.53   \nhello\n\n# Show the iptables rules (note: the preset host firewall policy above relies on kube-proxy running in IPVS mode)\niptables -nvL --line-number\n\n# Delete a rule by line number\niptables -D RH-Firewall-1-INPUT 4\n```\n
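\nOnce the smoke test above checks out, the test resources can be torn down again (same names as created above):\n\n```bash\nkubectl delete svc nginx-web-1\nkubectl delete deployment nginx-web-1\n```\n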
\n# 7. Network plugin deployment\n\n1. Deploying the flannel plugin (on the master)\n\n```\n# flannel image: because quay.io is often unreachable from mainland China, pull from a domestic mirror and retag\ndocker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64\ndocker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64  quay.io/coreos/flannel:v0.11.0-amd64\n\n# Reference: https://www.cnblogs.com/horizonli/p/10855666.html\n\n# Deploy flannel\nrm -f kube-flannel.yml\nwget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml\n# note: this mirror tag (v0.10.0) lags the upstream v0.11.0; prefer a mirror tag matching the upstream version if one is available\nsed -i 's#image: quay.io/coreos/flannel:v0.11.0-amd64#image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64#g' kube-flannel.yml\nkubectl apply -f kube-flannel.yml\n\n# If a node has more than one NIC, pin the cluster-facing NIC with --iface in kube-flannel.yml, otherwise DNS resolution may fail. Add --iface=<iface-name> to the flanneld startup args:\nargs:\n- --ip-masq\n- --kube-subnet-mgr\n- --iface=eth0\n```\n\n2. Deploying the Calico plugin (on the master)\n\n```\nexport POD_SUBNET=10.244.0.0/16\nrm -f calico.yaml\nwget https://docs.projectcalico.org/v3.8/manifests/calico.yaml\nsed -i \"s#192\\.168\\.0\\.0/16#${POD_SUBNET}#\" calico.yaml\nkubectl apply -f calico.yaml\n\n# Reference: https://www.cnblogs.com/goldsunshine/p/10701242.html (Calico networking in Kubernetes)\n```\n\n3. Performance comparison\n\n```\nhttps://www.2cto.com/net/201701/591629.html  performance tests of the flannel, neutron and calico network options\n```\n
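\nWhichever plugin was chosen, confirm it is healthy before moving on (pod names follow the manifests above):\n\n```bash\n# the flannel/calico pods should be Running on every node\nkubectl -n kube-system get pods -o wide | grep -E 'flannel|calico'\n# nodes flip from NotReady to Ready once the CNI plugin is up\nkubectl get nodes\n```\n\n# 8. Installing the Dashboard\n\nWhen working in the Dashboard it is best to set the browser's default language to English; otherwise the exec-into-container view has a rendering bug that produces ghosting.\n\n1. Download the YAML file\n\n```\nwget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml\n\nvim kubernetes-dashboard.yaml\n1. # Change the image name\n......\n    spec:\n      containers:\n      - name: kubernetes-dashboard\n        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 # swap this for the Aliyun mirror image\n        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1\n        ports:\n        - containerPort: 8443\n          protocol: TCP\n        args:\n          - --auto-generate-certificates\n......\n2. # Change the Service to the NodePort type\n......\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kube-system\nspec:\n  type: NodePort   # added: expose via NodePort\n  ports:\n    - port: 443\n      targetPort: 8443\n      nodePort: 32370  # added: pin the node port\n  selector:\n    k8s-app: kubernetes-dashboard\n```\n\n2. The final dashboard manifest\n\n```\ncat > kubernetes-dashboard.yaml << \\EOF\n# Copyright 2017 The Kubernetes Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# ------------------- Dashboard Secret ------------------- #\n\napiVersion: v1\nkind: Secret\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-certs\n  namespace: kube-system\ntype: Opaque\n\n---\n# ------------------- Dashboard Service Account ------------------- #\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kube-system\n\n---\n# ------------------- Dashboard Role & Role Binding ------------------- #\n\nkind: Role\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: kubernetes-dashboard-minimal\n  namespace: kube-system\nrules:\n  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.\n- apiGroups: [\"\"]\n  resources: [\"secrets\"]\n  verbs: [\"create\"]\n  # Allow 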
Dashboard to create 'kubernetes-dashboard-settings' config map.\n- apiGroups: [\"\"]\n  resources: [\"configmaps\"]\n  verbs: [\"create\"]\n  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.\n- apiGroups: [\"\"]\n  resources: [\"secrets\"]\n  resourceNames: [\"kubernetes-dashboard-key-holder\", \"kubernetes-dashboard-certs\"]\n  verbs: [\"get\", \"update\", \"delete\"]\n  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.\n- apiGroups: [\"\"]\n  resources: [\"configmaps\"]\n  resourceNames: [\"kubernetes-dashboard-settings\"]\n  verbs: [\"get\", \"update\"]\n  # Allow Dashboard to get metrics from heapster.\n- apiGroups: [\"\"]\n  resources: [\"services\"]\n  resourceNames: [\"heapster\"]\n  verbs: [\"proxy\"]\n- apiGroups: [\"\"]\n  resources: [\"services/proxy\"]\n  resourceNames: [\"heapster\", \"http:heapster:\", \"https:heapster:\"]\n  verbs: [\"get\"]\n\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: kubernetes-dashboard-minimal\n  namespace: kube-system\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: kubernetes-dashboard-minimal\nsubjects:\n- kind: ServiceAccount\n  name: kubernetes-dashboard\n  namespace: kube-system\n\n---\n# ------------------- Dashboard Deployment ------------------- #\n\nkind: Deployment\napiVersion: apps/v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kube-system\nspec:\n  replicas: 1\n  revisionHistoryLimit: 10\n  selector:\n    matchLabels:\n      k8s-app: kubernetes-dashboard\n  template:\n    metadata:\n      labels:\n        k8s-app: kubernetes-dashboard\n    spec:\n      containers:\n      - name: kubernetes-dashboard\n        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1\n        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1\n        ports:\n        - containerPort: 8443\n          protocol: TCP\n        args:\n          - --auto-generate-certificates\n          # Uncomment the following line to manually specify Kubernetes API server Host\n          # If not specified, Dashboard will attempt to auto discover the API server and connect\n          # to it. 
Uncomment only if the default does not work.\n          # - --apiserver-host=http://my-address:port\n        volumeMounts:\n        - name: kubernetes-dashboard-certs\n          mountPath: /certs\n          # Create on-disk volume to store exec logs\n        - mountPath: /tmp\n          name: tmp-volume\n        livenessProbe:\n          httpGet:\n            scheme: HTTPS\n            path: /\n            port: 8443\n          initialDelaySeconds: 30\n          timeoutSeconds: 30\n      volumes:\n      - name: kubernetes-dashboard-certs\n        secret:\n          secretName: kubernetes-dashboard-certs\n      - name: tmp-volume\n        emptyDir: {}\n      serviceAccountName: kubernetes-dashboard\n      # Comment the following tolerations if Dashboard must not be deployed on master\n      tolerations:\n      - key: node-role.kubernetes.io/master\n        effect: NoSchedule\n\n---\n# ------------------- Dashboard Service ------------------- #\n\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kube-system\nspec:\n  type: NodePort  # added: expose via NodePort\n  ports:\n    - port: 443\n      targetPort: 8443\n      nodePort: 32370  # added: pin the node port\n  selector:\n    k8s-app: kubernetes-dashboard\nEOF\n\nkubectl apply -f kubernetes-dashboard.yaml\n```\n\n3. Check the Dashboard\n\n```\nroot># kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard\nNAME                                  READY   STATUS    RESTARTS   AGE\nkubernetes-dashboard-fcfb4cbc-dqbq9   1/1     Running   0          8m5s\n\nroot># kubectl get svc -n kube-system -l k8s-app=kubernetes-dashboard\nNAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE\nkubernetes-dashboard   NodePort   192.168.56.11   <none>        443:32730/TCP   8m25s\n\nThe Dashboard is then reachable at https://NodeIP:<nodePort> (32730 in the capture above; with the manifest above the port is pinned to 32370). Remember to use https; if Chrome rejects the self-signed certificate, test with Firefox.\n```\n\n4. Create a user with full cluster permissions to log in to the Dashboard (admin.yaml)\n\n```\ncat > admin.yaml << \\EOF\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: admin\n  annotations:\n    rbac.authorization.kubernetes.io/autoupdate: \"true\"\nroleRef:\n  kind: ClusterRole\n  name: cluster-admin\n  apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n  name: admin\n  namespace: kube-system\n\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: admin\n  namespace: kube-system\n  labels:\n    kubernetes.io/cluster-service: \"true\"\n    addonmanager.kubernetes.io/mode: Reconcile\nEOF\n\nkubectl apply -f admin.yaml\n\n# to remove the account again later:\n#kubectl delete -f admin.yaml\n\n# Fetch the login token\nkubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin | awk '{print $1}')\n```\n\nLog in to the Dashboard with the token printed by the command above.\n\nThat completes a v1.16.2 Kubernetes cluster built with kubeadm, together with CoreDNS, IPVS and flannel. \n
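\nIf the NodePort is unreachable (blocked by a firewall, say), the Dashboard can also be reached through the API server from any machine with a working kubeconfig; the URL pattern below is the one used by dashboard v1.10 deployed in kube-system:\n\n```bash\nkubectl proxy\n# then browse to:\n# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/\n```\n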
\n# 9. Troubleshooting\n\n1. CoreDNS failure\n\n  ![coredns failure](https://github.com/Lancger/opsfull/blob/master/images/coredns-01.png)\n\n```\nE1006 12:30:53.935744       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.10.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.10.0.1:443: connect: no route to host\nE1006 12:30:53.935744       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.10.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.10.0.1:443: connect: no route to host\nlog: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-bccdc95cf-vlqxk.unknownuser.log.ERROR.20191006-123053.1: no such file or directory\n```\n\nFix\n\n```\n# The actual cause is the host firewall; the service subnet has to be allowed through (10.10.0.0/16 on the cluster that produced this log):\niptables -A RH-Firewall-1-INPUT -s 10.10.0.0/16 -j ACCEPT\n\n# Further reading\nhttps://medium.com/@cminion/quicknote-kubernetes-networking-issues-78f1e0d06e12\nhttps://github.com/coredns/coredns/issues/2325  \n```\n\n2. kubelet failure 1\n\n```\nSymptom:\n\nkubelet fails to get cgroup stats for docker and kubelet services\n\nFix:\n\ncat > /etc/sysconfig/kubelet <<\\EOF\nKUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice\nEOF\n\nsystemctl daemon-reload\nsystemctl restart kubelet\nsystemctl status kubelet\n\n# Tail the kubelet logs\njournalctl -f -u kubelet\n\nhttps://stackoverflow.com/questions/46726216/kubelet-fails-to-get-cgroup-stats-for-docker-and-kubelet-services  \n\nhttps://www.twblogs.net/a/5cc87d63bd9eee1ac2ed736b\n```\n\n3. kubelet failure 2\n\n```\nfailed to create kubelet: misconfiguration: kubelet cgroup driver: \"cgroupfs\" is different from docker cgroup driver: \"systemd\"\n\n# Fix: add --cgroup-driver=systemd to KUBELET_KUBECONFIG_ARGS, as below\n\n[root@tw19336 ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf\n# Note: This dropin only works with kubeadm and kubelet v1.11+\n[Service]\nEnvironment=\"KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd\"\nEnvironment=\"KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml\"\n# This is a file that \"kubeadm init\" and \"kubeadm join\" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically\nEnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env\n# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use\n# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.\nEnvironmentFile=-/etc/sysconfig/kubelet\nExecStart=\nExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS\n\n\nsystemctl daemon-reload\nsystemctl restart kubelet\nsystemctl status kubelet\nhttps://www.cnblogs.com/hongdada/p/9771857.html\n```\n\nReferences:\n\nhttps://www.cnblogs.com/liyongjian5179/p/11417794.html   使用kubeadm安装Kubernetes 1.15.3 并开启 ipvs\n\nhttps://www.jianshu.com/p/8bc61078bded \n\nhttps://www.cnblogs.com/lovesKey/p/10888006.html  centos7下用kubeadm安装k8s集群并使用ipvs做高可用方案\n\nhttps://github.com/kubernetes/dashboard/wiki/Creating-sample-user\n\nhttps://www.qikqiak.com/post/use-kubeadm-install-kubernetes-1.15.3/ \n\nhttps://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/  official documentation \n\nhttps://www.jianshu.com/p/d0933d6ae162 kubeadm 1.15 安装\n\nhttps://yq.aliyun.com/articles/680080/  单独部署coredns\n\nhttps://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/#stacked-etcd-topology  etcd-stacked-cluster\n\nhttps://www.kubernetes.org.cn/5021.html  etcd 集群运维实践\n"
  },
  {
    "path": "kubeadm/k8S-HA-V1.15.3-Calico-开启防火墙版.md",
    "content": "# 环境介绍：\n```bash\nCentOS： 7.6\nDocker： docker-ce-18.09.9\nKubernetes： 1.15.3\nKubeadm： 1.15.3\nKubelet： 1.15.3\nKubectl： 1.15.3\n```  \n# 部署介绍：\n\n&#8195;创建高可用首先先有一个 Master 节点，然后再让其他服务器加入组成三个 Master 节点高可用，然后再将工作节点 Node 加入。下面将描述每个节点要执行的步骤：\n```bash\nMaster01： 二、三、四、五、六、七、八、九、十一\nMaster02、Master03： 二、三、五、六、四、九\nnode01、node02、node03： 二、五、六、九\n```\n# 防火墙配置\n```bash\nyum install iptables iptables-services -y\n\ncat > /etc/sysconfig/iptables << \\EOF\n# Generated by iptables-save v1.4.21 on Thu Aug  1 01:26:09 2019\n*filter\n:INPUT ACCEPT [0:0]\n:FORWARD ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:RH-Firewall-1-INPUT - [0:0]\n-A INPUT -j RH-Firewall-1-INPUT\n-A FORWARD -j RH-Firewall-1-INPUT\n-A RH-Firewall-1-INPUT -i lo -j ACCEPT\n-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.0/24 -p tcp -m tcp --dport 22 -j ACCEPT\n-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 22 -j DROP\n#k8s\n-A RH-Firewall-1-INPUT -s 192.168.56.11/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.12/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.13/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.14/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -p vrrp -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.1/32 -p tcp -m multiport --dports 80,443,1080,6443,16443 -j ACCEPT\n#\n-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT\n-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited\nCOMMIT\n# Completed on Thu Aug  1 01:26:09 2019\nEOF\nsystemctl restart iptables.service\nsystemctl enable iptables.service\n\niptables -nvL\n\n```\n# 集群架构：\n\n  ![kubeadm高可用架构图](https://github.com/Lancger/opsfull/blob/master/images/kubeadm-ha.jpg)\n \n# 一、kuberadm 简介\n\n### 1、Kuberadm 作用\n\n&#8195;Kubeadm 是一个工具，它提供了 kubeadm init 以及 kubeadm join 这两个命令作为快速创建 kubernetes 集群的最佳实践。\n\n&#8195;kubeadm 通过执行必要的操作来启动和运行一个最小可用的集群。它被故意设计为只关心启动集群，而不是之前的节点准备工作。同样的，诸如安装各种各样值得拥有的插件，例如 Kubernetes Dashboard、监控解决方案以及特定云提供商的插件，这些都不在它负责的范围。\n\n&#8195;相反，我们期望由一个基于 kubeadm 从更高层设计的更加合适的工具来做这些事情；并且，理想情况下，使用 kubeadm 作为所有部署的基础将会使得创建一个符合期望的集群变得容易。\n\n### 2、Kuberadm 功能\n```bash\nkubeadm init： 启动一个 Kubernetes 主节点\nkubeadm join： 启动一个 Kubernetes 工作节点并且将其加入到集群\nkubeadm upgrade： 更新一个 Kubernetes 集群到新版本\nkubeadm config： 如果使用 v1.7.x 或者更低版本的 kubeadm 初始化集群，您需要对集群做一些配置以便使用 kubeadm upgrade 命令\nkubeadm token： 管理 kubeadm join 使用的令牌\nkubeadm reset： 还原 kubeadm init 或者 kubeadm join 对主机所做的任何更改\nkubeadm version： 打印 kubeadm 版本\nkubeadm alpha： 预览一组可用的新功能以便从社区搜集反馈\n```\n### 3、功能版本\n\n<table border=\"0\">\n    <tr>\n        <td><strong>Area<strong></td>\n        <td><strong>Maturity Level<strong></td>\n    </tr>\n    <tr>\n        <td>Command line UX</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>Implementation</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>Config file API</td>\n        <td>beta</td>\n    </tr>\n    <tr>\n        <td>CoreDNS</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>kubeadm alpha subcommands</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>High availability</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>DynamicKubeletConfig</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>Self-hosting</td>\n        <td>alpha</td>\n    </tr>\n</table>\n            \n# 二、前期准备\n\n### 1、虚拟机分配说明\n\n<table border=\"0\">\n    <tr>\n        <td><strong>地址<strong></td>\n        <td><strong>主机名</td>\n        <td><strong>内存&CPU</td>\n        <td><strong>角色</td>\n    </tr>\n    <tr>\n        <td>192.168.56.200</td>\n        <td>-</td>\n 
       <td>-</td>\n        <td>vip</td>\n    </tr>\n    <tr>\n        <td>192.168.56.11</td>\n        <td>k8s-master-01</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>192.168.56.12</td>\n        <td>k8s-master-02</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>192.168.56.13</td>\n        <td>k8s-master-03</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>192.168.56.14</td>\n        <td>k8s-node-01</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n    <tr>\n        <td>192.168.56.15</td>\n        <td>k8s-node-02</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n    <tr>\n        <td>192.168.56.16</td>\n        <td>k8s-node-03</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n</table>\n\n### 2、各个节点端口占用\n\n- Master 节点\n\n<table border=\"0\">\n    <tr>\n        <td><strong>规则<strong></td>\n        <td><strong>方向</td>\n        <td><strong>端口范围</td>\n        <td><strong>作用</td>\n        <td><strong>使用者</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>6443*</td>\n        <td>Kubernetes API</td>\n        <td>server All</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>2379-2380</td>\n        <td>etcd server</td>\n        <td>client API kube-apiserver, etcd</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10250</td>\n        <td>Kubernetes API</td>\n        <td>Self, Control plane</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10251</td>\n        <td>kube-scheduler</td>\n        <td>Self</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10252</td>\n        <td>kube-controller-manager</td>\n        <td>Self</td>\n    </tr>\n</table>\n\n- node 节点\n\n<table border=\"0\">\n    <tr>\n        <td><strong>规则<strong></td>\n        <td><strong>方向</td>\n        <td><strong>端口范围</td>\n        <td><strong>作用</td>\n        <td><strong>使用者</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10250</td>\n        <td>Kubernetes API</td>\n        <td>Self, Control plane</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>30000-32767</td>\n        <td>NodePort Services**</td>\n        <td>All</td>\n    </tr>\n</table>\n    \n### 3、基础环境设置\n\n&#8195;Kubernetes 需要一定的环境来保证正常运行，如各个节点时间同步，主机名称解析，关闭防火墙等等。\n\n1、主机名称解析\n\n&#8195;分布式系统环境中的多主机通信通常基于主机名称进行，这在 IP 地址存在变化的可能性时为主机提供了固定的访问人口，因此一般需要有专用的 DNS 服务负责解决各节点主机 不过，考虑到此处部署的是测试集群，因此为了降低系复杂度，这里将基于 hosts 的文件进行主机名称解析。\n\n2、修改hosts和免key登录\n\n```bash\n#分别进入不同服务器，进入 /etc/hosts 进行编辑\n\ncat > /etc/hosts << \\EOF\n127.0.0.1     localhost  localhost.localdomain localhost4 localhost4.localdomain4\n::1           localhost  localhost.localdomain localhost6 localhost6.localdomain6\n192.168.56.200   k8s-vip         master      master.k8s.io\n192.168.56.11    k8s-master-01   master01    master01.k8s.io\n192.168.56.12    k8s-master-02   master02    master02.k8s.io\n192.168.56.13    k8s-master-03   master03    master03.k8s.io\n192.168.56.14    k8s-node-01     node01      node01.k8s.io\n192.168.56.15    k8s-node-02     node02      node02.k8s.io\n192.168.56.16    k8s-node-03     node03      node03.k8s.io\nEOF\n\n#root用户免密登录\nmkdir -p /root/.ssh/\nchmod 700 /root/.ssh/\necho 'ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQC7bRm20od1b3rzW3ZPLB5NZn3jQesvfiz2p0WlfcYJrFHfF5Ap0ubIBUSQpVNLn94u8ABGBLboZL8Pjo+rXQPkIcObJxoKS8gz6ZOxcxJhldudbadabdanKAAKAKKKKKKKKKKKKKKKKKKKKKKK root@k8s-master-01' > /root/.ssh/authorized_keys\nchmod 400 /root/.ssh/authorized_keys\n```\n\n3、修改hostname\n\n```bash\n#分别进入不同的服务器修改 hostname 名称\n\n# 修改 192.168.56.11 服务器\nhostnamectl  set-hostname  k8s-master-01\n\n# 修改 192.168.56.12 服务器\nhostnamectl  set-hostname  k8s-master-02\n\n# 修改 192.168.56.13 服务器\nhostnamectl  set-hostname  k8s-master-03\n\n# 修改 192.168.56.14 服务器\nhostnamectl  set-hostname  k8s-node-01\n\n# 修改 192.168.56.15 服务器\nhostnamectl  set-hostname  k8s-node-02\n\n# 修改 192.168.56.16 服务器\nhostnamectl  set-hostname  k8s-node-03\n```\n\n4、主机时间同步\n\n```bash\n#将各个服务器的时间同步，并设置开机启动同步时间服务\n\nsystemctl start chronyd.service\nsystemctl enable chronyd.service\n```\n\n5、关闭防火墙服务\n```bash\nsystemctl stop firewalld\nsystemctl disable firewalld\n```\n\n6、关闭并禁用SELinux\n```bash\n# 若当前启用了 SELinux 则需要临时设置其当前状态为 permissive\nsetenforce 0\n\n# 编辑 /etc/selinux/config 文件，以彻底禁用 SELinux\nsed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config\n\n# 查看selinux状态\ngetenforce \n\n# 如果为permissive，则执行reboot重新启动即可\n```\n\n7、禁用 Swap 设备\n\n&#8195;kubeadm 默认会预先检查当前主机是否禁用了 Swap 设备，并在未禁用时强制终止部署过程。因此，在主机内存资源充裕的条件下，需要禁用所有的 Swap 设备\n\n```\n# 关闭当前已启用的所有 Swap 设备\nswapoff -a && sysctl -w vm.swappiness=0\n\nsed -ri 's/.*swap.*/#&/' /etc/fstab\ncat /etc/fstab\n\n# 或者：编辑 fstab 配置文件，注释掉标识为 Swap 设备的所有行\nvi /etc/fstab\n\nUUID=9be41058-76a6-4588-8e3f-5b44604d8de1 /                       xfs     defaults,noatime        0 0\nUUID=4489cc8f-1885-4e17-bfe7-8652fd1d3feb /boot                   xfs     defaults,noatime        0 0\n#UUID=0f5ae5f1-4872-471f-9f3a-f172a43fc1ff swap                    swap    defaults,noatime        0 0\n```\n\n8、设置系统参数\n\n&#8195;设置允许路由转发，不对bridge的数据进行处理\n\n```bash\n#创建 /etc/sysctl.d/k8s.conf 文件\n\ncat > /etc/sysctl.d/k8s.conf << \\EOF\nvm.swappiness = 0\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nEOF\n\n#挂载br_netfilter\nmodprobe br_netfilter\n\n#生效配置文件\nsysctl -p /etc/sysctl.d/k8s.conf\n\n#查看是否生成相关文件\nls /proc/sys/net/bridge\n```\n\n9、资源配置文件\n\n`/etc/security/limits.conf` 是 Linux 资源使用配置文件，用来限制用户对系统资源的使用\n\n```bash\necho \"* soft nofile 65536\" >> /etc/security/limits.conf\necho \"* hard nofile 65536\" >> /etc/security/limits.conf\necho \"* soft nproc 65536\"  >> /etc/security/limits.conf\necho \"* hard nproc 65536\"  >> /etc/security/limits.conf\necho \"* soft memlock unlimited\"  >> /etc/security/limits.conf\necho \"* hard memlock unlimited\"  >> /etc/security/limits.conf\n```\n\n10、安装依赖包以及相关工具\n\n```bash\nyum install -y epel-release\n\nyum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl\n```\n\n# 三、安装Keepalived\n\n- keepalived介绍： 是集群管理中保证集群高可用的一个服务软件，其功能类似于heartbeat，用来防止单点故障\n- Keepalived作用： 为haproxy提供vip（192.168.56.200），在三个haproxy实例之间提供主备，降低当其中一个haproxy失效时对服务的影响。\n\n### 1、yum安装Keepalived\n```bash\n# 安装keepalived\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\nyum install -y keepalived\n```\n\n### 2、配置Keepalived\n```bash\ncat <<EOF > /etc/keepalived/keepalived.conf\n! 
Configuration File for keepalived\n\n# 主要是配置故障发生时的通知对象以及机器标识。\nglobal_defs {\n   # 标识本节点的字符串，通常为 hostname，但不一定非得是 hostname。故障发生时，邮件通知会用到。\n   router_id LVS_K8S\n}\n\n# 用来做健康检查的，当检查失败时会将 vrrp_instance 的 priority 减少相应的值。\nvrrp_script check_haproxy {\n    script \"killall -0 haproxy\"   #根据进程名称检测进程是否存活\n    interval 3\n    weight -2\n    fall 10\n    rise 2\n}\n\n# vrrp_instance 用来定义对外提供服务的 VIP 区域及其相关属性。\nvrrp_instance VI_1 {\n    state MASTER   #当前节点为MASTER，其他两个节点设置为BACKUP\n    interface eth0 #改为自己的网卡\n    virtual_router_id 51\n    priority 250\n    advert_int 1\n    authentication {\n        auth_type PASS\n        auth_pass 35f18af7190d51c9f7f78f37300a0cbd\n    }\n    virtual_ipaddress {\n        192.168.56.200   #虚拟ip，即VIP\n    }\n    track_script {\n        check_haproxy\n    }\n\n}\nEOF\n```\n当前节点的配置中 state 配置为 MASTER，其它两个节点设置为 BACKUP\n\n```bash\n配置说明：\n\n    virtual_ipaddress： vip\n    track_script： 执行上面定义好的检测的script\n    interface： 节点固有IP（非VIP）的网卡，用来发VRRP包。\n    virtual_router_id： 取值在0-255之间，用来区分多个instance的VRRP组播\n    advert_int： 发VRRP包的时间间隔，即多久进行一次master选举（可以认为是健康检查时间间隔）。\n    authentication： 认证区域，认证类型有PASS和HA（IPSEC），推荐使用PASS（密码只识别前8位）。\n    state： 可以是MASTER或BACKUP，不过当其他节点keepalived启动时会将priority比较大的节点选举为MASTER，因此该项其实没有实质用途。\n    priority： 用来选举master的，要成为master，那么这个选项的值最好高于其他机器50个点，该项取值范围是1-255（在此范围之外会被识别成默认值100）。\n    \n# 1、注意防火墙需要放开vrrp协议(不然会出现脑裂现象，三台主机都存在VIP的情况)\n#-A INPUT -p vrrp -j ACCEPT\n-A RH-Firewall-1-INPUT -p vrrp -j ACCEPT\n    \n# 2、注意上面配置的 script \"killall -0 haproxy\"（根据进程名称检测进程是否存活），检测日志会周期性地记录到 /var/log/messages\n# tail -100f /var/log/messages\n\nSep 27 10:54:16 tw19410s1 Keepalived_vrrp[9113]: /usr/bin/killall -0 haproxy exited with status 1\n```\n\n### 3、启动Keepalived\n```bash\n# 设置开机启动\nsystemctl enable keepalived\n\n# 启动keepalived\nsystemctl start keepalived\n\n# 查看启动状态\nsystemctl status keepalived\n```\n### 4、查看网络状态\n\nkeepalived 配置中 state 为 MASTER 的节点启动后，查看网络状态，可以看到虚拟IP已经加入到绑定的网卡中\n\n```bash\n[root@k8s-master-01 ~]# ip address show eth0\n2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000\n    link/ether 00:50:56:be:86:af brd ff:ff:ff:ff:ff:ff\n    inet 192.168.56.11/24 brd 192.168.56.255 scope global eth0\n       valid_lft forever preferred_lft forever\n    inet 192.168.56.200/32 scope global eth0\n       valid_lft forever preferred_lft forever\n\n当关掉当前节点的keepalived服务后将进行虚拟IP转移，将会推选 state 为 BACKUP 的某一节点为新的MASTER，可以在那台节点上查看网卡，将会查看到虚拟IP\n```\n\n# 四、安装haproxy\n\n&#8195;此处的haproxy为apiserver提供反向代理，haproxy将所有请求轮询转发到每个master节点上。相对于仅仅使用keepalived主备模式仅单个master节点承载流量，这种方式更加合理、健壮。\n\n### 1、yum安装haproxy\n```bash\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\n\nyum install -y haproxy\n```\n\n### 2、配置haproxy\n```bash\ncat > /etc/haproxy/haproxy.cfg << EOF\n#---------------------------------------------------------------------\n# Global settings\n#---------------------------------------------------------------------\nglobal\n    # to have these messages end up in /var/log/haproxy.log you will\n    # need to:\n    # 1) configure syslog to accept network log events.  This is done\n    #    by adding the '-r' option to the SYSLOGD_OPTIONS in\n    #    /etc/sysconfig/syslog\n    # 2) configure local2 events to go to the /var/log/haproxy.log\n    #   file. 
A line like the following can be added to\n    #   /etc/sysconfig/syslog\n    #\n    #    local2.*                       /var/log/haproxy.log\n    #\n    log         127.0.0.1 local2\n    \n    chroot      /var/lib/haproxy\n    pidfile     /var/run/haproxy.pid\n    maxconn     4000\n    user        haproxy\n    group       haproxy\n    daemon \n       \n    # turn on stats unix socket\n    stats socket /var/lib/haproxy/stats\n#---------------------------------------------------------------------\n# common defaults that all the 'listen' and 'backend' sections will\n# use if not designated in their block\n#---------------------------------------------------------------------  \ndefaults\n    mode                    http\n    log                     global\n    option                  httplog\n    option                  dontlognull\n    option http-server-close\n    option forwardfor       except 127.0.0.0/8\n    option                  redispatch\n    retries                 3\n    timeout http-request    10s\n    timeout queue           1m\n    timeout connect         10s\n    timeout client          1m\n    timeout server          1m\n    timeout http-keep-alive 10s\n    timeout check           10s\n    maxconn                 3000\n#---------------------------------------------------------------------\n# kubernetes apiserver frontend which proxys to the backends\n#--------------------------------------------------------------------- \nfrontend kubernetes-apiserver\n    mode                 tcp\n    bind                 *:16443\n    option               tcplog\n    default_backend      kubernetes-apiserver    \n#---------------------------------------------------------------------\n# round robin balancing between the various backends\n#---------------------------------------------------------------------\nbackend kubernetes-apiserver\n    mode        tcp\n    balance     roundrobin\n    server      master01.k8s.io   192.168.56.11:6443 check\n    server      master02.k8s.io   192.168.56.12:6443 check\n    server      master03.k8s.io   192.168.56.13:6443 check\n#---------------------------------------------------------------------\n# collection haproxy statistics message\n#---------------------------------------------------------------------\nlisten stats\n    bind                 *:1080\n    stats auth           admin:awesomePassword\n    stats refresh        5s\n    stats realm          HAProxy\\ Statistics\n    stats uri            /admin?stats\nEOF\n```\nhaproxy配置在其他master节点上(192.168.56.12和192.168.56.13)相同\n\n### 3、启动并检测haproxy\n```bash\n# 设置开机启动\nsystemctl enable haproxy\n\n# 开启haproxy\nsystemctl start haproxy\n\n# 查看启动状态\nsystemctl status haproxy\n```\n\n### 4、检测haproxy端口\n```bash\nss -lnt | grep -E \"16443|1080\"\n```\n\n# 五、安装Docker (所有节点)\n\n### 1、移除之前安装过的Docker\n```bash\nsudo yum remove -y docker \\\n                  docker-client \\\n                  docker-client-latest \\\n                  docker-common \\\n                  docker-latest \\\n                  docker-latest-logrotate \\\n                  docker-logrotate \\\n                  docker-selinux \\\n                  docker-engine-selinux \\\n                  docker-ce-cli \\\n                  docker-engine\n                  \n# 查看还有没有存在的docker组件\nrpm -qa|grep docker\n\n# 有则通过命令 yum -y remove XXX 来删除,比如：\nyum remove docker-ce-cli\n```\n\n### 2、配置docker的yum源\n\n下面两个镜像源选择其一即可，由于官方下载速度比较慢，推荐用阿里镜像源\n\n- 阿里镜像源\n\n```bash\nsudo yum-config-manager --add-repo 
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo\n```\n\n- Docker官方镜像源\n```bash\nsudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo\n```\n\n### 3、安装Docker：\n\n```\n# 显示docker-ce所有可安装版本：\nyum list docker-ce --showduplicates | sort -r\n\n# 安装指定docker版本\nsudo yum install docker-ce-18.09.9-3.el7.x86_64 -y\n\n# 启动docker并设置docker开机启动\nsystemctl enable docker\nsystemctl start docker\n\n# 确认 iptables filter 表中 FORWARD 链的默认策略(policy)为 ACCEPT\niptables -nvL\n\nChain FORWARD (policy ACCEPT 0 packets, 0 bytes)\n pkts bytes target     prot opt in     out     source               destination         \n    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           \n    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED\n    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0      \n    \nDocker从1.13版本开始调整了默认的防火墙规则，禁用了iptables filter表中FORWARD链，这样会引起Kubernetes集群中跨Node的Pod无法通信。但这里通过安装Docker 18.06发现默认策略又改回了ACCEPT，不确定是从哪个版本改回的；我们线上使用的17.06版本仍需要手动调整这个策略。\n\n# 执行下面命令\niptables -P FORWARD ACCEPT\n\n# 修改docker的配置\nvim /usr/lib/systemd/system/docker.service\n\n# 增加下面命令(ExecReload后面新增ExecStartPost=...)\n...\nExecReload=/bin/kill -s HUP $MAINPID\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\n...\n\n# 配置docker加速器\ncat > /etc/docker/daemon.json << \\EOF\n{\n  \"registry-mirrors\": [\n    \"https://dockerhub.azk8s.cn\",\n    \"https://i37dz0y4.mirror.aliyuncs.com\"\n  ],\n  \"insecure-registries\": [\"reg.hub.com\"]\n}\nEOF\n\n# 重启Docker\nsystemctl daemon-reload\nsystemctl restart docker\n```\n### 4、docker最终的服务文件\n```\n#注意，有变量的地方需要使用转义符号\n\ncat > /usr/lib/systemd/system/docker.service << EOF\n[Unit]\nDescription=Docker Application Container Engine\nDocumentation=https://docs.docker.com\nBindsTo=containerd.service\nAfter=network-online.target firewalld.service containerd.service\nWants=network-online.target\nRequires=docker.socket\n\n[Service]\nType=notify\n# the default is not to use systemd for cgroups because the delegate issues still\n# exists and systemd currently does not support the cgroup feature set required\n# for containers run by docker\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd\nExecReload=/bin/kill -s HUP \\$MAINPID\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\nTimeoutSec=0\nRestartSec=2\nRestart=always\n\n# Note that StartLimit* options were moved from \"Service\" to \"Unit\" in systemd 229.\n# Both the old, and new location are accepted by systemd 229 and up, so using the old location\n# to make them work for either version of systemd.\nStartLimitBurst=3\n\n# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.\n# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make\n# this option work for either version of systemd.\nStartLimitInterval=60s\n\n# Having non-zero Limit*s causes performance problems due to accounting overhead\n# in the kernel. 
We recommend using cgroups to do container-local accounting.\nLimitNOFILE=infinity\nLimitNPROC=infinity\nLimitCORE=infinity\n\n# Comment TasksMax if your systemd version does not support it.\n# Only systemd 226 and above support this option.\nTasksMax=infinity\n\n# set delegate yes so that systemd does not reset the cgroups of docker containers\nDelegate=yes\n\n# kill only the docker process, not all processes in the cgroup\nKillMode=process\n\n[Install]\nWantedBy=multi-user.target\nEOF\n\n# 重启Docker\nsystemctl daemon-reload\nsystemctl restart docker\nsystemctl enable docker\n```\n\n# 六、安装kubeadm、kubelet\n\n### 1、配置可用的国内yum源用于安装：\n```\ncat <<EOF > /etc/yum.repos.d/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\ngpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg\nEOF\n```\n\n### 2、安装kubelet\n\n```\n# 需要在每台机器上都安装以下的软件包：\n     kubeadm: 用来初始化集群的指令。\n     kubelet: 在集群中的每个节点上用来启动 pod 和 container 等。\n     kubectl: 用来与集群通信的命令行工具。\n\n# 查看kubelet版本列表\nyum list kubelet --showduplicates | sort -r \n\n# 安装kubelet\nyum install -y kubelet-1.15.3-0\n\n# 启动kubelet并设置开机启动\nsystemctl enable kubelet \nsystemctl start kubelet\n\n# 检查状态\n检查状态,发现是failed状态，正常，kubelet会10秒重启一次，需等下面完成初始化master节点后即可正常\nsystemctl status kubelet\n\n# 查看kubelet日志\njournalctl -u kubelet --no-pager\n```\n\n### 3、安装kubeadm\n\n```\n# 负责初始化集群\n# 1、查看kubeadm版本列表\nyum list kubeadm --showduplicates | sort -r \n\n# 2、安装kubeadm\nyum install -y kubeadm-1.15.3-0\n\n# 安装 kubeadm 时候会默认安装 kubectl ，所以不需要单独安装kubectl\n\n# 3、重启服务器\n为了防止发生某些未知错误，这里我们重启下服务器，方便进行后续操作\nreboot\n```\n\n# 七、初始化第一个kubernetes master节点\n\n```\n# 因为需要绑定虚拟IP，所以需要首先先查看虚拟IP启动这几台master机子哪台上\n\n[root@k8s-master-01 ~]# ip address show eth0\n2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000\n    link/ether 00:50:56:be:86:af brd ff:ff:ff:ff:ff:ff\n    inet 192.168.56.56/22 brd 10.19.3.255 scope global eth0\n       valid_lft forever preferred_lft forever\n    inet 192.168.56.200/32 scope global eth0\n       valid_lft forever preferred_lft forever\n\n可以看到虚拟IP 192.168.56.200  和 服务器IP 192.168.56.11在一台机子上，所以初始化kubernetes第一个master要在master01机子上进行安装\n```\n\n### 1、创建kubeadm配置的yaml文件\n```\n# 1、创建kubeadm配置的yaml文件\n\nrm -f ./kubeadm-config.yaml\n\nexport APISERVER_NAME=master.k8s.io\nexport POD_SUBNET=10.20.0.0/16\nexport SVC_SUBNET=10.96.0.0/16\n\ncat > kubeadm-config.yaml << EOF\napiServer:\n  certSANs:\n    - k8s-master-01\n    - k8s-master-02\n    - k8s-master-03\n    - master.k8s.io\n    - 192.168.56.11\n    - 192.168.56.12\n    - 192.168.56.13\n    - 192.168.56.200\n    - 127.0.0.1\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: \"${APISERVER_NAME}:16443\"\ncontrollerManager: {}\ndns: \n  type: CoreDNS\netcd:\n  local:    \n    dataDir: /var/lib/etcd\nimageRepository: registry.aliyuncs.com/google_containers\nkind: ClusterConfiguration\nkubernetesVersion: v1.15.3\nnetworking: \n  dnsDomain: cluster.local  \n  podSubnet: \"${POD_SUBNET}\"\n  serviceSubnet: \"${SVC_SUBNET}\"\nscheduler: {}\nEOF\n\n以下两个地方设置： \n- certSANs： 虚拟ip地址（为了安全起见，把所有集群地址都加上） \n- controlPlaneEndpoint： 虚拟IP:监控端口号\n\n配置说明：\n\n    imageRepository： registry.aliyuncs.com/google_containers (使用阿里云镜像仓库)\n    podSubnet： 10.20.0.1/16 (#pod地址池)\n   
 serviceSubnet： 10.96.0.1/16 (#service地址池)\n```\n\n### 2、初始化第一个master节点\n```\nkubeadm init --config=kubeadm-config.yaml --upload-certs   #使用这个就不用做拷贝证书的操作\n```\n日志\n```\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of the control-plane node running the following command on each as root:\n\n  kubeadm join master.k8s.io:16443 --token wf0eoe.liqcp0nhtlov4ioi \\\n    --discovery-token-ca-cert-hash sha256:e43bbb08bb5decae1ce0001f2988ff79095e6be5a3dea77a7c6af180562c7e56 \\\n    --control-plane --certificate-key 6054323448a1aeb661b78763262db5c30e12026c54341400d48401a853194ec2\n\nPlease note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use \n\"kubeadm init phase upload-certs --upload-certs\" to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join master.k8s.io:16443 --token wf0eoe.liqcp0nhtlov4ioi \\\n    --discovery-token-ca-cert-hash sha256:e43bbb08bb5decae1ce0001f2988ff79095e6be5a3dea77a7c6af180562c7e56\n```\n### 执行结果中\n\n用于初始化第二、三个 master 节点\n```\nkubeadm join master.k8s.io:16443 --token wf0eoe.liqcp0nhtlov4ioi \\\n    --discovery-token-ca-cert-hash sha256:e43bbb08bb5decae1ce0001f2988ff79095e6be5a3dea77a7c6af180562c7e56 \\\n    --control-plane --certificate-key 6054323448a1aeb661b78763262db5c30e12026c54341400d48401a853194ec2\n```\n用于初始化 worker 节点\n```\nkubeadm join master.k8s.io:16443 --token wf0eoe.liqcp0nhtlov4ioi \\\n    --discovery-token-ca-cert-hash sha256:e43bbb08bb5decae1ce0001f2988ff79095e6be5a3dea77a7c6af180562c7e56\n```\n\n### 3、配置kubectl环境变量\n```bash\n# 配置环境变量\n\nrm -rf $HOME/.kube\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n# 指令补全\n\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n\n### 4、查看组件状态\n```bash\nkubectl get cs\n\nNAME                 STATUS    MESSAGE              ERROR\ncontroller-manager   Healthy   ok                   \nscheduler            Healthy   ok                   \netcd-0               Healthy   {\"health\": \"true\"}   \n\n# 查看pod状态\n[root@k8s-master-01 ~]# kubectl get pods --namespace=kube-system\nNAME                                    READY   STATUS    RESTARTS   AGE\ncoredns-78d4cf999f-5zt5z                0/1     Pending   0          7m32s    ---coredns没有启动\ncoredns-78d4cf999f-mkgsx                0/1     Pending   0          7m32s    ---coredns没有启动\netcd-k8s-master-01                      1/1     Running   0          6m39s\nkube-apiserver-k8s-master-01            1/1     Running   0          6m43s\nkube-controller-manager-k8s-master-01   1/1     Running   0          6m32s\nkube-proxy-88s74                        1/1     Running   0          7m32s\nkube-scheduler-k8s-master-01            1/1     Running   0          6m45s\n\n可以看到coredns没有启动，这是由于还没有配置网络插件，接下来配置下后再重新查看启动状态\n\n#检查ETCD服务\ndocker exec -it $(docker ps |grep etcd_etcd|awk 
'{print $1}') sh\netcdctl --endpoints=https://192.168.56.11:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.crt --cert-file=/etc/kubernetes/pki/etcd/server.crt --key-file=/etc/kubernetes/pki/etcd/server.key member list\n\netcdctl --endpoints=https://192.168.56.11:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.crt --cert-file=/etc/kubernetes/pki/etcd/server.crt --key-file=/etc/kubernetes/pki/etcd/server.key cluster-health\n```\n# 八、安装网络插件\n\n### 1、安装 calico 网络插件\n```\n# 安装 calico 网络插件\n# 参考文档 https://docs.projectcalico.org/v3.8/getting-started/kubernetes/\n\nexport POD_SUBNET=10.20.0.0/16\nrm -f calico.yaml\nwget https://docs.projectcalico.org/v3.8/manifests/calico.yaml\nsed -i \"s#192\\.168\\.0\\.0/16#${POD_SUBNET}#\" calico.yaml\nkubectl apply -f calico.yaml\n```\n\n### 2、等待一会时间，再次查看各个pods的状态\n```\n[root@k8s-master-01 ~]# kubectl get pods --namespace=kube-system\nNAME                                    READY   STATUS    RESTARTS   AGE\ncoredns-78d4cf999f-5zt5z                1/1     Running   0          12m    ---coredns启动成功\ncoredns-78d4cf999f-mkgsx                1/1     Running   0          12m    ---coredns启动成功\netcd-k8s-master-01                      1/1     Running   0          11m\nkube-apiserver-k8s-master-01            1/1     Running   0          12m\nkube-controller-manager-k8s-master-01   1/1     Running   0          11m\ncalico-node-7lj6m                       1/1     Running   0          13s\nkube-proxy-88s74                        1/1     Running   0          12m\nkube-scheduler-k8s-master-01            1/1     Running   0          12m\n```\n\n# 九、加入集群\n\n### 1、Master加入集群构成高可用\n```\n复制密钥到各个节点\n\n在master01 服务器上执行下面命令，将kubernetes相关文件复制到 master02、master03\n\n如果第一个 master 节点不是 master01，则将该节点的配置文件复制到其余两个主节点；例如 master03 为第一个 master 节点，则将它的 k8s 配置复制到 master02 和 master01。\n```\n- 复制文件到 master02\n```\nssh root@master02.k8s.io mkdir -p /etc/kubernetes/pki/etcd\nscp /etc/kubernetes/admin.conf root@master02.k8s.io:/etc/kubernetes\nscp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master02.k8s.io:/etc/kubernetes/pki\nscp /etc/kubernetes/pki/etcd/ca.* root@master02.k8s.io:/etc/kubernetes/pki/etcd\n```\n- 复制文件到 master03\n\n```\nssh root@master03.k8s.io mkdir -p /etc/kubernetes/pki/etcd\nscp /etc/kubernetes/admin.conf root@master03.k8s.io:/etc/kubernetes\nscp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master03.k8s.io:/etc/kubernetes/pki\nscp /etc/kubernetes/pki/etcd/ca.* root@master03.k8s.io:/etc/kubernetes/pki/etcd\n```\n- master节点加入集群\n\n&#8195;master02 和 master03 服务器上都执行加入集群操作\n\n```bash\nkubeadm join master.k8s.io:16443 --token i77yg1.1eype0c53jsanoge --discovery-token-ca-cert-hash sha256:8f0a817012ab333a057b6a7410e65971be20b95c1b75fc4015f8f3b6785f626f --experimental-control-plane\n```\n&#8195;如果加入失败想重新尝试，请输入 kubeadm reset 命令清除之前的设置，重新执行从“复制密钥”和“加入集群”这两步\n\n&#8195;如果是master加入，请在最后面加上 --experimental-control-plane 这个参数\n\n```bash\n# 显示安装过程:\n\nThis node has joined the cluster and a new control plane instance was created:\n\n* Certificate signing request was sent to apiserver and approval was received.\n* The Kubelet was informed of the new secure connection details.\n* Master label and taint were applied to the new node.\n* The Kubernetes control plane instances scaled up.\n* A new etcd member was added to the local/stacked etcd cluster.\n\nTo start administering your cluster from this node, you need to run the following as a regular user:\n\n        mkdir -p $HOME/.kube\n        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n        sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nRun 'kubectl get nodes' to see this node join the cluster.\n```\n- 配置kubectl环境变量\n```bash\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n# 指令补全\n\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n
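\n加入完成后，建议在任一 master 上确认新的控制平面节点与 etcd 成员均已就绪（示例检查命令；etcd 成员也可按上文第七节的 etcdctl 方式查看）：\n\n```bash\nkubectl get nodes\nkubectl -n kube-system get pods -o wide | grep -E \"etcd|kube-apiserver\"\n```\n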
\n### 2、node节点加入集群\n\n&#8195;除了让master节点加入集群组成高可用外，slave节点也要加入集群中。\n\n&#8195;这里将k8s-node-01、k8s-node-02、k8s-node-03加入集群，进行工作\n\n&#8195;输入初始化k8s master时候提示的加入命令，如下：\n\n```\nkubeadm join master.k8s.io:16443 --token i77yg1.1eype0c53jsanoge --discovery-token-ca-cert-hash sha256:8f0a817012ab333a057b6a7410e65971be20b95c1b75fc4015f8f3b6785f626f\n```\n&#8195;node节点加入，不需要加上 --experimental-control-plane 这个参数\n\n### 3、如果忘记加入集群的token和sha256 (如正常则跳过)\n\n- 显示获取token列表\n\n```\nkubeadm token list\n```\n\n默认情况下 Token 的过期时间是24小时，如果 Token 过期以后，可以输入以下命令，生成新的 Token\n\n```\nkubeadm token create\n```\n\n- 获取ca证书sha256编码hash值\n\n```\nopenssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'\n```\n\n拼接命令\n```\nkubeadm join master.k8s.io:16443 --token 882ik4.9ib2kb0eftvuhb58 --discovery-token-ca-cert-hash sha256:0b1a836894d930c8558b350feeac8210c85c9d35b6d91fde202b870f3244016a\n\n如果是master加入，请在最后面加上 --experimental-control-plane 这个参数\n```\n\n### 4、查看各个节点加入集群情况\n```\nkubectl get nodes -o wide\n\n```\n\n# 十、从集群中删除 Node\n\n- Master节点：\n\n```\nkubectl drain <node name> --delete-local-data --force --ignore-daemonsets\nkubectl delete node <node name>\n```\n\n- Slave节点：\n\n```\nkubeadm reset\n```\n\n## 初始化失败\n```bash\nkubeadm reset\nifconfig cni0 down\nip link delete cni0\nifconfig flannel.1 down\nip link delete flannel.1\nrm -rf /var/lib/cni/\nrm -rf /var/lib/etcd/*\n```\n\n参考资料：\n\nhttp://www.mydlq.club/article/4/\n\nhttps://kuboard.cn/install/install-kubernetes.html#%E5%88%9D%E5%A7%8B%E5%8C%96%E7%AC%AC%E4%B8%80%E4%B8%AAmaster%E8%8A%82%E7%82%B9\n\nhttps://blog.51cto.com/fengwan/2426528?source=dra  kubeadm搭建高可用kubernetes 1.15.1\n\nhttps://segmentfault.com/a/1190000018741112?utm_source=tag-newest  Kubernetes的几种主流部署方式02-kubeadm部署高可用集群\n"
  },
  {
    "path": "kubeadm/k8S-HA-V1.15.3-Flannel-开启防火墙版.md",
    "content": "# 环境介绍：\n```bash\nCentOS： 7.6\nDocker： docker-ce-18.09.9\nKubernetes： 1.15.3\nKubeadm： 1.15.3\nKubelet： 1.15.3\nKubectl： 1.15.3\n```  \n# 部署介绍：\n\n&#8195;创建高可用首先先有一个 Master 节点，然后再让其他服务器加入组成三个 Master 节点高可用，然后再将工作节点 Node 加入。下面将描述每个节点要执行的步骤：\n```bash\nMaster01： 二、三、四、五、六、七、八、九、十一\nMaster02、Master03： 二、三、五、六、四、九\nnode01、node02、node03： 二、五、六、九\n```\n# 防火墙配置\n```bash\n1、防火墙策略\nyum install iptables iptables-services -y\n\ncat > /etc/sysconfig/iptables << \\EOF\n# Generated by iptables-save v1.4.21 on Thu Aug  1 01:26:09 2019\n*filter\n:INPUT ACCEPT [0:0]\n:FORWARD ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:RH-Firewall-1-INPUT - [0:0]\n-A INPUT -j RH-Firewall-1-INPUT\n-A FORWARD -j RH-Firewall-1-INPUT\n-A RH-Firewall-1-INPUT -i lo -j ACCEPT\n-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.0/24 -p tcp -m tcp --dport 22 -j ACCEPT\n-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 22 -j DROP\n# k8s 服务器公网和内网IP，VIP都加上\n-A RH-Firewall-1-INPUT -s 192.168.56.200/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.11/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.12/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.13/32 -j ACCEPT\n-A RH-Firewall-1-INPUT -s 192.168.56.14/32 -j ACCEPT\n# keepalived\n-A RH-Firewall-1-INPUT -p vrrp -j ACCEPT\n# serviceSubnet rules\n-A RH-Firewall-1-INPUT -s 10.96.0.0/12 -j ACCEPT\n# podSubnet rules\n-A RH-Firewall-1-INPUT -s 10.244.0.0/16 -j ACCEPT\n# port rules\n-A RH-Firewall-1-INPUT -s 192.168.56.1/32 -p tcp -m multiport --dports 80,443,1080,6443,16443 -j ACCEPT\n#\n-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT\n-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited\nCOMMIT\n# Completed on Thu Aug  1 01:26:09 2019\nEOF\n\nsystemctl restart iptables.service\nsystemctl enable iptables.service\n\niptables -nvL\n\n2、hosts.deny配置(注意需要注释掉)\nsed -ri 's/.*all:all.*/#all:all/g' /etc/hosts.deny\ncat /etc/hosts.deny\n```\n# 集群架构：\n\n  ![kubeadm高可用架构图](https://github.com/Lancger/opsfull/blob/master/images/kubeadm-ha.jpg)\n \n# 一、kuberadm 简介\n\n### 1、Kuberadm 作用\n\n&#8195;Kubeadm 是一个工具，它提供了 kubeadm init 以及 kubeadm join 这两个命令作为快速创建 kubernetes 集群的最佳实践。\n\n&#8195;kubeadm 通过执行必要的操作来启动和运行一个最小可用的集群。它被故意设计为只关心启动集群，而不是之前的节点准备工作。同样的，诸如安装各种各样值得拥有的插件，例如 Kubernetes Dashboard、监控解决方案以及特定云提供商的插件，这些都不在它负责的范围。\n\n&#8195;相反，我们期望由一个基于 kubeadm 从更高层设计的更加合适的工具来做这些事情；并且，理想情况下，使用 kubeadm 作为所有部署的基础将会使得创建一个符合期望的集群变得容易。\n\n### 2、Kuberadm 功能\n```bash\nkubeadm init： 启动一个 Kubernetes 主节点\nkubeadm join： 启动一个 Kubernetes 工作节点并且将其加入到集群\nkubeadm upgrade： 更新一个 Kubernetes 集群到新版本\nkubeadm config： 如果使用 v1.7.x 或者更低版本的 kubeadm 初始化集群，您需要对集群做一些配置以便使用 kubeadm upgrade 命令\nkubeadm token： 管理 kubeadm join 使用的令牌\nkubeadm reset： 还原 kubeadm init 或者 kubeadm join 对主机所做的任何更改\nkubeadm version： 打印 kubeadm 版本\nkubeadm alpha： 预览一组可用的新功能以便从社区搜集反馈\n```\n### 3、功能版本\n\n<table border=\"0\">\n    <tr>\n        <td><strong>Area<strong></td>\n        <td><strong>Maturity Level<strong></td>\n    </tr>\n    <tr>\n        <td>Command line UX</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>Implementation</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>Config file API</td>\n        <td>beta</td>\n    </tr>\n    <tr>\n        <td>CoreDNS</td>\n        <td>GA</td>\n    </tr>\n    <tr>\n        <td>kubeadm alpha subcommands</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>High availability</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n        <td>DynamicKubeletConfig</td>\n        <td>alpha</td>\n    </tr>\n    <tr>\n 
       <td>Self-hosting</td>\n        <td>alpha</td>\n    </tr>\n</table>\n            \n# 二、前期准备\n\n### 1、虚拟机分配说明\n\n<table border=\"0\">\n    <tr>\n        <td><strong>地址<strong></td>\n        <td><strong>主机名</td>\n        <td><strong>内存&CPU</td>\n        <td><strong>角色</td>\n    </tr>\n    <tr>\n        <td>10.199.1.200</td>\n        <td>-</td>\n        <td>-</td>\n        <td>vip</td>\n    </tr>\n    <tr>\n        <td>10.199.1.136</td>\n        <td>k8s-master-01</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>10.199.1.137</td>\n        <td>k8s-master-02</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>10.199.1.138</td>\n        <td>k8s-master-03</td>\n        <td>2C & 2G</td>\n        <td>master</td>\n    </tr>\n    <tr>\n        <td>10.199.1.139</td>\n        <td>k8s-node-01</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n    <tr>\n        <td>10.199.1.140</td>\n        <td>k8s-node-02</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n    <tr>\n        <td>10.199.1.141</td>\n        <td>k8s-node-03</td>\n        <td>4C & 8G</td>\n        <td>node</td>\n    </tr>\n</table>\n\n### 2、各个节点端口占用\n\n- Master 节点\n\n<table border=\"0\">\n    <tr>\n        <td><strong>规则<strong></td>\n        <td><strong>方向</td>\n        <td><strong>端口范围</td>\n        <td><strong>作用</td>\n        <td><strong>使用者</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>6443*</td>\n        <td>Kubernetes API</td>\n        <td>server All</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>2379-2380</td>\n        <td>etcd server</td>\n        <td>client API kube-apiserver, etcd</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10250</td>\n        <td>Kubernetes API</td>\n        <td>Self, Control plane</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10251</td>\n        <td>kube-scheduler</td>\n        <td>Self</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10252</td>\n        <td>kube-controller-manager</td>\n        <td>Self</td>\n    </tr>\n</table>\n\n- node 节点\n\n<table border=\"0\">\n    <tr>\n        <td><strong>规则<strong></td>\n        <td><strong>方向</td>\n        <td><strong>端口范围</td>\n        <td><strong>作用</td>\n        <td><strong>使用者</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>10250</td>\n        <td>Kubernetes API</td>\n        <td>Self, Control plane</td>\n    </tr>\n    <tr>\n        <td>TCP</td>\n        <td>Inbound 入口</td>\n        <td>30000-32767</td>\n        <td>NodePort Services**</td>\n        <td>All</td>\n    </tr>\n</table>\n    \n### 3、基础环境设置\n\n&#8195;Kubernetes 需要一定的环境来保证正常运行，如各个节点时间同步，主机名称解析，关闭防火墙等等。\n\n1、主机名称解析\n\n&#8195;分布式系统环境中的多主机通信通常基于主机名称进行，这在 IP 地址存在变化的可能性时为主机提供了固定的访问人口，因此一般需要有专用的 DNS 服务负责解决各节点主机 不过，考虑到此处部署的是测试集群，因此为了降低系复杂度，这里将基于 hosts 的文件进行主机名称解析。\n\n2、修改hosts和免key登录\n\n```bash\n#分别进入不同服务器，进入 /etc/hosts 进行编辑\n\ncat > /etc/hosts << \\EOF\n127.0.0.1     localhost  localhost.localdomain localhost4 localhost4.localdomain4\n::1           localhost  localhost.localdomain localhost6 localhost6.localdomain6\n10.199.1.200      k8s-vip         master      master.k8s.io\n10.199.1.136      k8s-master-01   master01    master01.k8s.io\n10.199.1.137      k8s-master-02   master02    master02.k8s.io\n10.199.1.138      k8s-master-03   master03  
  master03.k8s.io\n10.199.1.139      k8s-node-01     node01      node01.k8s.io\n10.199.1.140      k8s-node-02     node02      node02.k8s.io\n10.199.1.141      k8s-node-03     node03      node03.k8s.io\nEOF\n\n#root用户免密登录(此处公钥为示例，请替换为各自 master01 的 id_rsa.pub 内容)\nmkdir -p /root/.ssh/\nchmod 700 /root/.ssh/\necho 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7bRm20od1b3rzW3ZPLB5NZn3jQesvfiz2p0WlfcYJrFHfF5Ap0ubIBUSQpVNLn94u8ABGBLboZL8Pjo+rXQPkIcObJxoKS8gz6ZOxcxJhldudbadabdanKAAKAKKKKKKKKKKKKKKKKKKKKKKK root@k8s-master-01' > /root/.ssh/authorized_keys\nchmod 400 /root/.ssh/authorized_keys\n```\n\n3、修改hostname\n\n```bash\n#分别进入不同的服务器修改 hostname 名称\n\n# 修改 10.199.1.136 服务器\nhostnamectl  set-hostname  k8s-master-01\n\n# 修改 10.199.1.137 服务器\nhostnamectl  set-hostname  k8s-master-02\n\n# 修改 10.199.1.138 服务器\nhostnamectl  set-hostname  k8s-master-03\n\n# 修改 10.199.1.139 服务器\nhostnamectl  set-hostname  k8s-node-01\n\n# 修改 10.199.1.140 服务器\nhostnamectl  set-hostname  k8s-node-02\n\n# 修改 10.199.1.141 服务器\nhostnamectl  set-hostname  k8s-node-03\n```\n\n4、主机时间同步\n\n```bash\n#将各个服务器的时间同步，并设置开机启动同步时间服务\n\nsystemctl restart chronyd.service\nsystemctl enable chronyd.service\n```\n\n5、关闭防火墙服务(如使用前文的 iptables 策略，这里只需停用 firewalld)\n```bash\nsystemctl stop firewalld\nsystemctl disable firewalld\n```\n\n6、关闭并禁用SELinux\n```bash\n# 若当前启用了 SELinux 则需要临时设置其当前状态为 permissive\nsetenforce 0\n\n# 编辑 /etc/selinux/config 文件，以彻底禁用 SELinux\nsed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config\n\n# 查看selinux状态\ngetenforce \n\n# 如果为permissive，则执行reboot重新启动即可\n```\n\n7、禁用 Swap 设备\n\n&#8195;kubeadm 默认会预先检查当前主机是否禁用了 Swap 设备，并在未禁用时强制终止部署过程。因此，在主机内存资源充裕的条件下，需要禁用所有的 Swap 设备。\n\n```bash\n# 关闭当前已启用的所有 Swap 设备\nswapoff -a && sysctl -w vm.swappiness=0\n\nsed -ri 's/.*swap.*/#&/' /etc/fstab\ncat /etc/fstab\n\n# 或者：编辑 fstab 配置文件，注释掉标识为 Swap 设备的所有行\nvi /etc/fstab\n\nUUID=9be41058-76a6-4588-8e3f-5b44604d8de1 /                       xfs     defaults,noatime        0 0\nUUID=4489cc8f-1885-4e17-bfe7-8652fd1d3feb /boot                   xfs     defaults,noatime        0 0\n#UUID=0f5ae5f1-4872-471f-9f3a-f172a43fc1ff swap                    swap    defaults,noatime        0 0\n```\n\n8、设置系统参数\n\n&#8195;设置允许路由转发，不对bridge的数据进行处理\n\n```bash\n#创建 /etc/sysctl.d/k8s.conf 文件\n\ncat > /etc/sysctl.d/k8s.conf << \\EOF\nvm.swappiness = 0\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nEOF\n\n#挂载br_netfilter\nmodprobe br_netfilter\n\n#生效配置文件\nsysctl -p /etc/sysctl.d/k8s.conf\n\n#查看是否生成相关文件\nls /proc/sys/net/bridge\n```\n\n9、资源配置文件\n\n`/etc/security/limits.conf` 是 Linux 资源使用配置文件，用来限制用户对系统资源的使用\n\n```bash\necho \"* soft nofile 65536\" >> /etc/security/limits.conf\necho \"* hard nofile 65536\" >> /etc/security/limits.conf\necho \"* soft nproc 65536\"  >> /etc/security/limits.conf\necho \"* hard nproc 65536\"  >> /etc/security/limits.conf\necho \"* soft memlock unlimited\"  >> /etc/security/limits.conf\necho \"* hard memlock unlimited\"  >> /etc/security/limits.conf\n```\n\n10、安装依赖包以及相关工具\n\n```bash\nyum install -y epel-release\n\nyum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl\n```\n\n# 三、安装Keepalived\n\n- keepalived介绍： 是集群管理中保证集群高可用的一个服务软件，其功能类似于heartbeat，用来防止单点故障\n- Keepalived作用： 为haproxy提供vip（10.199.1.200），在三个haproxy实例之间提供主备切换，降低其中一个haproxy失效时对服务的影响。\n\n### 1、yum安装Keepalived\n```bash\n# 安装keepalived(如之前对账号文件加过锁，需要先解锁才能创建服务账号)\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\nyum install -y keepalived\n```\n\n### 2、配置Keepalived\n```bash\ncat <<EOF > /etc/keepalived/keepalived.conf\n! 
Configuration File for keepalived\n\n# 主要是配置故障发生时的通知对象以及机器标识。\nglobal_defs {\n   # 标识本节点的字符串，通常为 hostname，但不一定非得是 hostname。故障发生时，邮件通知会用到。\n   router_id LVS_K8S\n}\n\n# 用来做健康检查的，当检查失败时会将 vrrp_instance 的 priority 减少相应的值。\nvrrp_script check_haproxy {\n    script \"killall -0 haproxy\"   #根据进程名称检测进程是否存活\n    interval 3\n    weight -2\n    fall 10\n    rise 2\n}\n\n# vrrp_instance 用来定义对外提供服务的 VIP 区域及其相关属性。\nvrrp_instance VI_1 {\n    state MASTER   #当前节点为MASTER，其他两个节点设置为 BACKUP\n    interface bond0 #改为自己的网卡\n    virtual_router_id 51\n    priority 200\n    advert_int 1\n    authentication {\n        auth_type PASS\n        auth_pass 35f18af7190d51c9f7f78f37300a0cbd\n    }\n    virtual_ipaddress {\n        10.199.1.200/22  #虚拟IP，即VIP。注意掩码一定要写，不然会出现部分机器VIP正常、部分机器异常的问题\n    }\n    track_script {\n        check_haproxy\n    }\n}\nEOF\n```\n当前节点的配置中 state 配置为 MASTER，其它两个节点设置为 BACKUP\n\n```bash\n配置说明：\n\n    virtual_ipaddress： vip\n    track_script： 执行上面定义好的检测的script\n    interface： 节点固有IP（非VIP）的网卡，用来发VRRP包。\n    virtual_router_id： 取值在0-255之间，用来区分多个instance的VRRP组播\n    advert_int： 发VRRP包的时间间隔，即多久进行一次master选举（可以认为是健康检查时间间隔）。\n    authentication： 认证区域，认证类型有PASS和HA（IPSEC），推荐使用PASS（密码只识别前8位）。\n    state： 可以是MASTER或BACKUP，不过当其他节点keepalived启动时会将priority比较大的节点选举为MASTER，因此该项其实没有实质用途。\n    priority： 用来选举master的，要成为master，这个选项的值最好高于其他机器50个点，该项取值范围是1-255（在此范围之外会被识别成默认值100）。\n    \n# 1、注意防火墙需要放开vrrp协议(不然会出现脑裂现象，三台主机都存在VIP的情况)\n#-A INPUT -p vrrp -j ACCEPT\n-A RH-Firewall-1-INPUT -p vrrp -j ACCEPT\n\n# 2、注意上面配置的 script \"killall -0 haproxy\" 根据进程名称检测进程是否存活，检测失败时会在 /var/log/messages 中每隔一秒留下日志记录\n# tail -100f /var/log/messages\n\nSep 27 10:54:16 tw19410s1 Keepalived_vrrp[9113]: /usr/bin/killall -0 haproxy exited with status 1\n\n# 3、VRRP实例绑定的IP对于所使用的网卡需要合法\n比如使用网卡“bond0”，该网卡的掩码为“255.255.255.0”，那么VRRP实例所绑定IP的掩码也必须为“255.255.255.0”，即具有“xxx.xxx.xxx.xxx/24”的形式。\n\ntcpdump -ani any vrrp | grep vrid\n\n特别需要注意的是，同一网段中的virtual_router_id(vrid)的值不能重复，否则会干扰其他Keepalived集群的正常运行。\n```\n\n### 3、启动Keepalived\n```bash\n# 设置开机启动\nsystemctl enable keepalived\n\n# 启动keepalived\nsystemctl restart keepalived\n\n# 查看启动状态\nsystemctl status keepalived\n```\n### 4、查看网络状态\n\nkeepalived 配置中 state 为 MASTER 的节点启动后，查看网络状态，可以看到虚拟IP已经加入到绑定的网卡中\n\n```bash\n[root@k8s-master-01 ~]# ip address show bond0\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 6c:92:bf:27:9e:ed brd ff:ff:ff:ff:ff:ff\n    inet 10.199.1.136/22 brd 10.199.3.255 scope global bond0\n       valid_lft forever preferred_lft forever\n    inet 10.199.1.200/32 scope global bond0\n       valid_lft forever preferred_lft forever\n\n# 当关掉当前节点的keepalived服务后将进行虚拟IP转移，将会从 state 为 BACKUP 的节点中推选出新的 MASTER，在该节点上查看网卡，将会看到虚拟IP\n```\n\n# 四、安装haproxy\n\n&#8195;此处的haproxy为apiserver提供反向代理，haproxy将所有请求轮询转发到每个master节点上。相对于仅使用keepalived主备模式、只有单个master节点承载流量的方式，这种方式更加合理、健壮。\n\n### 1、yum安装haproxy\n```bash\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\n\nyum install -y haproxy\n```\n\n### 2、配置haproxy\n```bash\ncat > /etc/haproxy/haproxy.cfg << EOF\n#---------------------------------------------------------------------\n# Global settings\n#---------------------------------------------------------------------\nglobal\n    # to have these messages end up in /var/log/haproxy.log you will\n    # need to:\n    # 1) configure syslog to accept network log events.  
This is done\n    #    by adding the '-r' option to the SYSLOGD_OPTIONS in\n    #    /etc/sysconfig/syslog\n    # 2) configure local2 events to go to the /var/log/haproxy.log\n    #   file. A line like the following can be added to\n    #   /etc/sysconfig/syslog\n    #\n    #    local2.*                       /var/log/haproxy.log\n    #\n    log         127.0.0.1 local2\n    \n    chroot      /var/lib/haproxy\n    pidfile     /var/run/haproxy.pid\n    maxconn     4000\n    user        haproxy\n    group       haproxy\n    daemon \n       \n    # turn on stats unix socket\n    stats socket /var/lib/haproxy/stats\n#---------------------------------------------------------------------\n# common defaults that all the 'listen' and 'backend' sections will\n# use if not designated in their block\n#---------------------------------------------------------------------  \ndefaults\n    mode                    http\n    log                     global\n    option                  httplog\n    option                  dontlognull\n    option http-server-close\n    option forwardfor       except 127.0.0.0/8\n    option                  redispatch\n    retries                 3\n    timeout http-request    10s\n    timeout queue           1m\n    timeout connect         10s\n    timeout client          1m\n    timeout server          1m\n    timeout http-keep-alive 10s\n    timeout check           10s\n    maxconn                 3000\n#---------------------------------------------------------------------\n# kubernetes apiserver frontend which proxys to the backends\n#--------------------------------------------------------------------- \nfrontend kubernetes-apiserver\n    mode                 tcp\n    bind                 *:16443\n    option               tcplog\n    default_backend      kubernetes-apiserver    \n#---------------------------------------------------------------------\n# round robin balancing between the various backends\n#---------------------------------------------------------------------\nbackend kubernetes-apiserver\n    mode        tcp\n    balance     roundrobin\n    server      master01.k8s.io   10.199.1.136:6443 check\n    server      master02.k8s.io   10.199.1.137:6443 check\n    server      master03.k8s.io   10.199.1.138:6443 check\n#---------------------------------------------------------------------\n# collection haproxy statistics message\n#---------------------------------------------------------------------\nlisten stats\n    bind                 *:1080\n    stats auth           admin:awesomePassword\n    stats refresh        5s\n    stats realm          HAProxy\\ Statistics\n    stats uri            /admin?stats\nEOF\n```\nhaproxy配置在其他master节点上(10.199.1.137和10.199.1.138)相同\n\n### 3、启动并检测haproxy\n```bash\n# 设置开机启动\nsystemctl enable haproxy\n\n# 开启haproxy\nsystemctl restart haproxy\n\n# 查看启动状态\nsystemctl status haproxy\n```\n\n### 4、检测haproxy端口\n```bash\nss -lnt | grep -E \"16443|1080\"\n\nnc -zv master.k8s.io 16443\nnc -zv master.k8s.io 1080\n```\n\n# 五、安装Docker (所有节点)\n\n### 1、移除之前安装过的Docker\n```bash\nsudo yum remove -y docker \\\n                  docker-client \\\n                  docker-client-latest \\\n                  docker-common \\\n                  docker-latest \\\n                  docker-latest-logrotate \\\n                  docker-logrotate \\\n                  docker-selinux \\\n                  docker-engine-selinux \\\n                  docker-ce-cli \\\n                  docker-engine\n                  \n# 查看还有没有存在的docker组件\nrpm -qa|grep 
docker\n\n# 有则通过命令 yum -y remove XXX 来删除，比如：\nyum remove docker-ce-cli\n```\n\n### 2、配置docker的yum源\n\n下面两个镜像源选择其一即可，由于官方下载速度比较慢，推荐用阿里镜像源\n\n- 阿里镜像源\n\n```bash\nsudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo\n```\n\n- Docker官方镜像源\n```bash\nsudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo\n```\n\n### 3、安装Docker：\n\n```bash\n# 显示docker-ce所有可安装版本：\nyum list docker-ce --showduplicates | sort -r\n\n# 安装指定docker版本\nsudo yum install docker-ce-18.09.9-3.el7.x86_64 -y\n\n# 启动docker并设置docker开机启动\nsystemctl enable docker\nsystemctl start docker\n\n# 确认一下 iptables filter 表中 FORWARD 链的默认策略(policy)为 ACCEPT\niptables -nvL\n\nChain FORWARD (policy ACCEPT 0 packets, 0 bytes)\n pkts bytes target     prot opt in     out     source               destination         \n    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           \n    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED\n    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           \n    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0      \n    \n# Docker 从 1.13 版本开始调整了默认的防火墙规则，将 iptables filter 表中 FORWARD 链的默认策略改为 DROP，这会导致 Kubernetes 集群中跨 Node 的 Pod 无法通信。但这里安装 Docker 18.06 及之后版本时，发现默认策略又改回了 ACCEPT（不确定是从哪个版本改回的）；线上使用的 17.06 版本仍需要手动调整该策略。\n\n# 执行下面命令\niptables -P FORWARD ACCEPT\n\n# 修改docker的配置\nvim /usr/lib/systemd/system/docker.service\n\n# 增加下面命令(ExecReload后面新增ExecStartPost=...)\n...\nExecReload=/bin/kill -s HUP $MAINPID\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\n...\n\n# 配置docker加速器\ncat > /etc/docker/daemon.json << \\EOF\n{\n  \"exec-opts\": [\"native.cgroupdriver=systemd\"],\n  \"registry-mirrors\" : [\n    \"https://ot2k4d59.mirror.aliyuncs.com/\"\n  ]\n}\nEOF\n\n# 重启Docker\nsystemctl daemon-reload\nsystemctl restart docker\n\ndocker info|grep -i cgroup\n```\n### 4、docker最终的服务文件\n```bash\n#注意，有变量的地方需要使用转义符号\n\ncat > /usr/lib/systemd/system/docker.service << EOF\n[Unit]\nDescription=Docker Application Container Engine\nDocumentation=https://docs.docker.com\nBindsTo=containerd.service\nAfter=network-online.target firewalld.service containerd.service\nWants=network-online.target\nRequires=docker.socket\n\n[Service]\nType=notify\n# the default is not to use systemd for cgroups because the delegate issues still\n# exists and systemd currently does not support the cgroup feature set required\n# for containers run by docker\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd\nExecReload=/bin/kill -s HUP \\$MAINPID\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\nTimeoutSec=0\nRestartSec=2\nRestart=always\n\n# Note that StartLimit* options were moved from \"Service\" to \"Unit\" in systemd 229.\n# Both the old, and new location are accepted by systemd 229 and up, so using the old location\n# to make them work for either version of systemd.\nStartLimitBurst=3\n\n# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.\n# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make\n# this option work for either version of systemd.\nStartLimitInterval=60s\n\n# Having non-zero Limit*s causes performance 
problems due to accounting overhead\n# in the kernel. We recommend using cgroups to do container-local accounting.\nLimitNOFILE=infinity\nLimitNPROC=infinity\nLimitCORE=infinity\n\n# Comment TasksMax if your systemd version does not support it.\n# Only systemd 226 and above support this option.\nTasksMax=infinity\n\n# set delegate yes so that systemd does not reset the cgroups of docker containers\nDelegate=yes\n\n# kill only the docker process, not all processes in the cgroup\nKillMode=process\n\n[Install]\nWantedBy=multi-user.target\nEOF\n\n# 重启Docker\nsystemctl daemon-reload\nsystemctl restart docker\nsystemctl enable docker\n```\n\n# 六、安装kubeadm、kubelet\n\n### 1、配置可用的国内yum源用于安装：\n```\ncat <<EOF > /etc/yum.repos.d/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\ngpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg\nEOF\n```\n\n### 2、安装kubelet\n\n```\n# 需要在每台机器上都安装以下的软件包：\n     kubeadm: 用来初始化集群的指令。\n     kubelet: 在集群中的每个节点上用来启动 pod 和 container 等。\n     kubectl: 用来与集群通信的命令行工具。\n\n# 查看kubelet版本列表\nyum list kubelet --showduplicates | sort -r \n\n# 安装kubelet\nyum install -y kubelet-1.15.3-0\n\n# 启动kubelet并设置开机启动\nsystemctl enable kubelet \nsystemctl start kubelet\n\n# 检查状态\n检查状态,发现是failed状态，正常，kubelet会10秒重启一次，需等下面完成初始化master节点后即可正常\nsystemctl status kubelet\n\n# 查看kubelet日志\njournalctl -u kubelet --no-pager\n```\n\n### 3、安装kubeadm\n\n```\n# 负责初始化集群\n# 1、查看kubeadm版本列表\nyum list kubeadm --showduplicates | sort -r \n\n# 2、安装kubeadm\nyum install -y kubeadm-1.15.3-0\n\n# 安装 kubeadm 时候会默认安装 kubectl ，所以不需要单独安装kubectl\n\n# 3、重启服务器\n为了防止发生某些未知错误，这里我们重启下服务器，方便进行后续操作\nreboot\n```\n\n# 七、初始化第一个kubernetes master节点\n\n```\n# 因为需要绑定虚拟IP，所以需要首先先查看虚拟IP启动这几台master机子哪台上\n\n[root@k8s-master-01 ~]# ip address show bond0\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 6c:92:bf:27:9e:ed brd ff:ff:ff:ff:ff:ff\n    inet 10.199.1.136/22 brd 10.19.3.255 scope global bond0\n       valid_lft forever preferred_lft forever\n    inet 10.199.1.200/32 scope global bond0\n       valid_lft forever preferred_lft forever\n7: bond0.101@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 6c:92:bf:27:9e:ed brd ff:ff:ff:ff:ff:ff\n    inet 16.201.26.36/24 brd 16.201.26.255 scope global bond0.101\n       valid_lft forever preferred_lft forever\n\n可以看到虚拟IP 10.199.1.200  和 服务器IP 10.199.1.136 在一台机子上，所以初始化kubernetes第一个master要在master01机子上进行安装\n```\n\n### 1、创建kubeadm配置的yaml文件\n```\n# 1、创建kubeadm配置的yaml文件\n\nrm -f ./kubeadm-config.yaml\n\nexport MASTER_NODE1=10.199.1.136\nexport APISERVER_NAME=master.k8s.io\nexport POD_SUBNET=10.244.0.0/16\nexport SVC_SUBNET=10.96.0.0/12\n\ncat <<EOF > ./kubeadm-config.yaml\napiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- groups:\n  - system:bootstrappers:kubeadm:default-node-token\n  token: abcdef.0123456789abcdef\n  ttl: 24h0m0s\n  usages:\n  - signing\n  - authentication\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: ${MASTER_NODE1}  #这里填写第一个初始化的master的ip\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  name: k8s-master-01 #注意这里需要调整为自己的节点\n  taints:\n  - effect: NoSchedule\n    key: node-role.kubernetes.io/master\n---\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nclusterName: 
kubernetes\nkubernetesVersion: v1.15.3\ncertificatesDir: /etc/kubernetes/pki\ncontrollerManager: {}\ncontrolPlaneEndpoint: \"${APISERVER_NAME}:16443\" # 这里写vip的地址或域名加上端口\nimageRepository: registry.aliyuncs.com/google_containers # 使用阿里云镜像\napiServer:\n  timeoutForControlPlane: 4m0s\n  certSANs:\n    - k8s-master-01\n    - k8s-master-02\n    - k8s-master-03\n    - master.k8s.io\n    - 10.199.1.200\n    - 10.199.1.136\n    - 10.199.1.137\n    - 10.199.1.138\n    - 127.0.0.1\ndns:\n  type: CoreDNS\netcd:\n  local:\n    dataDir: /var/lib/etcd\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: ${POD_SUBNET}\n  serviceSubnet: ${SVC_SUBNET}\nscheduler: {}\n---\n# 开启 IPVS 模式\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: ipvs # kube-proxy 模式\nEOF\n\nkubeadm init --config=kubeadm-config.yaml --upload-certs\n\n以下两个地方设置： \n- certSANs： 虚拟ip地址（为了安全起见，把所有集群地址都加上） \n- controlPlaneEndpoint： 虚拟IP:监控端口号\n\n配置说明：\n\n    imageRepository： registry.aliyuncs.com/google_containers (使用阿里云镜像仓库)\n    podSubnet： 10.244.0.0/16 (#pod地址池)\n    serviceSubnet： 10.96.0.0/12 (#service地址池)\n```\n\n### 2、初始化第一个master节点\n```\nkubeadm init --config=kubeadm-config.yaml --upload-certs  #使用这个就不用做拷贝证书的操作\n\nkubeadm init --config kubeadm-config.yaml  #使用这个还需要手动做拷贝证书的操作\n\n#验证下端口是否通\nnc -zv master.k8s.io 6443\nnc -zv master.k8s.io 16443\n```\n日志\n```\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME/.kube\n  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n  sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of the control-plane node running the following command on each as root:\n\n  kubeadm join master.k8s.io:16443 --token abcdef.0123456789abcdef \\\n    --discovery-token-ca-cert-hash sha256:ab6da874166785bfe75acc4d6fd622bf821a7451837332e3a21a6106e346c8d5 \\\n    --control-plane --certificate-key 13284467f0141778898ffa33d340c0598cb757c6aa016f00da2165cd3eab4523\n\nPlease note that the certificate-key gives access to cluster sensitive data, keep it secret!\nAs a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use \n\"kubeadm init phase upload-certs --upload-certs\" to reload certs afterward.\n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join master.k8s.io:16443 --token abcdef.0123456789abcdef \\\n    --discovery-token-ca-cert-hash sha256:ab6da874166785bfe75acc4d6fd622bf821a7451837332e3a21a6106e346c8d5\n```\n### 执行结果中\n\n用于初始化第二、三个 master 节点\n```\n#初始化第二个master节点\nexport MASTER_NODE2=10.199.1.137\nkubeadm join master.k8s.io:16443 --apiserver-advertise-address ${MASTER_NODE2} --token abcdef.0123456789abcdef \\\n    --discovery-token-ca-cert-hash sha256:ab6da874166785bfe75acc4d6fd622bf821a7451837332e3a21a6106e346c8d5 \\\n    --control-plane --certificate-key 13284467f0141778898ffa33d340c0598cb757c6aa016f00da2165cd3eab4523\n\n#初始化第三个master节点    \nexport MASTER_NODE3=10.199.1.138\nkubeadm join master.k8s.io:16443 --apiserver-advertise-address ${MASTER_NODE3} --token abcdef.0123456789abcdef \\\n    --discovery-token-ca-cert-hash sha256:ab6da874166785bfe75acc4d6fd622bf821a7451837332e3a21a6106e346c8d5 \\\n    --control-plane --certificate-key 
13284467f0141778898ffa33d340c0598cb757c6aa016f00da2165cd3eab4523\n```\n用于初始化 worker 节点\n```\nkubeadm join master.k8s.io:16443 --token abcdef.0123456789abcdef \\\n    --discovery-token-ca-cert-hash sha256:ab6da874166785bfe75acc4d6fd622bf821a7451837332e3a21a6106e346c8d5\n```\n\n\n### 3、配置kubectl环境变量\n```bash\n# 配置环境变量\n\nrm -rf $HOME/.kube\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n# 指令补全\n\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n\n### 4、查看组件状态\n```bash\nkubectl get cs\n\nNAME                 STATUS    MESSAGE              ERROR\ncontroller-manager   Healthy   ok                   \nscheduler            Healthy   ok                   \netcd-0               Healthy   {\"health\": \"true\"}   \n\n# 查看pod状态\n[root@k8s-master-01 ~]# kubectl get pods --namespace=kube-system\nNAME                                    READY   STATUS    RESTARTS   AGE\ncoredns-78d4cf999f-5zt5z                0/1     Pending   0          7m32s    ---coredns没有启动\ncoredns-78d4cf999f-mkgsx                0/1     Pending   0          7m32s    ---coredns没有启动\netcd-k8s-master-01                      1/1     Running   0          6m39s\nkube-apiserver-k8s-master-01            1/1     Running   0          6m43s\nkube-controller-manager-k8s-master-01   1/1     Running   0          6m32s\nkube-proxy-88s74                        1/1     Running   0          7m32s\nkube-scheduler-k8s-master-01            1/1     Running   0          6m45s\n\n可以看到coredns没有启动，这是由于还没有配置网络插件，接下来配置下后再重新查看启动状态\n\n#检查ETCD服务\ndocker exec -it $(docker ps |grep etcd_etcd|awk '{print $1}') sh\netcdctl --endpoints=https://192.168.56.11:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.crt --cert-file=/etc/kubernetes/pki/etcd/server.crt --key-file=/etc/kubernetes/pki/etcd/server.key member list\n\netcdctl --endpoints=https://192.168.56.11:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.crt --cert-file=/etc/kubernetes/pki/etcd/server.crt --key-file=/etc/kubernetes/pki/etcd/server.key cluster-health\n```\n# 八、安装网络插件\n\n### 1、安装 calico 网络插件\n```\n# 安装 calico 网络插件\n# 参考文档 https://docs.projectcalico.org/v3.8/getting-started/kubernetes/\n\nexport POD_SUBNET=10.244.0.0/16\nrm -f calico.yaml\nwget https://docs.projectcalico.org/v3.8/manifests/calico.yaml\nsed -i \"s#192\\.168\\.0\\.0/16#${POD_SUBNET}#\" calico.yaml\nkubectl apply -f calico.yaml\n```\n\n### 2、安装 flannel 网络插件\n```bash\nexport POD_SUBNET=10.244.0.0/16\n\ncat > kube-flannel.yaml << EOF\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: flannel\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - pods\n    verbs:\n      - get\n  - apiGroups:\n      - \"\"\n    resources:\n      - nodes\n    verbs:\n      - list\n      - watch\n  - apiGroups:\n      - \"\"\n    resources:\n      - nodes/status\n    verbs:\n      - patch\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1beta1\nmetadata:\n  name: flannel\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: flannel\nsubjects:\n- kind: ServiceAccount\n  name: flannel\n  namespace: kube-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: flannel\n  namespace: kube-system\n---\nkind: ConfigMap\napiVersion: v1\nmetadata:\n  name: kube-flannel-cfg\n  namespace: kube-system\n  labels:\n    tier: node\n    app: flannel\ndata:\n  cni-conf.json: |\n    {\n      \"name\": \"cbr0\",\n      
\"plugins\": [\n        {\n          \"type\": \"flannel\",\n          \"delegate\": {\n            \"hairpinMode\": true,\n            \"isDefaultGateway\": true\n          }\n        },\n        {\n          \"type\": \"portmap\",\n          \"capabilities\": {\n            \"portMappings\": true\n          }\n        }\n      ]\n    }\n  net-conf.json: |\n    {\n      \"Network\": \"${POD_SUBNET}\",\n      \"Backend\": {\n        \"Type\": \"vxlan\"\n      }\n    }\n---\napiVersion: extensions/v1beta1\nkind: DaemonSet\nmetadata:\n  name: kube-flannel-ds-amd64\n  namespace: kube-system\n  labels:\n    tier: node\n    app: flannel\nspec:\n  template:\n    metadata:\n      labels:\n        tier: node\n        app: flannel\n    spec:\n      hostNetwork: true\n      nodeSelector:\n        beta.kubernetes.io/arch: amd64\n      tolerations:\n      - operator: Exists\n        effect: NoSchedule\n      serviceAccountName: flannel\n      initContainers:\n      - name: install-cni\n        image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64\n        command:\n        - cp\n        args:\n        - -f\n        - /etc/kube-flannel/cni-conf.json\n        - /etc/cni/net.d/10-flannel.conflist\n        volumeMounts:\n        - name: cni\n          mountPath: /etc/cni/net.d\n        - name: flannel-cfg\n          mountPath: /etc/kube-flannel/\n      containers:\n      - name: kube-flannel\n        image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64\n        command:\n        - /opt/bin/flanneld\n        args:\n        - --ip-masq\n        - --kube-subnet-mgr\n        - --iface=bond0\n        resources:\n          requests:\n            cpu: \"100m\"\n            memory: \"50Mi\"\n          limits:\n            cpu: \"100m\"\n            memory: \"50Mi\"\n        securityContext:\n          privileged: true\n        env:\n        - name: POD_NAME\n          valueFrom:\n            fieldRef:\n              fieldPath: metadata.name\n        - name: POD_NAMESPACE\n          valueFrom:\n            fieldRef:\n              fieldPath: metadata.namespace\n        volumeMounts:\n        - name: run\n          mountPath: /run\n        - name: flannel-cfg\n          mountPath: /etc/kube-flannel/\n      volumes:\n        - name: run\n          hostPath:\n            path: /run\n        - name: cni\n          hostPath:\n            path: /etc/cni/net.d\n        - name: flannel-cfg\n          configMap:\n            name: kube-flannel-cfg\nEOF\n\n“Network”: “10.244.0.0/16”要和kubeadm-config.yaml配置文件中podSubnet: 10.244.0.0/16相同\n```\n\n### 2、创建flanner相关role和pod\n\n```\n# 应用生效\n[root@k8s-master-01 ~]# kubectl apply -f kube-flannel.yaml\nclusterrole.rbac.authorization.k8s.io/flannel created\nclusterrolebinding.rbac.authorization.k8s.io/flannel created\nserviceaccount/flannel created\nconfigmap/kube-flannel-cfg created\ndaemonset.extensions/kube-flannel-ds-amd64 created\n\n# 等待一会时间，再次查看各个pods的状态\n[root@k8s-master-01 ~]# kubectl get pods --namespace=kube-system\nNAME                                    READY   STATUS    RESTARTS   AGE\ncoredns-78d4cf999f-5zt5z                1/1     Running   0          12m    ---coredns启动成功\ncoredns-78d4cf999f-mkgsx                1/1     Running   0          12m    ---coredns启动成功\netcd-k8s-master-01                      1/1     Running   0          11m\nkube-apiserver-k8s-master-01            1/1     Running   0          12m\nkube-controller-manager-k8s-master-01   1/1     Running   0          11m\nkube-flannel-ds-amd64-7lj6m             1/1     Running   0   
       13s\nkube-proxy-88s74                        1/1     Running   0          12m\nkube-scheduler-k8s-master-01            1/1     Running   0          12m\n\n# 假如更换了网络插件，需要把coredns的pod重新创建，不然coredns的pod网络不通\n# 查看\nkubectl get pods --namespace kube-system\nkubectl get svc --namespace kube-system\n\n#删除coredns\nkubectl delete deployment coredns -n kube-system\nkubectl delete svc kube-dns -n kube-system\nkubectl delete cm coredns -n kube-system\n\n#重新部署coredns\nrm -f coredns.yaml.sed deploy.sh coredns.yml\nwget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed\nwget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh\nchmod +x deploy.sh\n./deploy.sh -i 10.96.0.10 > coredns.yml  #这里从 serviceSubnet=10.96.0.0/12 中选用10.96.0.10作为coredns地址\nkubectl apply -f coredns.yml\n```\n\n# 九、加入集群\n\n### 1、Master加入集群构成高可用\n```\n复制密钥到各个节点\n\n在master01 服务器上执行下面命令，将kubernetes相关文件复制到 master02、master03\n\n如果第一个master节点是在其他机器上初始化的，则将那台节点的配置文件复制到其余两个主节点。例如master03为第一个master节点，则将它的k8s配置复制到master02和master01。\n```\n- 复制文件到 master02\n```\nssh root@master02.k8s.io mkdir -p /etc/kubernetes/pki/etcd\nscp /etc/kubernetes/admin.conf root@master02.k8s.io:/etc/kubernetes\nscp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master02.k8s.io:/etc/kubernetes/pki\nscp /etc/kubernetes/pki/etcd/ca.* root@master02.k8s.io:/etc/kubernetes/pki/etcd\n```\n- 复制文件到 master03\n\n```\nssh root@master03.k8s.io mkdir -p /etc/kubernetes/pki/etcd\nscp /etc/kubernetes/admin.conf root@master03.k8s.io:/etc/kubernetes\nscp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master03.k8s.io:/etc/kubernetes/pki\nscp /etc/kubernetes/pki/etcd/ca.* root@master03.k8s.io:/etc/kubernetes/pki/etcd\n```\n- master节点加入集群\n\n&#8195;master02 和 master03 服务器上都执行加入集群操作\n\n```bash\nkubeadm join master.k8s.io:16443 --token i77yg1.1eype0c53jsanoge --discovery-token-ca-cert-hash sha256:8f0a817012ab333a057b6a7410e65971be20b95c1b75fc4015f8f3b6785f626f --control-plane\n```\n&#8195;如果加入失败想重新尝试，请输入 kubeadm reset 命令清除之前的设置，重新执行从“复制密钥”和“加入集群”这两步\n\n&#8195;如果是master加入，请在最后面加上 --control-plane 这个参数（旧版本 kubeadm 中该参数名为 --experimental-control-plane）\n\n```bash\n# 显示安装过程:\n\nThis node has joined the cluster and a new control plane instance was created:\n\n* Certificate signing request was sent to apiserver and approval was received.\n* The Kubelet was informed of the new secure connection details.\n* Master label and taint were applied to the new node.\n* The Kubernetes control plane instances scaled up.\n* A new etcd member was added to the local/stacked etcd cluster.\n\nTo start administering your cluster from this node, you need to run the following as a regular user:\n\n        mkdir -p $HOME/.kube\n        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n        sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nRun 'kubectl get nodes' to see this node join the cluster.\n```\n- 配置kubectl环境变量\n```bash\nmkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config\n\n# 指令补全\n\nyum install bash-completion -y\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n\n### 2、node节点加入集群\n\n&#8195;除了让master节点加入集群组成高可用外，工作节点也要加入集群中。\n\n&#8195;这里将k8s-node-01、k8s-node-02、k8s-node-03加入集群，进行工作\n\n&#8195;输入初始化k8s master时候提示的加入命令，如下：\n\n```\nkubeadm join master.k8s.io:16443 --token i77yg1.1eype0c53jsanoge --discovery-token-ca-cert-hash 
sha256:8f0a817012ab333a057b6a7410e65971be20b95c1b75fc4015f8f3b6785f626f\n```\n&#8195;node节点加入时，不需要加上 --control-plane 这个参数\n\n### 3、如果忘记加入集群的token和sha256 (如正常则跳过)\n\n- 显示获取token列表\n\n```\nkubeadm token list\n```\n\n默认情况下 Token 的过期时间是24小时，如果 Token 过期以后，可以输入以下命令，生成新的 Token\n\n```\nkubeadm token create\n```\n\n- 获取ca证书sha256编码hash值\n\n```\nopenssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'\n```\n\n拼接命令\n```\nkubeadm join master.k8s.io:16443 --token 882ik4.9ib2kb0eftvuhb58 --discovery-token-ca-cert-hash sha256:0b1a836894d930c8558b350feeac8210c85c9d35b6d91fde202b870f3244016a\n\n如果是master加入，请在最后面加上 --control-plane 这个参数\n```\n\n### 4、查看各个节点加入集群情况\n```\nkubectl get nodes -o wide\n\n```\n\n# 十、从集群中删除 Node\n\n- Master节点：\n\n```\nkubectl drain <node name> --delete-local-data --force --ignore-daemonsets\nkubectl delete node <node name>\n```\n\n- Slave节点：\n\n```\nkubeadm reset\n```\n\n## 初始化失败后的清理\n```bash\nyes | kubeadm reset\nifconfig cni0 down\nip link delete cni0\nifconfig flannel.1 down\nip link delete flannel.1\nrm -rf /var/lib/cni/\nrm -f $HOME/.kube/config\n\nsystemctl restart docker\nsystemctl restart kubelet\nsystemctl status kubelet\njournalctl -f -u kubelet\n```\n\n## 问题汇总：\n\n1、多网卡监听问题\n```\nk8s master组件在多网卡环境下，会监听到服务器外网IP的问题\n\n#注意--hostname-override的值写kubectl get nodes显示的结果\n\n#修改kubelet启动参数\ncat > /etc/sysconfig/kubelet <<\\EOF\nKUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --hostname-override=k8s-master-01 --node-ip=10.199.1.136\nEOF\n\n#重启kubelet服务\nsystemctl daemon-reload\nsystemctl restart kubelet\nsystemctl status kubelet\n\n#查看kubelet日志\njournalctl -f -u kubelet\n\nhttps://blog.csdn.net/qianghaohao/article/details/98588427  kubeadm + vagrant 部署多节点 k8s 的一个坑(多网卡问题)\n\nhttps://github.com/kubernetes/kubernetes/issues/33618\n\nhttps://kubernetes.io/zh/docs/setup/independent/install-kubeadm/   kubeadm init 和 kubeadm join 用于为 kubelet 获取额外的用户参数。\n\n#解决方案\n@danielschonfeld The kubelet flag you should set is --hostname-override\n```\n\n参考资料：\n\nhttps://github.com/kubernetes/kubernetes/issues/33618   Issue when using kubeadm with multiple network interfaces #33618\n\nhttp://www.mydlq.club/article/4/\n\nhttps://kuboard.cn/install/install-kubernetes.html#%E5%88%9D%E5%A7%8B%E5%8C%96%E7%AC%AC%E4%B8%80%E4%B8%AAmaster%E8%8A%82%E7%82%B9\n\nhttps://blog.51cto.com/fengwan/2426528?source=dra  kubeadm搭建高可用kubernetes 1.15.1\n\nhttps://segmentfault.com/a/1190000018741112?utm_source=tag-newest  Kubernetes的几种主流部署方式02-kubeadm部署高可用集群\n\nhttps://www.cnblogs.com/hongdada/p/9771857.html  Docker中的Cgroup Driver:Cgroupfs 与 Systemd\n\nhttps://juejin.im/entry/5b0aa39551882538be0d2e21  centos7使用kubeadm配置高可用集群(多master 多网卡，需主动修改组件信息)\n"
  },
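补充：三台 master 的 keepalived + haproxy 部署完成、集群初始化之后，可以用下面的小脚本快速验证高可用链路。这是一个示例草稿：VIP、域名、master 列表均沿用上文环境，且假设已按前文配置 root 免密登录；/healthz 在 apiserver 允许匿名访问时才能直接 curl，否则会返回 401/403。

```bash
#!/usr/bin/env bash
# check-ha.sh —— 验证 VIP 归属与 apiserver 代理链路（示例草稿）
set -u

VIP="10.199.1.200"                          # keepalived 虚拟IP（见部署文档）
APISERVER="master.k8s.io"                   # hosts 中指向 VIP 的域名
MASTERS="10.199.1.136 10.199.1.137 10.199.1.138"

# 1、确认 VIP 当前落在哪台 master 上
for m in ${MASTERS}; do
  ssh root@"${m}" "ip -4 addr show | grep -q '${VIP}/'" \
    && echo "VIP ${VIP} 当前绑定在 ${m}"
done

# 2、确认 haproxy 的 16443 与后端 apiserver 的 6443 均可达
nc -zv "${APISERVER}" 16443
nc -zv "${APISERVER}" 6443

# 3、通过 VIP 走一次完整的 TLS 请求（-k 跳过证书校验）
curl -k "https://${APISERVER}:16443/healthz"; echo
```

依次停掉某台 master 的 haproxy 或 keepalived 再跑一遍脚本，即可观察 VIP 漂移和请求转发是否仍然正常。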
  {
    "path": "kubeadm/k8s清理.md",
    "content": "# 一、清理资源\n```\nsystemctl stop kubelet\nsystemctl stop docker\n\nkubeadm reset\n#yum remove -y kubelet kubeadm kubectl --disableexcludes=kubernetes\n\nrm -rf /etc/kubernetes/\nrm -rf /root/.kube/\nrm -rf $HOME/.kube/\nrm -rf /var/lib/etcd/\nrm -rf /var/lib/cni/\nrm -rf /var/lib/kubelet/\nrm -rf /etc/cni/\nrm -rf /opt/cni/\n\nifconfig cni0 down\nifconfig flannel.1 down\nifconfig docker0 down\nip link delete cni0\nip link delete flannel.1\n\n#docker rmi -f $(docker images -q)\n#docker rm -f `docker ps -a -q`\n\n#yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes\nkubeadm version\nsystemctl restart kubelet.service\nsystemctl enable kubelet.service\n```\n\n# 二、重新初始化\n```\nswapoff -a\nmodprobe br_netfilter\nsysctl -p /etc/sysctl.d/k8s.conf\nchmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4\n\nkubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' |sh -x\ndocker images |grep google_containers |awk '{print \"docker tag \",$1\":\"$2,$1\":\"$2}' |sed -e 's#registry.cn-hangzhou.aliyuncs.com/google_containers#k8s.gcr.io#2' |sh -x\ndocker images |grep google_containers |awk '{print \"docker rmi \", $1\":\"$2}' |sh -x\ndocker pull coredns/coredns:1.3.1\ndocker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1\ndocker rmi coredns/coredns:1.3.1\n\nkubeadm init --kubernetes-version=v1.15.3 --pod-network-cidr=10.244.0.0/16  --apiserver-advertise-address=192.168.56.11 --apiserver-bind-port=6443\n\n#获取加入集群的指令\nkubeadm token create --print-join-command\n```\n\n# 三、Node操作\n```\nmkdir -p $HOME/.kube\n```\n\n# 四、Master操作\n```\nmkdir -p $HOME/.kube\ncp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nchown $(id -u):$(id -g) $HOME/.kube/config\n\nscp $HOME/.kube/config root@linux-node2:$HOME/.kube/config\nscp $HOME/.kube/config root@linux-node3:$HOME/.kube/config\nscp $HOME/.kube/config root@linux-node4:$HOME/.kube/config\n```\n\n# 五、Master和Node节点\n```\nchown $(id -u):$(id -g) $HOME/.kube/config\n\n```\n\n\n参考资料：\n\nhttps://blog.51cto.com/wutengfei/2121202  kubernetes中网络报错问题\n"
  },
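补充：上述清理步骤需要在每个节点上重复执行，下面是一个批量执行的示例草稿（假设已按部署文档配置了 root 免密登录，节点列表仅为示例，按实际环境调整）。

```bash
#!/usr/bin/env bash
# 在所有节点上批量执行 kubeadm reset 与残留清理（示例草稿）
NODES="k8s-master-01 k8s-master-02 k8s-master-03 k8s-node-01 k8s-node-02 k8s-node-03"

for n in ${NODES}; do
  echo ">>> 清理节点 ${n}"
  ssh root@"${n}" '
    kubeadm reset -f
    rm -rf /etc/kubernetes/ /var/lib/etcd/ /var/lib/cni/ /etc/cni/ $HOME/.kube/
    ip link delete cni0 2>/dev/null || true        # 网卡不存在时忽略报错
    ip link delete flannel.1 2>/dev/null || true
    systemctl restart docker kubelet
  '
done
```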
  {
    "path": "kubeadm/kubeadm.yaml",
    "content": "apiVersion: kubeadm.k8s.io/v1beta2\nbootstrapTokens:\n- groups:\n  - system:bootstrappers:kubeadm:default-node-token\n  token: abcdef.0123456789abcdef\n  ttl: 24h0m0s\n  usages:\n  - signing\n  - authentication\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: 192.168.56.11\n  bindPort: 6443\nnodeRegistration:\n  criSocket: /var/run/dockershim.sock\n  name: linux-node1.example.com\n  taints:\n  - effect: NoSchedule\n    key: node-role.kubernetes.io/master\n---\napiServer:\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  local:\n    dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.15.0\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: 172.168.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\nmode: ipvs  # kube-proxy 模式\n"
  },
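补充：该配置文件可以先交给 kubeadm 解析一遍再正式初始化，下面是一个示例用法（假设文件保存为 kubeadm.yaml）。

```bash
# 解析配置并列出初始化所需镜像（顺带校验配置文件能否被解析）
kubeadm config images list --config kubeadm.yaml

# 预先拉取镜像，减少 init 阶段的等待
kubeadm config images pull --config kubeadm.yaml

# 使用该配置初始化集群
kubeadm init --config kubeadm.yaml
```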
  {
    "path": "kubeadm/kubeadm初始化k8s集群延长证书过期时间.md",
    "content": "# 一、前言\n\nkubeadm初始化k8s集群，签发的CA证书有效期默认是10年，签发的apiserver证书有效期默认是1年，到期之后请求apiserver会报错，使用openssl命令查询相关证书是否到期。\n以下延长证书过期的方法适合kubernetes1.14、1.15、1.16、1.17、1.18版本\n\n# 二、查看证书有效时间\n```bash\nopenssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text  |grep Not\n\n显示如下，通过下面可看到ca证书有效期是10年，从2020到2030年：\nNot Before: Apr 22 04:09:07 2020 GMT\nNot After : Apr 20 04:09:07 2030 GMT\n\nopenssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text  |grep Not\n\n显示如下，通过下面可看到apiserver证书有效期是1年，从2020到2021年：\nNot Before: Apr 22 04:09:07 2020 GMT\nNot After : Apr 22 04:09:07 2021 GMT\n```\n\n# 三、延长证书过期时间\n\n```bash\n1.把update-kubeadm-cert.sh文件上传到master1、master2、master3节点\nupdate-kubeadm-cert.sh文件所在的github地址如下：\nhttps://github.com/luckylucky421/kubernetes1.17.3\n把update-kubeadm-cert.sh文件clone和下载下来，拷贝到master1，master2，master3节点上\n\n2.在每个节点都执行如下命令\n1）给update-kubeadm-cert.sh证书授权可执行权限\nchmod +x update-kubeadm-cert.sh\n\n2）执行下面命令，修改证书过期时间，把时间延长到10年\n./update-kubeadm-cert.sh all\n\n3）在master1节点查询Pod是否正常,能查询出数据说明证书签发完成\nkubectl  get pods -n kube-system\n\n显示如下，能够看到pod信息，说明证书签发正常：\n......\ncalico-node-b5ks5                  1/1     Running   0          157m\ncalico-node-r6bfr                  1/1     Running   0          155m\ncalico-node-r8qzv                  1/1     Running   0          7h1m\ncoredns-66bff467f8-5vk2q           1/1     Running   0          7h30m\n......\n```\n\n# 四、验证证书有效时间是否延长到10年\n\n```bash\nopenssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text  |grep Not\n显示如下，通过下面可看到ca证书有效期是10年，从2020到2030年：\nNot Before: Apr 22 04:09:07 2020 GMT\nNot After : Apr 20 04:09:07 2030 GMT\nopenssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text  |grep Not\n显示如下，通过下面可看到apiserver证书有效期是10年，从2020到2030年：\nNot Before: Apr 22 11:15:53 2020 GMT\nNot After : Apr 20 11:15:53 2030 GMT\nopenssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt  -noout -text  |grep Not\n显示如下，通过下面可看到etcd证书有效期是10年，从2020到2030年：\nNot Before: Apr 22 11:32:24 2020 GMT\nNot After : Apr 20 11:32:24 2030 GMT\nopenssl x509 -in /etc/kubernetes/pki/front-proxy-ca.crt  -noout -text  |grep Not\n显示如下，通过下面可看到fron-proxy证书有效期是10年，从2020到2030年：\nNot Before: Apr 22 04:09:08 2020 GMT\nNot After : Apr 20 04:09:08 2030 GMT\n```\n\n参考资料：\n\nhttps://mp.weixin.qq.com/s/N7WRT0OkyJHec35BH_X1Hg  kubeadm初始化k8s集群延长证书过期时间\n"
  },
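补充：除了逐个执行 openssl 命令，也可以用下面的小循环批量列出 /etc/kubernetes/pki 下所有证书的到期时间，便于延期前后对比（示例草稿）。

```bash
# 批量查看 kubeadm 各证书的到期时间
for c in $(find /etc/kubernetes/pki -name "*.crt"); do
  printf '%-60s ' "${c}"
  openssl x509 -in "${c}" -noout -enddate
done
```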
  {
    "path": "kubeadm/kubeadm无法下载镜像问题.md",
    "content": "0、kubeadm镜像介绍\n```\nkubeadm 是kubernetes 的集群安装工具，能够快速安装kubernetes 集群。\nkubeadm init 命令默认使用的docker镜像仓库为k8s.gcr.io，国内无法直接访问，于是需要变通一下。\n```\n\n1、首先查看需要使用哪些镜像\n```\nkubeadm config images list\n#输出如下结果\n\nk8s.gcr.io/kube-apiserver:v1.15.3\nk8s.gcr.io/kube-controller-manager:v1.15.3\nk8s.gcr.io/kube-scheduler:v1.15.3\nk8s.gcr.io/kube-proxy:v1.15.3\nk8s.gcr.io/pause:3.1\nk8s.gcr.io/etcd:3.3.10\nk8s.gcr.io/coredns:1.3.1\n\n我们通过 docker.io/mirrorgooglecontainers 中转一下\n```\n\n2、批量下载及转换标签\n\n```\n#docker.io/mirrorgooglecontainers中转镜像\n\nkubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#docker.io/mirrorgooglecontainers#g' |sh -x\ndocker images |grep mirrorgooglecontainers |awk '{print \"docker tag \",$1\":\"$2,$1\":\"$2}' |sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' |sh -x\ndocker images |grep mirrorgooglecontainers |awk '{print \"docker rmi \", $1\":\"$2}' |sh -x\ndocker pull coredns/coredns:1.3.1\ndocker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1\ndocker rmi coredns/coredns:1.3.1\n\n注：coredns没包含在docker.io/mirrorgooglecontainers中，需要手工从coredns官方镜像转换下。\n\n\n#阿里云的中转镜像\n\nkubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' |sh -x\ndocker images |grep google_containers |awk '{print \"docker tag \",$1\":\"$2,$1\":\"$2}' |sed -e 's#registry.cn-hangzhou.aliyuncs.com/google_containers#k8s.gcr.io#2' |sh -x\ndocker images |grep google_containers |awk '{print \"docker rmi \", $1\":\"$2}' |sh -x\ndocker pull coredns/coredns:1.3.1\ndocker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1\ndocker rmi coredns/coredns:1.3.1\n```\n\n3、查看镜像列表\n```\ndocker images\n\nREPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE\nk8s.gcr.io/kube-proxy                v1.15.3             232b5c793146        2 weeks ago         82.4MB\nk8s.gcr.io/kube-scheduler            v1.15.3             703f9c69a5d5        2 weeks ago         81.1MB\nk8s.gcr.io/kube-controller-manager   v1.15.3             e77c31de5547        2 weeks ago         159MB\nk8s.gcr.io/coredns                   1.3.1               eb516548c180        7 months ago        40.3MB\nk8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        9 months ago        258MB\nk8s.gcr.io/pause                     3.1                 da86e6ba6ca1        20 months ago       742kB\n\n\ndocker rmi -f $(docker images -q)\ndocker rm -f `docker ps -a -q`\n```\n\n参考文档：\n\nhttps://cloud.tencent.com/info/6db42438f5dd7842bcecb6baf61833aa.html  kubeadm 无法下载镜像问题\n\nhttps://juejin.im/post/5b8a4536e51d4538c545645c  使用kubeadm 部署 Kubernetes(国内环境)\n"
  },
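补充：上面的 sed 管道一次性完成“拉取、改标签、删中转镜像”，下面给出一个等价但更直观的逐个处理写法（示例草稿，仓库地址沿用上文的阿里云中转仓库；个别镜像如 coredns 在中转仓库中可能不存在，脚本里直接跳过，再按上文单独处理）。

```bash
#!/usr/bin/env bash
# 从中转仓库拉取镜像并重打 k8s.gcr.io 标签（示例草稿）
MIRROR="registry.cn-hangzhou.aliyuncs.com/google_containers"

for img in $(kubeadm config images list 2>/dev/null); do
  name="${img#k8s.gcr.io/}"                    # 去掉 k8s.gcr.io/ 前缀
  docker pull "${MIRROR}/${name}" || continue  # 中转仓库缺失的镜像跳过
  docker tag  "${MIRROR}/${name}" "${img}"     # 重打回 kubeadm 期望的标签
  docker rmi  "${MIRROR}/${name}"              # 删除中转标签，保持列表干净
done

docker images | grep k8s.gcr.io
```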
  {
    "path": "manual/README.md",
    "content": "# 内核升级\n```\n# 载入公钥\nrpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org\n\n# 安装ELRepo\nrpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm\n\n# 载入elrepo-kernel元数据\nyum --disablerepo=\\* --enablerepo=elrepo-kernel repolist\n\n# 查看可用的rpm包\nyum --disablerepo=\\* --enablerepo=elrepo-kernel list kernel*\n\n# 安装长期支持版本的kernel\nyum --disablerepo=\\* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64\n\n# 删除旧版本工具包\nyum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y\n\n# 安装新版本工具包\nyum --disablerepo=\\* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64\n\n#查看默认启动顺序\nawk -F\\' '$1==\"menuentry \" {print $2}' /etc/grub2.cfg  \nCentOS Linux (4.4.183-1.el7.elrepo.x86_64) 7 (Core)  \nCentOS Linux (3.10.0-327.10.1.el7.x86_64) 7 (Core)  \nCentOS Linux (0-rescue-c52097a1078c403da03b8eddeac5080b) 7 (Core)\n\n#默认启动的顺序是从0开始，新内核是从头插入（目前位置在0，而4.4.4的是在1），所以需要选择0。\ngrub2-set-default 0\n\n#重启并检查\nreboot\n```\n\n参考资料\n\nhttps://github.com/easzlab/kubeasz/blob/master/docs/guide/kernel_upgrade.md\n"
  },
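补充：重启后可以用下面几条命令确认新内核已经生效（示例）。

```bash
# 当前运行内核应为刚安装的 kernel-lt 版本
uname -r

# 查看 grub 默认启动项是否已指向新内核
grub2-editenv list
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
```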
  {
    "path": "manual/v1.14/README.md",
    "content": "\n"
  },
  {
    "path": "manual/v1.15.3/README.md",
    "content": "# 一、Kubernetes 1.15 二进制集群安装\n\n本系列文档将介绍如何使用二进制部署Kubernetes v1.15.3 集群的所有部署，而不是使用自动化部署(kubeadm)集群。在部署过程中，将详细列出各个组件启动参数，以及相关配置说明。在学习完本文档后，将理解k8s各个组件的交互原理，并且可以快速解决实际问题。\n\n## 1.1、组件版本\n\n```\nKubernetes 1.15.3\nDocker 18.09 (docker使用官方的脚本安装，后期可能升级为新的版本，但是不影响)\nEtcd 3.3.13\nFlanneld 0.11.0\n```\n\n## 1.2、组件说明\n\n### kube-apiserver\n\n```\n使用节点本地Nginx 4层透明代理实现高可用 (也可以使用haproxy，只是起到代理apiserver的作用)\n关闭非安全端口8080和匿名访问\n使用安全端口6443接受https请求\n严格的认知和授权策略 (x509、token、rbac)\n开启bootstrap token认证，支持kubelet TLS bootstrapping；\n使用https访问kubelet、etcd\n```\n\n### kube-controller-manager\n```\n3节点高可用 (在k8s中，有些组件需要选举，所以使用奇数为集群高可用方案)\n关闭非安全端口，使用10252接受https请求\n使用kubeconfig访问apiserver的安全扣\n使用approve kubelet证书签名请求(CSR)，证书过期后自动轮转\n各controller使用自己的ServiceAccount访问apiserver\n```\n### kube-scheduler\n```\n3节点高可用；\n使用kubeconfig访问apiserver安全端口\n```\n### kubelet\n```\n使用kubeadm动态创建bootstrap token\n使用TLS bootstrap机制自动生成client和server证书，过期后自动轮转\n在kubeletConfiguration类型的JSON文件配置主要参数\n关闭只读端口，在安全端口10250接受https请求，对请求进行认真和授权，拒绝匿名访问和非授权访问\n使用kubeconfig访问apiserver的安全端口\n```\n### kube-proxy\n```\n使用kubeconfig访问apiserver的安全端口\n在KubeProxyConfiguration类型JSON文件配置为主要参数\n使用ipvs代理模式\n```\n### 集群插件\n```\nDNS 使用功能、性能更好的coredns\n网络 使用Flanneld 作为集群网络插件\n```\n\n# 二、初始化环境\n\n## 1.1、集群机器\n```\n#master节点\n192.168.0.50 k8s-01\n192.168.0.51 k8s-02\n192.168.0.52 k8s-03\n\n#node节点\n192.168.0.53 k8s-04     #node节点只运行node，但是设置证书的时候要添加这个ip\n```\n本文档的所有etcd集群、master集群、worker节点均使用以上三台机器，并且初始化步骤需要在所有机器上执行命令。如果没有特殊命令，所有操作均在192.168.0.50上进行操作\n\nnode节点后面会有操作，但是在初始化这步，是所有集群机器。包括node节点，我上面没有列出node节点\n\n## 1.2、修改主机名\n\n所有机器设置永久主机名\n\n```\nhostnamectl set-hostname abcdocker-k8s01  #所有机器按照要求修改\nbash        #刷新主机名\n```\n接下来我们需要在所有机器上添加hosts解析\n```\ncat >> /etc/hosts <<EOF\n192.168.0.50  k8s-01\n192.168.0.51  k8s-02\n192.168.0.52  k8s-03\n192.168.0.53  k8s-04\nEOF\n```\n\n## 1.3、设置免密\n\n我们只在k8s-01上设置免密即可\n\n```\nwget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo\ncurl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo\nyum install -y expect\n\n#分发公钥\nssh-keygen -t rsa -P \"\" -f /root/.ssh/id_rsa\nfor i in k8s-01 k8s-02 k8s-03 k8s-04;do\nexpect -c \"\nspawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i\n        expect {\n                \\\"*yes/no*\\\" {send \\\"yes\\r\\\"; exp_continue}\n                \\\"*password*\\\" {send \\\"123456\\r\\\"; exp_continue}\n                \\\"*Password*\\\" {send \\\"123456\\r\\\";}\n        } \"\ndone \n\n#vim ssh_copy.sh\n\n#!/bin/bash\n\nfor i in `echo k8s-master1 k8s-master2 k8s-slave01 k8s-slave02 k8s-slave03`;do\nexpect -c \"\nspawn scp $1 root@$i:$1\n    expect {\n            \\\"*yes/no*\\\" {send \\\"yes\\r\\\"; exp_continue}\n            \\\"*password*\\\" {send \\\"123456\\r\\\"; exp_continue}\n            \\\"*Password*\\\" {send \\\"123456\\r\\\";}\n    } \"\ndone\n\n\n使用\nssh_copy.sh /etc/sysconfig/iptables\n\n\n#我这里密码是123456  大家按照自己主机的密码进行修改就可以\n```\n更新PATH变量 \n```\n[root@abcdocker-k8s01 ~]# echo 'PATH=/opt/k8s/bin:$PATH' >>/etc/profile\n[root@abcdocker-k8s01 ~]# source  /etc/profile\n[root@abcdocker-k8s01 ~]# env|grep PATH\nPATH=/opt/k8s/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin\n```\n\n## 1.4、安装依赖包\n\n在每台服务器上安装依赖包\n \n```\nyum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget\n```\n\n关闭防火墙 Linux 以及swap分区\n\n```\nsystemctl stop firewalld\nsystemctl disable firewalld\niptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat\niptables -P FORWARD ACCEPT\nswapoff -a\nsed 
-i '/ swap / s/^\\(.*\\)$/#\\1/g' /etc/fstab\nsetenforce 0\nsed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config\n\n#如果开启了swap分区，kubelet会启动失败(可以通过设置参数——-fail-swap-on设置为false)\n```\n\n升级内核\n\n```\n\n```\n\n\n\n\n参考资料\n\nhttps://i4t.com/4253.html   Kubernetes 1.14 二进制集群安装\n\nhttps://github.com/kubernetes/kubernetes/releases/tag/v1.15.3   下载链接\n"
  },
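补充：该文档的 kube-proxy 使用 ipvs 代理模式，其它章节（见 kubeadm/k8s清理.md）也引用了 /etc/sysconfig/modules/ipvs.modules。下面是该模块加载脚本的一个常见写法示例（草稿；内核 4.19+ 中 nf_conntrack_ipv4 已并入 nf_conntrack，模块名需按实际内核调整）。

```bash
# 创建 ipvs 内核模块加载脚本（kube-proxy ipvs 模式依赖）
cat > /etc/sysconfig/modules/ipvs.modules << \EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```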
  {
    "path": "mysql/README.md",
    "content": "\n"
  },
  {
    "path": "mysql/kubernetes访问外部mysql服务.md",
    "content": "Table of Contents\n=================\n\n   * [Table of Contents](#table-of-contents)\n   * [一、创建endpoints](#一创建endpoints)\n   * [二、创建service](#二创建service)\n   * [三、文件合并](#三文件合并)\n   * [四、安装centos7基础镜像](#四安装centos7基础镜像)\n   * [五、测试数据库连接](#五测试数据库连接)\n   \n`k8s访问集群外独立的服务最好的方式是采用Endpoint方式(可以看作是将k8s集群之外的服务抽象为内部服务)，以mysql服务为例`\n\n# 一、创建endpoints\n\n(带注释的操作，建议分步操作，被这个坑了很久，或者可以直接使用合并文件一步执行)\n\n```bash\n# 删除 mysql-endpoints\nkubectl delete -f mysql-endpoints.yaml -n mos-namespace\n\n# 创建 mysql-endpoints.yaml\ncat >mysql-endpoints.yaml<<\\EOF\napiVersion: v1\nkind: Endpoints\nmetadata:\n  name: mysql-production\nsubsets:\n  - addresses:\n    - ip: 10.198.1.155 #-注意目前服务器的数据库需要放开权限\n    ports:\n    - port: 3306\n      protocol: TCP\nEOF\n\n# 创建 mysql-endpoints\nkubectl apply -f mysql-endpoints.yaml -n mos-namespace\n\n# 查看 mysql-endpoints\nkubectl get endpoints mysql-production -n mos-namespace\n\n# 查看 mysql-endpoints详情\nkubectl describe endpoints mysql-production -n mos-namespace\n\n# 探测服务是否可达\nnc -zv 10.198.1.155 3306\n```\n\n# 二、创建service\n```bash\n# 删除 mysql-service\nkubectl delete -f mysql-service.yaml -n mos-namespace\n\n# 编写 mysql-service.yaml\ncat >mysql-service.yaml<<\\EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-production\nspec:\n  ports:\n  - port: 3306\n    protocol: TCP\nEOF\n\n# 创建 mysql-service\nkubectl apply -f mysql-service.yaml -n mos-namespace\n\n# 查看 mysql-service\nkubectl get svc mysql-production -n mos-namespace\n\n# 查看 mysql-service详情\nkubectl describe svc mysql-production -n mos-namespace\n\n# 验证service ip的连通性\nnc -zv `kubectl get svc mysql-production -n mos-namespace|grep mysql-production|awk '{print $3}'` 3306\n```\n\n# 三、文件合并\n\n`注意点: Endpoints类型，可以打标签，但是Service不可以通过标签来选择，直接不写selector: name: mysql-endpoints 不然会出现异常，找不到endpoints节点`\n\n```\ncat << EOF > mysql-service-new.yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-production\nspec:\n  #selector:     ---注意这里用标签选择，直接取消\n  #  name: mysql-endpoints\n  ports:\n  - port: 3306\n    protocol: TCP\nEOF\n```\n完整文件\n```bash\nkubectl delete -f mysql-endpoints-new.yaml -n mos-namespace\nkubectl delete -f mysql-service-new.yaml -n mos-namespace\n\ncat << EOF > mysql-endpoints-new.yaml\napiVersion: v1\nkind: Endpoints\nmetadata:\n  name: mysql-production\n  labels:\n    name: mysql-endpoints\nsubsets:\n  - addresses:\n    - ip: 10.198.1.155\n    ports:\n    - port: 3306\n      protocol: TCP\nEOF\n\ncat << EOF > mysql-service-new.yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-production\nspec:\n  ports:\n  - port: 3306\n    protocol: TCP\nEOF\n\nkubectl apply -f mysql-endpoints-new.yaml -n mos-namespace\nkubectl apply -f mysql-service-new.yaml -n mos-namespace\n\nnc -zv `kubectl get svc mysql-production -n mos-namespace|grep mysql-production|awk '{print $3}'` 3306\n```\n\n# 四、安装centos7基础镜像\n```bash\n# 查看 mos-namespace 下的pod资源\nkubectl get pods -n mos-namespace\n\n# 清理命令行创建的deployment\nkubectl delete deployment centos7-app -n mos-namespace\n\n# 命令行跑一个centos7的bash基础容器\n#kubectl run --rm --image=centos:7.2.1511 centos7-app -it --port=8080 --replicas=1 -n mos-namespace\nkubectl run --image=centos:7.2.1511 centos7-app -it --port=8080 --replicas=1 -n mos-namespace\n\n# 安装mysql客户端\nyum install vim net-tools telnet nc -y\nyum install -y mariadb.x86_64 mariadb-libs.x86_64\n```\n\n# 五、测试数据库连接\n\n```bash\n# 进入到容器\nkubectl exec `kubectl get pods -n mos-namespace|grep centos7-app|awk '{print $1}'` -it /bin/bash -n mos-namespace\n\n# 检查网络连通性\nping mysql-production\n\n# 测试mysql服务端口是否OK\nnc 
-zv mysql-production 3306\n\n# 连接测试\nmysql -h'mysql-production' -u'root' -p'password'\n```\n\n参考资料：\n\nhttps://blog.csdn.net/hxpjava1/article/details/80040407   使用kubernetes访问外部服务mysql/redis\n\nhttps://blog.csdn.net/liyingke112/article/details/76204038  \n\nhttps://blog.csdn.net/ybt_c_index/article/details/80881157  istio 0.8 用ServiceEntry访问外部服务（如RDS）\n"
  },
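补充：没有 selector 的 Service 不会自动生成 Endpoints，二者完全靠同名（mysql-production）关联；下面的命令可以验证配对结果，并在集群内做一次 DNS 解析测试（示例，busybox 镜像及版本仅为假设）。

```bash
# Service 与手工创建的 Endpoints 应当同名，且 ENDPOINTS 列非空
kubectl get svc,endpoints mysql-production -n mos-namespace

# 起一个临时 Pod，在集群内解析该 Service 名称（退出后自动删除）
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 \
  -n mos-namespace -- nslookup mysql-production
```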
  {
    "path": "redis/K8s上Redis集群动态扩容.md",
    "content": "参考资料：\n\nhttp://redisdoc.com/topic/cluster-tutorial.html#id10   Redis 命令参考\n\nhttps://cloud.tencent.com/developer/article/1392872   \n"
  },
  {
    "path": "redis/K8s上运行Redis单实例.md",
    "content": "Table of Contents\n=================\n\n   * [一、创建namespace](#一创建namespace)\n   * [二、创建一个 configmap](#二创建一个-configmap)\n   * [三、创建 redis 容器](#三创建-redis-容器)\n   * [四、创建redis-service服务](#四创建redis-service服务)\n   * [五、验证redis实例](#五验证redis实例)\n   \n# 一、创建namespace\n```bash\n# 清理 namespace\nkubectl delete -f mos-namespace.yaml\n\n# 创建一个专用的 namespace\ncat > mos-namespace.yaml <<\\EOF\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: mos-namespace\nEOF\n\nkubectl apply -f mos-namespace.yaml\n\n# 查看 namespace\nkubectl get namespace -A\n```\n\n# 二、创建一个 configmap\n\n```bash\nmkdir config && cd config\n\n# 清理configmap\nkubectl delete configmap redis-conf -n mos-namespace\n\n# 创建redis配置文件\ncat >redis.conf <<\\EOF\n#daemonize yes\npidfile /data/redis.pid\nport 6379\ntcp-backlog 30000\ntimeout 0\ntcp-keepalive 10\nloglevel notice\nlogfile /data/redis.log\ndatabases 16\n#save 900 1\n#save 300 10\n#save 60 10000\nstop-writes-on-bgsave-error no\nrdbcompression yes\nrdbchecksum yes\ndbfilename dump.rdb\ndir /data\nslave-serve-stale-data yes\nslave-read-only yes\nrepl-diskless-sync no\nrepl-diskless-sync-delay 5\nrepl-disable-tcp-nodelay no\nslave-priority 100\nrequirepass redispassword\nmaxclients 30000\nappendonly no\nappendfilename \"appendonly.aof\"\nappendfsync everysec\nno-appendfsync-on-rewrite no\nauto-aof-rewrite-percentage 100\nauto-aof-rewrite-min-size 64mb\naof-load-truncated yes\nlua-time-limit 5000\nslowlog-log-slower-than 10000\nslowlog-max-len 128\nlatency-monitor-threshold 0\nnotify-keyspace-events KEA\nhash-max-ziplist-entries 512\nhash-max-ziplist-value 64\nlist-max-ziplist-entries 512\nlist-max-ziplist-value 64\nset-max-intset-entries 1000\nzset-max-ziplist-entries 128\nzset-max-ziplist-value 64\nhll-sparse-max-bytes 3000\nactiverehashing yes\nclient-output-buffer-limit normal 0 0 0\nclient-output-buffer-limit slave 256mb 64mb 60\nclient-output-buffer-limit pubsub 32mb 8mb 60\nhz 10\nEOF\n\n# 在mos-namespace中创建 configmap\nkubectl create configmap redis-conf --from-file=redis.conf -n mos-namespace\n```\n\n# 三、创建 redis 容器\n```bash\n# 清理pod\nkubectl delete -f mos_redis.yaml \n\ncat > mos_redis.yaml <<\\EOF\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mos-redis\n  namespace: mos-namespace\nspec:\n  selector:\n    matchLabels:\n      name: mos-redis\n  replicas: 1\n  template:\n    metadata:\n     labels:\n       name: mos-redis\n    spec:\n     containers:\n     - name: mos-redis\n       image: redis\n       volumeMounts:\n       - name: mos\n         mountPath: \"/usr/local/etc\"\n       command:\n         - \"redis-server\"\n       args:\n         - \"/usr/local/etc/redis/redis.conf\"\n     volumes:\n     - name: mos\n       configMap:\n         name: redis-conf\n         items:\n           - key: redis.conf\n             path: redis/redis.conf\nEOF\n\n# 创建和查看 pod\nkubectl apply -f mos_redis.yaml \nkubectl get pods -n mos-namespace\n\n# 注意：configMap 会挂在 /usr/local/etc/redis/redis.conf 上。与 mountPath 和 configMap 下的 path 一同指定\n```\n\n# 四、创建redis-service服务\n\n```bash\n# 删除service\nkubectl delete -f redis-service.yaml -n mos-namespace\n\n# 编写redis-service.yaml\ncat >redis-service.yaml<<\\EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: redis-production\n  namespace: mos-namespace\nspec:\n  selector:\n    name: mos-redis\n  ports:\n    - port: 6379\n      protocol: TCP\nEOF\n\n# 创建service\nkubectl apply -f redis-service.yaml -n mos-namespace\n\n# 查看service\nkubectl get svc redis-production -n mos-namespace\n\n# 查看service详情\nkubectl describe svc 
# 4. Create the redis service\n\n```bash\n# Delete the service\nkubectl delete -f redis-service.yaml -n mos-namespace\n\n# Write redis-service.yaml\ncat >redis-service.yaml<<\\EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: redis-production\n  namespace: mos-namespace\nspec:\n  selector:\n    name: mos-redis\n  ports:\n    - port: 6379\n      protocol: TCP\nEOF\n\n# Create the service\nkubectl apply -f redis-service.yaml -n mos-namespace\n\n# Check the service\nkubectl get svc redis-production -n mos-namespace\n\n# Describe the service\nkubectl describe svc redis-production -n mos-namespace\n```\n\n\n# 5. Verify the Redis instance\n\n1. Verify directly inside the pod\n\n```bash\n# Enter the container\nkubectl exec -it -n mos-namespace `kubectl get pods -n mos-namespace|grep redis|awk '{print $1}'` -- /bin/bash\n\nredis-cli -h 127.0.0.1 -a redispassword\n# 127.0.0.1:6379> set a b\n# OK\n# 127.0.0.1:6379> get a\n# \"b\"\n\n# Check the log (the config file directs the log to /data/redis.log inside the container)\nkubectl exec -it -n mos-namespace `kubectl get pods -n mos-namespace|grep redis|awk '{print $1}'` -- /bin/bash\n\n$ tail -100f /data/redis.log \n1:C 14 Nov 2019 06:46:13.476 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo\n1:C 14 Nov 2019 06:46:13.476 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started\n1:C 14 Nov 2019 06:46:13.476 # Configuration loaded\n1:M 14 Nov 2019 06:46:13.478 * Running mode=standalone, port=6379.\n1:M 14 Nov 2019 06:46:13.478 # WARNING: The TCP backlog setting of 30000 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Nov 2019 06:46:13.478 # Server initialized\n1:M 14 Nov 2019 06:46:13.478 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Nov 2019 06:46:13.478 * Ready to accept connections\n```\n\n2. Verify through the exposed service\n\n```bash\n# Run a centos7 base container with a bash shell from the command line\nkubectl run --image=centos:7.2.1511 centos7-app -it --port=8080 --replicas=1 -n mos-namespace\n\n# Verify via the service\nkubectl exec -it -n mos-namespace `kubectl get pods -n mos-namespace|grep centos7-app|awk '{print $1}'` -- /bin/bash\n\nyum install -y epel-release\nyum install -y redis\n\nredis-cli -h redis-production -a redispassword\n```\n\nReferences:\n\nhttps://www.cnblogs.com/klvchen/p/10862607.html \n"
  },
  {
    "path": "redis/K8s上运行Redis集群指南.md",
    "content": "Table of Contents\n=================\n\n   * [一、前言](#一前言)\n   * [二、准备操作](#二准备操作)\n   * [三、StatefulSet简介](#三statefulset简介)\n   * [四、部署过程](#四部署过程)\n      * [1、创建NFS存储](#1创建nfs存储)\n      * [2、创建PV](#2创建pv)\n      * [3、创建Configmap](#3创建configmap)\n      * [4、创建Headless service](#4创建headless-service)\n      * [4、创建Redis 集群节点](#4创建redis-集群节点)\n      * [5、初始化Redis集群](#5初始化redis集群)\n      * [6、创建用于访问Service](#6创建用于访问service)\n   * [五、测试主从切换](#五测试主从切换)\n   * [六、疑问点](#六疑问点)\n   \n# 一、前言\n\n架构原理:\n\n`每个Master都可以拥有多个Slave。当Master下线后，Redis集群会从多个Slave中选举出一个新的Master作为替代，而旧Master重新上线后变成新Master的Slave。`\n\n# 二、准备操作\n\n本次部署主要基于该项目：\n\n`https://github.com/zuxqoj/kubernetes-redis-cluster`\n\n其包含了两种部署Redis集群的方式：\n```bash\nStatefulSet\n\nService & Deployment\n```\n两种方式各有优劣，对于像Redis、Mongodb、Zookeeper等有状态的服务，使用StatefulSet是首选方式。本文将主要介绍如何使用StatefulSet进行Redis集群的部署。\n\n# 三、StatefulSet简介\n\n- 1、RC、Deployment、DaemonSet都是面向无状态的服务，它们所管理的Pod的IP、名字，启停顺序等都是随机的，而StatefulSet是什么？顾名思义，有状态的集合，管理所有有状态的服务，比如MySQL、MongoDB集群等。\n\n- 2、StatefulSet本质上是Deployment的一种变体，在v1.9版本中已成为GA版本，它为了解决有状态服务的问题，它所管理的Pod拥有固定的Pod名称，启停顺序，在StatefulSet中，Pod名字称为网络标识(hostname)，还必须要用到共享存储。\n\n- 3、在Deployment中，与之对应的服务是service，而在StatefulSet中与之对应的headless service，headless service，即无头服务，与service的区别就是它没有Cluster IP，解析它的名称时将返回该Headless Service对应的全部Pod的Endpoint列表。\n\n- 4、除此之外，StatefulSet在Headless Service的基础上又为StatefulSet控制的每个Pod副本创建了一个DNS域名，这个域名的格式为：\n```bash\n$(podname).(headless server name)   \nFQDN： $(podname).(headless server name).namespace.svc.cluster.local\n```\n- 5、也即是说，对于有状态服务，我们最好使用固定的网络标识（如域名信息）来标记节点，当然这也需要应用程序的支持（如Zookeeper就支持在配置文件中写入主机域名）。\n\n- 6、StatefulSet基于Headless Service（即没有Cluster IP的Service）为Pod实现了稳定的网络标志（包括Pod的hostname和DNS Records），在Pod重新调度后也保持不变。同时，结合PV/PVC，StatefulSet可以实现稳定的持久化存储，就算Pod重新调度后，还是能访问到原先的持久化数据。\n\n- 7、以下为使用StatefulSet部署Redis的架构，无论是Master还是Slave，都作为StatefulSet的一个副本，并且数据通过PV进行持久化，对外暴露为一个Service，接受客户端请求。\n\n\n# 四、部署过程\n\n```bash\n1.创建NFS存储\n2.创建PV\n3.创建PVC\n4.创建Configmap\n5.创建headless服务\n6.创建Redis StatefulSet\n7.初始化Redis集群\n```\n## 1、创建NFS存储\n\n创建NFS存储主要是为了给Redis提供稳定的后端存储，当Redis的Pod重启或迁移后，依然能获得原先的数据。这里，我们先要创建NFS，然后通过使用PV为Redis挂载一个远程的NFS路径。\n\n```bash\nyum -y install nfs-utils   #主包提供文件系统\nyum -y install rpcbind     #提供rpc协议\n```\n然后，新增/etc/exports文件，用于设置需要共享的路径\n\n```bash\n$ cat /etc/exports\n/data/nfs/redis/pv1 *(rw,no_root_squash,sync,insecure)\n/data/nfs/redis/pv2 *(rw,no_root_squash,sync,insecure)\n/data/nfs/redis/pv3 *(rw,no_root_squash,sync,insecure)\n/data/nfs/redis/pv4 *(rw,no_root_squash,sync,insecure)\n/data/nfs/redis/pv5 *(rw,no_root_squash,sync,insecure)\n/data/nfs/redis/pv6 *(rw,no_root_squash,sync,insecure)\n\n#创建相应目录\nmkdir -p /data/nfs/redis/pv{1..6}\n\n#接着，启动NFS和rpcbind服务\nsystemctl restart rpcbind\nsystemctl restart nfs\nsystemctl enable nfs\nsystemctl enable rpcbind\n\n#查看\nexportfs -v\n\n#客户端\nyum -y install nfs-utils\n\n#查看存储端共享\nshowmount -e localhost\n```\n\n## 2、创建PV\n\n每一个Redis Pod都需要一个独立的PV来存储自己的数据，因此可以创建一个pv.yaml文件，包含6个PV\n\n```bash\nkubectl delete -f pv.yaml\n\ncat >pv.yaml<<\\EOF\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv1\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs\n  nfs:\n    server: 10.198.1.155\n    path: \"/data/nfs/redis/pv1\"\n\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv2\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs\n  nfs:\n    server: 
## 2. Create the PVs\n\nEvery Redis pod needs its own PV to store its data, so we create a pv.yaml file containing six PVs:\n\n```bash\nkubectl delete -f pv.yaml\n\ncat >pv.yaml<<\\EOF\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv1\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs\n  nfs:\n    server: 10.198.1.155\n    path: \"/data/nfs/redis/pv1\"\n\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv2\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs\n  nfs:\n    server: 10.198.1.155\n    path: \"/data/nfs/redis/pv2\"\n\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv3\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs\n  nfs:\n    server: 10.198.1.155\n    path: \"/data/nfs/redis/pv3\"\n\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv4\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs\n  nfs:\n    server: 10.198.1.155\n    path: \"/data/nfs/redis/pv4\"\n\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv5\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs\n  nfs:\n    server: 10.198.1.155\n    path: \"/data/nfs/redis/pv5\"\n\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv6\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs\n  nfs:\n    server: 10.198.1.155\n    path: \"/data/nfs/redis/pv6\"\nEOF\n\nkubectl apply -f pv.yaml\n```\n\n
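Since the six manifests differ only in the PV name and the export path, the same pv.yaml can also be generated with a short loop (an equivalent sketch, not part of the original article):\n\n```bash\n: >pv.yaml   # truncate pv.yaml first\nfor i in $(seq 1 6); do\ncat >>pv.yaml <<EOF\n---\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: nfs-pv${i}\nspec:\n  capacity:\n    storage: 20Gi\n  accessModes:\n    - ReadWriteMany\n  persistentVolumeReclaimPolicy: Retain\n  storageClassName: nfs\n  nfs:\n    server: 10.198.1.155\n    path: \"/data/nfs/redis/pv${i}\"\nEOF\ndone\n```\n\n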
## 3. Create the ConfigMap\n\nHere we turn the Redis configuration file directly into a ConfigMap, which is a convenient way to read configuration. The redis.conf file is as follows:\n\n```bash\n# The configuration file redis.conf\ncat >redis.conf<<\\EOF \nappendonly yes\ncluster-enabled yes\ncluster-config-file /var/lib/redis/nodes.conf\ncluster-node-timeout 5000\ndir /var/lib/redis\nport 6379\nEOF\n\n# Delete the ConfigMap named redis-conf\nkubectl delete configmap redis-conf\n\n# Create the ConfigMap named redis-conf\nkubectl create configmap redis-conf --from-file=redis.conf\n\n# Inspect the new configmap\n$ kubectl describe cm redis-conf\nName:         redis-conf\nNamespace:    default\nLabels:       <none>\nAnnotations:  <none>\n\nData\n====\nredis.conf:\n----\nappendonly yes\ncluster-enabled yes\ncluster-config-file /var/lib/redis/nodes.conf\ncluster-node-timeout 5000\ndir /var/lib/redis\nport 6379\n\nEvents:  <none>\n# As shown above, every setting in redis.conf is now stored in the redis-conf ConfigMap.\n```\n\n## 4. Create the headless service\n\nThe headless service is the basis for the StatefulSet's stable network identity, so it must be created in advance. Prepare headless-service.yaml as follows:\n\n```bash\n# Delete the svc\nkubectl delete -f headless-service.yaml\n\n# Write the svc\ncat >headless-service.yaml<<\\EOF \napiVersion: v1\nkind: Service\nmetadata:\n  name: redis-service\n  labels:\n    app: redis\nspec:\n  ports:\n  - name: redis-port\n    port: 6379\n  clusterIP: None\n  selector:\n    app: redis\n    appCluster: redis-cluster\nEOF\n\n# Create the svc\nkubectl create -f headless-service.yaml\n\n# Check the service\n$ kubectl get svc\nNAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE\nredis-service   ClusterIP   None         <none>        6379/TCP   0s\n```\nThe service is named redis-service and its CLUSTER-IP is None, which marks it as a \"headless\" service.\n\n## 5. Create the Redis cluster nodes\n\nWith the headless service in place, the StatefulSet can create the Redis cluster nodes, which is the core of this article. First create the redis.yaml file:\n\n```bash\n# Clean up the pvc resources\nkubectl delete pvc redis-data-redis-app-{0..5}\n\n# Clean up the pod resources\nkubectl delete -f redis.yaml\n\n# Write the yaml\ncat >redis.yaml<<\\EOF\napiVersion: apps/v1   # apps/v1beta1 was removed in k8s 1.16; apps/v1 requires an explicit selector\nkind: StatefulSet\nmetadata:\n  name: redis-app\nspec:\n  serviceName: \"redis-service\"\n  replicas: 6\n  selector:\n    matchLabels:\n      app: redis\n      appCluster: redis-cluster\n  template:\n    metadata:\n      labels:\n        app: redis\n        appCluster: redis-cluster\n    spec:\n      terminationGracePeriodSeconds: 20\n      affinity:\n        podAntiAffinity:\n          preferredDuringSchedulingIgnoredDuringExecution:\n          - weight: 100\n            podAffinityTerm:\n              labelSelector:\n                matchExpressions:\n                - key: app\n                  operator: In\n                  values:\n                  - redis\n              topologyKey: kubernetes.io/hostname\n      containers:\n      - name: redis\n        image: redis\n        command:\n          - \"redis-server\"  # redis startup command\n        args:\n          - \"/etc/redis/redis.conf\"  # each list item is passed as a separate argument to redis-server\n          - \"--protected-mode\"  # disable protected mode to allow non-local access\n          - \"no\"\n        resources:  # resources\n          requests:  # requested resources\n            cpu: \"100m\"  # m means milli-cores; 100m equals 0.1 CPU\n            memory: \"100Mi\"  # 100Mi of memory\n        ports:\n            - name: redis\n              containerPort: 6379\n              protocol: \"TCP\"\n            - name: cluster\n              containerPort: 16379\n              protocol: \"TCP\"\n        volumeMounts:\n          - name: \"redis-conf\"  # mount the file generated from the configmap\n            mountPath: \"/etc/redis\"  # where to mount it\n          - name: \"redis-data\"  # mount path for the persistent volume\n            mountPath: \"/var/lib/redis\"\n      volumes:\n      - name: \"redis-conf\"  # reference the configMap volume\n        configMap:\n          name: \"redis-conf\"\n          items:\n            - key: \"redis.conf\"  # the key created in the configMap\n              path: \"redis.conf\"  # the file that --from-file pointed at\n  volumeClaimTemplates:  # PVC template: one claim per replica\n  - metadata:\n      name: redis-data\n    spec:\n      accessModes: [ \"ReadWriteMany\" ]\n      storageClassName: \"nfs\"  # note: this uses the nfs storageClass; if the default storageClass already fits, this line can be omitted\n      resources:\n        requests:\n          storage: 20Gi\nEOF\n\n# Create the resources\nkubectl apply -f redis.yaml\n```\n\npodAntiAffinity expresses anti-affinity: it decides which pods this pod must not share a topology domain with, and is used here to spread one service's pods across hosts or topology domains to improve the service's stability.\n\nThe matchExpressions rule asks the scheduler to avoid placing a Redis pod on a node that already hosts a pod labeled app=redis.\n\nAlso, per StatefulSet semantics, the six Redis pods are named $(statefulset name)-$(ordinal), as shown below:\n\n```bash\n# kubectl get pods -o wide \nNAME                                            READY     STATUS      RESTARTS   AGE       IP             NODE            NOMINATED NODE\nredis-app-0                                     1/1       Running     0          2h        172.17.24.3    192.168.0.144   <none>\nredis-app-1                                     1/1       Running     0          2h        172.17.63.8    192.168.0.148   <none>\nredis-app-2                                     1/1       Running     0          2h        172.17.24.8    192.168.0.144   <none>\nredis-app-3                                     1/1       Running     0          2h        172.17.63.9    192.168.0.148   <none>\nredis-app-4                                     1/1       Running     0          2h        172.17.24.9    192.168.0.144   <none>\nredis-app-5                                     1/1       ContainerCreating     0          2h        172.17.63.10   192.168.0.148   <none>\n```\n\nAs shown, the pods are created one by one in the order {0…N-1}; note that redis-app-1 does not start until redis-app-0 has reached the Running state.\n\nAt the same time, each pod receives a DNS name inside the cluster, in the format $(podname).$(service name).$(namespace).svc.cluster.local, i.e.:\n\n```bash\nredis-app-0.redis-service.default.svc.cluster.local\nredis-app-1.redis-service.default.svc.cluster.local\n... and so on ...\n```\n\nWe can verify this:\n\n```bash\n#kubectl run --rm curl --image=radial/busyboxplus:curl -it\nkubectl run --rm -i --tty busybox --image=busybox:1.28 /bin/sh\n\n$ nslookup redis-app-0.redis-service   # note the format: $(podname).$(service name).$(namespace)\nServer:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      redis-app-0.redis-service\nAddress 1: 172.17.24.3 redis-app-0.redis-service.default.svc.cluster.local\n```\n\nInside the K8S cluster, the pods can use these domain names to reach each other. The same check as a one-liner with the busybox image:\n\n```bash\n$ kubectl run -it --rm --image=busybox:1.28 --restart=Never busybox -- nslookup redis-app-0.redis-service\nServer:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      redis-app-0.redis-service\nAddress 1: 172.17.24.3 redis-app-0.redis-service.default.svc.cluster.local\npod \"busybox\" deleted\n```\n\nredis-app-0 resolves to 172.17.24.3. If a Redis pod migrates or restarts (you can test this by deleting one manually), its IP changes, but the pod's domain name, SRV records and A record do not.\n\nWe can also see that the PVs created earlier have all been bound:\n\n```bash\n$ kubectl get pv|grep nfs-pv\nnfs-pv1                                    20Gi       RWX            Retain           Bound      default/redis-data-redis-app-1                  nfs                            65s\nnfs-pv2                                    20Gi       RWX            Retain           Bound      default/redis-data-redis-app-0                  nfs                            65s\nnfs-pv3                                    20Gi       RWX            Retain           Bound      default/redis-data-redis-app-2                  nfs                            65s\nnfs-pv4                                    20Gi       RWX            Retain           Bound      default/redis-data-redis-app-5                  nfs                            65s\nnfs-pv5                                    20Gi       RWX            Retain           Bound      default/redis-data-redis-app-3                  nfs                            65s\nnfs-pv6                                    20Gi       RWX            Retain           Bound      default/redis-data-redis-app-4                  nfs                            65s\n\n# Check the pvc resources\n$ kubectl get pvc\nNAME                     STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE\nredis-data-redis-app-0   Bound    nfs-pv2   20Gi       RWX            nfs            96s\nredis-data-redis-app-1   Bound    nfs-pv1   20Gi       RWX            nfs            86s\nredis-data-redis-app-2   Bound    nfs-pv3   20Gi       RWX            nfs            75s\nredis-data-redis-app-3   Bound    nfs-pv5   20Gi       RWX            nfs            69s\nredis-data-redis-app-4   Bound    nfs-pv6   20Gi       RWX            nfs            62s\nredis-data-redis-app-5   Bound    nfs-pv4   20Gi       RWX            nfs            56s\n```\n\n
## 6. Initialize the Redis cluster\n\nWith the six Redis pods created, the cluster still has to be initialized with the familiar redis-trib tool.\n\nCreate an Ubuntu container\n\nThe Redis cluster can only be initialized after all nodes are up, and baking that logic into the StatefulSet itself would be complex and inefficient. Credit to the original project's author for the approach used here: run one extra container in K8S dedicated to managing and controlling certain services inside the cluster. So we start an Ubuntu container, install redis-trib in it, and initialize the Redis cluster from there:\n\n```bash\n# 1. Create an ubuntu container\nkubectl run -it ubuntu --image=ubuntu --restart=Never /bin/bash\n\n# Enter the container\nkubectl exec -it ubuntu -- /bin/bash\n\n# 2. Switch to Aliyun's Ubuntu mirrors\n$ cat > /etc/apt/sources.list << EOF\ndeb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse\ndeb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse\n\ndeb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse\ndeb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse\n\ndeb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse\ndeb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse\n\ndeb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse\ndeb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse\n\ndeb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse\ndeb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse\nEOF\n\n# 3. Then install the base software environment required by the original project:\napt-get update\napt-get install -y vim wget python2.7 python-pip redis-tools dnsutils\n\n# 4. Initialize the cluster\n# First, install redis-trib\npip install redis-trib==0.5.1\n\n# Then create a cluster containing only the master nodes\nredis-trib.py create \\\n  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \\\n  `dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \\\n  `dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379\n\n# Next, attach a slave to each master\nredis-trib.py replicate \\\n  --master-addr `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \\\n  --slave-addr `dig +short redis-app-3.redis-service.default.svc.cluster.local`:6379\n\nredis-trib.py replicate \\\n  --master-addr `dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \\\n  --slave-addr `dig +short redis-app-4.redis-service.default.svc.cluster.local`:6379\n\nredis-trib.py replicate \\\n  --master-addr `dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379 \\\n  --slave-addr `dig +short redis-app-5.redis-service.default.svc.cluster.local`:6379\n\n# At this point the Redis cluster is fully created; connect to any Redis pod to verify:\n$ kubectl exec -it redis-app-2 -- /bin/bash\nroot@redis-app-2:/data# /usr/local/bin/redis-cli -c\n127.0.0.1:6379> cluster nodes\n5d3e77f6131c6f272576530b23d1cd7592942eec 172.17.24.3:6379@16379 master - 0 1559628533000 1 connected 0-5461\na4b529c40a920da314c6c93d17dc603625d6412c 172.17.63.10:6379@16379 master - 0 1559628531670 6 connected 10923-16383\n368971dc8916611a86577a8726e4f1f3a69c5eb7 172.17.24.9:6379@16379 slave 0025e6140f85cb243c60c214467b7e77bf819ae3 0 1559628533672 4 connected\n0025e6140f85cb243c60c214467b7e77bf819ae3 172.17.63.8:6379@16379 master - 0 1559628533000 2 connected 5462-10922\n6d5ee94b78b279e7d3c77a55437695662e8c039e 172.17.24.8:6379@16379 myself,slave a4b529c40a920da314c6c93d17dc603625d6412c 0 1559628532000 5 connected\n2eb3e06ce914e0e285d6284c4df32573e318bc01 172.17.63.9:6379@16379 slave 5d3e77f6131c6f272576530b23d1cd7592942eec 0 1559628533000 3 connected\n127.0.0.1:6379> cluster info\ncluster_state:ok\ncluster_slots_assigned:16384\ncluster_slots_ok:16384\ncluster_slots_pfail:0\ncluster_slots_fail:0\ncluster_known_nodes:6\ncluster_size:3\ncluster_current_epoch:6\ncluster_my_epoch:6\ncluster_stats_messages_ping_sent:14910\ncluster_stats_messages_pong_sent:15139\ncluster_stats_messages_sent:30049\ncluster_stats_messages_ping_received:15139\ncluster_stats_messages_pong_received:14910\ncluster_stats_messages_received:30049\n127.0.0.1:6379> \n\n# The data Redis persisted can also be inspected on the NFS server:\n$ ll /data/nfs/redis/pv3\ntotal 12\n-rw-r--r-- 1 root root  92 Jun  4 11:36 appendonly.aof\n-rw-r--r-- 1 root root 175 Jun  4 11:36 dump.rdb\n-rw-r--r-- 1 root root 794 Jun  4 11:49 nodes.conf\n```\n\n
## 7. Create a Service for client access\n\nEarlier we created the headless service that backs the StatefulSet, but it has no cluster IP, so it cannot be used by clients. We therefore create an additional Service dedicated to providing access and load balancing for the Redis cluster:\n\n```bash\n# Delete the service\nkubectl delete -f redis-access-service.yaml\n\n# Write the yaml\ncat >redis-access-service.yaml<<\\EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: redis-access-service\n  labels:\n    app: redis\nspec:\n  type: NodePort\n  ports:\n  - name: redis-port\n    protocol: \"TCP\"\n    port: 6379\n    targetPort: 6379\n    nodePort: 30010\n  selector:\n    app: redis\n    appCluster: redis-cluster\nEOF\n\n# As above, the Service is named redis-access-service, exposes port 6379 inside the K8S cluster, and load-balances across the pods labeled app: redis and appCluster: redis-cluster.\n\n# Create the service\nkubectl apply -f redis-access-service.yaml\n\n# Check the svc\n$ kubectl get svc redis-access-service -o wide\nNAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE   SELECTOR\nredis-access-service   NodePort   10.111.59.191   <none>        6379:30010/TCP   83m   app=redis,appCluster=redis-cluster\n\n# As above, any application inside the K8S cluster can reach the Redis cluster via 10.111.59.191:6379; for easier testing, the Service also maps NodePort 30010 on the hosts.\n# Describe the svc\n$ kubectl describe svc redis-access-service\nName:                     redis-access-service\nNamespace:                default\nLabels:                   app=redis\nAnnotations:              kubectl.kubernetes.io/last-applied-configuration:\n                            {\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"redis\"},\"name\":\"redis-access-service\",\"namespace\":\"defau...\nSelector:                 app=redis,appCluster=redis-cluster\nType:                     NodePort\nIP:                       10.111.59.191\nPort:                     redis-port  6379/TCP\nTargetPort:               6379/TCP\nNodePort:                 redis-port  30010/TCP\nEndpoints:                10.244.1.230:6379,10.244.1.231:6379,10.244.1.232:6379 + 3 more...\nSession Affinity:         None\nExternal Traffic Policy:  Cluster\nEvents:                   <none>\n\n# In-cluster test (via the service IP)\nyum install redis -y\n\nredis-cli -h 10.111.59.191 -p 6379 -c\n10.111.59.191:6379> CLUSTER info\ncluster_state:ok\ncluster_slots_assigned:16384\ncluster_slots_ok:16384\ncluster_slots_pfail:0\ncluster_slots_fail:0\ncluster_known_nodes:5\ncluster_size:3\ncluster_current_epoch:3\ncluster_my_epoch:3\ncluster_stats_messages_ping_sent:766\ncluster_stats_messages_pong_sent:790\ncluster_stats_messages_meet_sent:2\ncluster_stats_messages_sent:1558\ncluster_stats_messages_ping_received:787\ncluster_stats_messages_pong_received:768\ncluster_stats_messages_meet_received:3\ncluster_stats_messages_received:1558\n\n# Test via the host port (in cluster mode, -c)\nredis-cli -h 10.198.1.156 -p 30010 -c\n10.198.1.156:30010> cluster info\ncluster_state:ok\ncluster_slots_assigned:16384\ncluster_slots_ok:16384\ncluster_slots_pfail:0\ncluster_slots_fail:0\ncluster_known_nodes:5\ncluster_size:3\ncluster_current_epoch:3\ncluster_my_epoch:2\ncluster_stats_messages_ping_sent:907\ncluster_stats_messages_pong_sent:901\ncluster_stats_messages_meet_sent:3\ncluster_stats_messages_sent:1811\ncluster_stats_messages_ping_received:900\ncluster_stats_messages_pong_received:910\ncluster_stats_messages_meet_received:1\ncluster_stats_messages_received:1811\n```\n\n
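To watch the cluster spread writes across slots, a short loop through the access service is enough (a sketch; 10.111.59.191 is the ClusterIP shown above and will differ per cluster):\n\n```bash\n# -c makes redis-cli follow MOVED redirections, so the keys land on different masters\nfor i in $(seq 1 10); do\n  redis-cli -h 10.111.59.191 -p 6379 -c set key${i} val${i}\ndone\n```\n\n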
# 5. Testing master-slave failover\n\nWith the Redis cluster running on K8S, the biggest question is whether its high-availability mechanism still works. We can pick any master pod, such as redis-app-0, and test the failover:\n\n```bash\n[root@master redis]# kubectl get pods redis-app-0 -o wide\nNAME          READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE\nredis-app-0   1/1       Running   0          3h        172.17.24.3   192.168.0.144   <none>\n\n# Look inside redis-app-0:\n[root@master redis]# kubectl exec -it redis-app-0 /bin/bash\nroot@redis-app-0:/data# /usr/local/bin/redis-cli -c\n127.0.0.1:6379> role\n1) \"master\"\n2) (integer) 13370\n3) 1) 1) \"172.17.63.9\"\n      2) \"6379\"\n      3) \"13370\"\n127.0.0.1:6379> \n\n# As shown, redis-app-0 is a master whose slave is 172.17.63.9, i.e. redis-app-3.\n\n# Now delete redis-app-0 manually:\n[root@master redis]# kubectl delete pod redis-app-0\npod \"redis-app-0\" deleted\n[root@master redis]#  kubectl get pod redis-app-0 -o wide\nNAME          READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE\nredis-app-0   1/1       Running   0          4m        172.17.24.3   192.168.0.144   <none>\n\n# Enter the recreated redis-app-0:\n[root@master redis]# kubectl exec -it redis-app-0 /bin/bash\nroot@redis-app-0:/data# /usr/local/bin/redis-cli -c\n127.0.0.1:6379> role\n1) \"slave\"\n2) \"172.17.63.9\"\n3) (integer) 6379\n4) \"connected\"\n5) (integer) 13958\n\n# As shown, redis-app-0 has become a slave, subordinate to its former slave 172.17.63.9, i.e. redis-app-3\n```\n\n# 6. Open questions\n\n1. Pod IPs change on restart; how does the cluster stay healthy?\n\nYou may wonder at this point why little of the above actually showcases the StatefulSet: its stable identifiers redis-app-* were only used while initializing the cluster, and neither later pod-to-pod communication nor the configuration files rely on them. That is true; this Redis deployment does not show off StatefulSet's strengths as clearly as, say, a Zookeeper cluster would, but the walkthrough is still instructive.\n\nSo why does failover keep working without the stable names? That comes from Redis itself. Every node in a Redis cluster has its own NodeId (persisted in the auto-generated nodes.conf), and the NodeId does not change when the IP does, which makes it another kind of fixed network identity. Even if a Redis pod restarts, it reloads its saved NodeId and keeps its old identity. We can inspect redis-app-1's nodes.conf on the NFS server:\n\n```bash\n$ cat /usr/local/k8s/redis/pv1/nodes.conf \n96689f2018089173e528d3a71c4ef10af68ee462 192.168.169.209:6379@16379 slave d884c4971de9748f99b10d14678d864187a9e5d3 0 1526460952651 4 connected\n237d46046d9b75a6822f02523ab894928e2300e6 192.168.169.200:6379@16379 slave c15f378a604ee5b200f06cc23e9371cbc04f4559 0 1526460952651 1 connected\nc15f378a604ee5b200f06cc23e9371cbc04f4559 192.168.169.197:6379@16379 master - 0 1526460952651 1 connected 10923-16383\nd884c4971de9748f99b10d14678d864187a9e5d3 192.168.169.205:6379@16379 master - 0 1526460952651 4 connected 5462-10922\nc3b4ae23c80ffe31b7b34ef29dd6f8d73beaf85f 192.168.169.198:6379@16379 myself,slave c8a8f70b4c29333de6039c47b2f3453ed11fb5c2 0 1526460952565 3 connected\nc8a8f70b4c29333de6039c47b2f3453ed11fb5c2 192.168.169.201:6379@16379 master - 0 1526460952651 6 connected 0-5461\nvars currentEpoch 6 lastVoteEpoch 4\n```\n\nAs above, the first column is the NodeId, which is stable; the second column is the IP and port, which may change.\n\nTwo scenarios where the NodeId matters:\n\nWhen a slave pod reconnects with a new IP, the master sees the unchanged NodeId and still treats it as the same slave as before.\n\nWhen a master pod goes offline, the cluster elects a new master from its slaves; when the old master comes back, the cluster recognizes its NodeId and turns it into a slave of the new master.\n\n
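The NodeId can also be read straight from a running pod, which makes it easy to confirm it survives a restart (a small check that is not in the original article; CLUSTER MYID is a standard Redis command):\n\n```bash\nkubectl exec -it redis-app-1 -- redis-cli cluster myid\n# delete the pod, wait for the StatefulSet to recreate it, then ask again:\nkubectl delete pod redis-app-1\nkubectl exec -it redis-app-1 -- redis-cli cluster myid   # prints the same id as before\n```\n\n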
2. PVC fails to bind (storageclass.storage.k8s.io \"nfs\" not found)\n\n```\n$ kubectl describe pvc redis-data-redis-app-0\n\nWarning  ProvisioningFailed  14s (x2 over 24s)  persistentvolume-controller  storageclass.storage.k8s.io \"nfs\" not found\n\n# Cause: the PVs were created without specifying\nstorageClassName: nfs\n```\n\nReferences:\n\nhttps://cloud.tencent.com/developer/article/1392872  Scaling a Redis cluster dynamically\n\nhttps://blog.csdn.net/zhutongcloud/article/details/90768390  Deploying a Redis cluster\n\nhttps://www.jianshu.com/p/65c4baadf5d9  Why the NodeId keeps Redis failover working\n"
  },
  {
    "path": "redis/README.md",
    "content": "参考资料：\n\nhttps://mp.weixin.qq.com/s/noVUEO5tbdcdx8AzYNrsMw  Kubernetes上通过sts测试Redis Cluster集群\n"
  },
  {
    "path": "rke/README.md",
    "content": "# 一、基础配置优化\n```\nchattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*\ngroupadd docker\nuseradd -g docker docker\necho \"1Qaz2Wsx3Edc\" | passwd --stdin docker\nusermod docker -G docker  #注意这里需要将数组改为docker属组，不然会报错\n\nsetenforce 0\nsed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # 关闭selinux\nsystemctl daemon-reload\nsystemctl stop firewalld.service && systemctl disable firewalld.service # 关闭防火墙\n#echo 'LANG=\"en_US.UTF-8\"' >> /etc/profile; source /etc/profile # 修改系统语言\nln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime # 修改时区（如果需要修改）\n\n# 性能调优\ncat >> /etc/sysctl.conf<<EOF\nnet.bridge.bridge-nf-call-iptables=1\nnet.ipv4.neigh.default.gc_thresh1=4096\nnet.ipv4.neigh.default.gc_thresh2=6144\nnet.ipv4.neigh.default.gc_thresh3=8192\nEOF\nsysctl -p\n\ncat <<EOF >  /etc/sysctl.d/k8s.conf\nnet.ipv4.ip_forward=1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nvm.swappiness=0\nEOF\nsysctl --system\n\n#docker用户免密登录\nmkdir -p /home/docker/.ssh/\nchmod 700 /home/docker/.ssh/\necho 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7bRm20od1b3rzW3ZPLB5NZn3jQesvfiz2p0WlfcYJrFHfF5Ap0ubIBUSQpVNLn94u8ABGBLboZL8Pjo+rXQPkIcObJxoKS8gz6ZOxcxJhl11JKxTz7s49nNYaNDIwB13KaNpvBEHVoW3frUnP+RnIKIIDsr1QCr9t64D9TE99mbNkEvDXr021UQi12Bf4KP/8gfYK3hDMRuX634/K8yu7+IaO1vEPNT8HDo9XGcvrOD1QGV+is8mrU53Xa2qTsto7AOb2J8M6n1mSZxgNz2oGc6ZDuN1iMBfHm4O/s5VEgbttzB2PtI0meKeaLt8VaqwTth631EN1ryjRYUuav7bf docker@k8s-master-01' > /home/docker/.ssh/authorized_keys\nchmod 400 /home/docker/.ssh/authorized_keys\n```\n\n## 二、基础环境准备\n\n```\nmkdir -p /etc/yum.repos.d_bak/\nmv /etc/yum.repos.d/* /etc/yum.repos.d_bak/\ncurl http://mirrors.aliyun.com/repo/Centos-7.repo >/etc/yum.repos.d/Centos-7.repo\ncurl http://mirrors.aliyun.com/repo/epel-7.repo >/etc/yum.repos.d/epel-7.repo\nsed -i '/aliyuncs/d' /etc/yum.repos.d/Centos-7.repo\nyum clean all && yum makecache fast\n\nyum -y install yum-utils\nyum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo\nyum install -y device-mapper-persistent-data lvm2\n\nyum install docker-ce -y\n\n#从docker1.13版本开始，docker会自动设置iptables的FORWARD默认策略为DROP，所以需要修改docker的启动配置文件/usr/lib/systemd/system/docker.service\n\ncat > /usr/lib/systemd/system/docker.service << \\EOF\n[Unit]\nDescription=Docker Application Container Engine\nDocumentation=https://docs.docker.com\nBindsTo=containerd.service\nAfter=network-online.target firewalld.service containerd.service\nWants=network-online.target\nRequires=docker.socket\n[Service]\nType=notify\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock\nExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT\nExecReload=/bin/kill -s HUP \\$MAINPID\nTimeoutSec=0\nRestartSec=2\nRestart=always\nStartLimitBurst=3\nStartLimitInterval=60s\nLimitNOFILE=infinity\nLimitNPROC=infinity\nLimitCORE=infinity\nTasksMax=infinity\nDelegate=yes\nKillMode=process\n[Install]\nWantedBy=multi-user.target\nEOF\n\n#设置加速器\ncurl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://41935bf4.m.daocloud.io\n#这个脚本在centos 7上有个bug,脚本会改变docker的配置文件/etc/docker/daemon.json但修改的时候多了一个逗号,导致docker无法启动\n\n#或者直接执行这个指令\ntee /etc/docker/daemon.json <<-'EOF'\n{\n\"registry-mirrors\": [\"https://1z45x7d0.mirror.aliyuncs.com\"],\n\"insecure-registries\": [\"192.168.56.11:5000\"],\n\"storage-driver\": \"overlay2\",\n\"log-driver\": \"json-file\",\n\"log-opts\": {\n    \"max-size\": \"100m\",\n    \"max-file\": \"3\"\n    }\n}\nEOF\nsystemctl 
systemctl daemon-reload\nsystemctl restart docker\n\n# Verify the mirror is in effect\nroot># docker info\n Registry Mirrors:\n  https://1z45x7d0.mirror.aliyuncs.com/   # the mirror setting has taken effect\n Live Restore Enabled: false\n```\n\n# 3. Installing RKE\n\nInstalling with RKE requires docker to be installed first and passwordless SSH configured for root and the normal user.\n\n1. Download RKE\n```\n# Download a release from https://github.com/rancher/rke/releases and upload it to any node; this article uses v0.2.8.\n\nwget https://github.com/rancher/rke/releases/download/v0.2.8/rke_linux-amd64\nchmod 777 rke_linux-amd64\nmv rke_linux-amd64 /usr/local/bin/rke\n```\n\n2. Create the cluster configuration file\n```\ncat >/tmp/cluster.yml <<EOF\nnodes:\n    - address: 192.168.56.11\n      user: docker\n      role:\n        - controlplane\n        - etcd\n        - worker\n    - address: 192.168.56.12\n      user: docker\n      role:\n        - controlplane\n        - etcd\n        - worker\n    - address: 192.168.56.13\n      user: docker\n      role:\n        - controlplane\n        - etcd\n        - worker\ncluster_name: paas_cluster\nEOF\n\nchmod 777 /tmp/cluster.yml\n```\n\n3. Create the k8s cluster (note: switch to the normal user for this step)\n\n```\nsu - docker \nrke up --config /tmp/cluster.yml\n\n# Configure kubectl access for root (the /tmp directory was used above, so kube_config_cluster.yml is also written to /tmp)\nsu - root\nmkdir -p /root/.kube\ncp /tmp/kube_config_cluster.yml /root/.kube/config\n\n# The master02 and master03 nodes need the file synced as well\nssh root@k8s-master-02 mkdir -p /root/.kube\nscp /root/.kube/config root@k8s-master-02:/root/.kube/config\n\nssh root@k8s-master-03 mkdir -p /root/.kube\nscp /root/.kube/config root@k8s-master-03:/root/.kube/config\n\n# Check logs\ndocker logs kube-proxy\n```\n\n4. Install kubectl\n```\ncurl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl\nchmod +x kubectl\nmv kubectl /usr/local/bin/kubectl\nkubectl version\n```\n\n5. Check the k8s cluster pod status\n```\n[root@master01 ~]# kubectl get pods --all-namespaces\nNAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE\ningress-nginx   default-http-backend-7f8fbb85db-rxs9r     1/1     Running     0          106s\ningress-nginx   nginx-ingress-controller-9vhbj            1/1     Running     0          10m\ningress-nginx   nginx-ingress-controller-lhvk4            1/1     Running     0          10m\nkube-system     canal-9lhlr                               2/2     Running     0          10m\nkube-system     canal-xxz5p                               2/2     Running     0          10m\nkube-system     kube-dns-5fd74c7488-54dgp                 3/3     Running     0          10m\nkube-system     kube-dns-autoscaler-c89df977f-fb42z       1/1     Running     0          10m\nkube-system     metrics-server-7fbd549b78-8hftl           1/1     Running     0          10m\nkube-system     rke-ingress-controller-deploy-job-8c9c2   0/1     Completed   0          10m\nkube-system     rke-kubedns-addon-deploy-job-lp5tc        0/1     Completed   0          10m\nkube-system     rke-metrics-addon-deploy-job-j585d        0/1     Completed   0          10m\nkube-system     rke-network-plugin-deploy-job-xssrc       0/1     Completed   0          10m\n\nOnly Running and Completed are normal pod states; for anything else, inspect the pod:\n\nkubectl describe pod pod-xxx -n namespace\n```\n\n6. Command completion\n```\nyum install bash-completion -y\n\nsource <(kubectl completion bash)\necho \"source <(kubectl completion bash)\" >> ~/.bashrc\n```\n\n# 4. Deploying rancher on the k8s cluster with helm\n\n1. Install and configure the helm client\n```\n# One-step install with the official script\ncurl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh\nchmod 700 get_helm.sh\n./get_helm.sh\n\n\n# Or download and install manually\n# Download Helm \nwget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz\n# Unpack Helm\ntar -zxvf helm-v2.9.1-linux-amd64.tar.gz\n# Copy the client binary into the bin directory\ncp linux-amd64/helm /usr/local/bin/\n```\n\n2. Give the helm client access to the k8s cluster\n```\nkubectl -n kube-system create serviceaccount tiller\nkubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller\n\n```\n3. Deploy the helm server (tiller) to the k8s cluster\n```\nhelm init --service-account tiller --tiller-image hongxiaolu/tiller:v2.12.3 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts\n```\n4. Add the chart repository for the helm client\n```\nhelm repo add rancher-stable https://releases.rancher.com/server-charts/stable\n```\n5. Confirm the rancher chart repository is available\n```\nhelm search rancher\n```\n```\n# Install the certificate manager\nhelm install stable/cert-manager \\\n  --name cert-manager \\\n  --namespace kube-system\n  \n kubectl get pods --all-namespaces|grep cert-manager\n  \n  \n helm install rancher-stable/rancher \\\n  --name rancher \\\n  --namespace cattle-system \\\n  --set hostname=acai.rancher.com\n  \n```\n\n
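After the chart installs, the rollout can be watched until the rancher pods are ready (a routine check, assuming the cattle-system namespace and release name used above):\n\n```\nkubectl -n cattle-system rollout status deploy/rancher\nkubectl -n cattle-system get pods\n```\n\n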
References:\n\nhttp://www.acaiblog.cn/2019/03/15/RKE%E9%83%A8%E7%BD%B2rancher%E9%AB%98%E5%8F%AF%E7%94%A8%E9%9B%86%E7%BE%A4/\n\nhttps://blog.csdn.net/login_sonata/article/details/93847888\n"
  },
  {
    "path": "rke/cluster.yml",
    "content": "# If you intened to deploy Kubernetes in an air-gapped environment,\n# please consult the documentation on how to configure custom RKE images.\nnodes:\n- address: 10.198.1.156\n  port: \"22\"\n  internal_address: \"\"\n  role:\n  - controlplane\n  - worker\n  - etcd\n  hostname_override: \"\"\n  user: k8s\n  docker_socket: /var/run/docker.sock\n  ssh_key: \"\"\n  ssh_key_path: ~/.ssh/id_rsa\n  labels: {}\n- address: 10.198.1.157\n  port: \"22\"\n  internal_address: \"\"\n  role:\n  - controlplane\n  - worker\n  - etcd\n  hostname_override: \"\"\n  user: k8s\n  docker_socket: /var/run/docker.sock\n  ssh_key: \"\"\n  ssh_key_path: ~/.ssh/id_rsa\n  labels: {}\n- address: 10.198.1.158\n  port: \"22\"\n  internal_address: \"\"\n  role:\n  - worker\n  hostname_override: \"\"\n  user: k8s\n  docker_socket: /var/run/docker.sock\n  ssh_key: \"\"\n  ssh_key_path: ~/.ssh/id_rsa\n  labels: {}\n- address: 10.198.1.159\n  port: \"22\"\n  internal_address: \"\"\n  role:\n  - worker\n  hostname_override: \"\"\n  user: k8s\n  docker_socket: /var/run/docker.sock\n  ssh_key: \"\"\n  ssh_key_path: ~/.ssh/id_rsa\n  labels: {}\n- address: 10.198.1.160\n  port: \"22\"\n  internal_address: \"\"\n  role:\n  - worker\n  hostname_override: \"\"\n  user: k8s\n  docker_socket: /var/run/docker.sock\n  ssh_key: \"\"\n  ssh_key_path: ~/.ssh/id_rsa\n  labels: {}\nservices:\n  etcd:\n    image: \"\"\n    extra_args: {}\n    extra_binds: []\n    extra_env: []\n    external_urls: []\n    ca_cert: \"\"\n    cert: \"\"\n    key: \"\"\n    path: \"\"\n    snapshot: null\n    retention: \"\"\n    creation: \"\"\n  kube-api:\n    image: \"\"\n    extra_args:\n      enable-admission-plugins: NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Initializers\n      runtime-config: api/all=true,admissionregistration.k8s.io/v1alpha1=true\n    extra_binds: []\n    extra_env: []\n    service_cluster_ip_range: 10.44.0.0/16\n    service_node_port_range: \"\"\n    pod_security_policy: true\n  kube-controller:\n    image: \"\"\n    extra_args: {}\n    extra_binds: []\n    extra_env: []\n    cluster_cidr: 10.46.0.0/16\n    service_cluster_ip_range: 10.44.0.0/16\n  scheduler:\n    image: \"\"\n    extra_args: {}\n    extra_binds: []\n    extra_env: []\n  kubelet:\n    image: \"\"\n    extra_args: \n      enforce-node-allocatable: \"pods,kube-reserved,system-reserved\"\n      system-reserved-cgroup: \"/system.slice\"\n      system-reserved: \"cpu=500m,memory=1Gi\"\n      kube-reserved-cgroup: \"/system.slice/kubelet.service\"\n      kube-reserved: \"cpu=1,memory=2Gi\"\n      eviction-soft: \"memory.available<10%,nodefs.available<10%,imagefs.available<10%\"\n      eviction-soft-grace-period: \"memory.available=2m,nodefs.available=2m,imagefs.available=2m\"\n    extra_binds: []\n    extra_env: []\n    cluster_domain: k8s.test.net\n    infra_container_image: \"\"\n    cluster_dns_server: 10.44.0.10\n    fail_swap_on: false\n  kubeproxy:\n    image: \"\"\n    extra_args: {}\n    extra_binds: []\n    extra_env: []\nnetwork:\n  plugin: calico\n  options: {}\nauthentication:\n  strategy: x509\n  options: {}\n  sans: []\naddons: \"\"\naddons_include: []\nsystem_images:\n  etcd: rancher/coreos-etcd:v3.2.24\n  alpine: rancher/rke-tools:v0.1.25\n  nginx_proxy: rancher/rke-tools:v0.1.25\n  cert_downloader: rancher/rke-tools:v0.1.25\n  kubernetes_services_sidecar: rancher/rke-tools:v0.1.25\n  kubedns: 
rancher/k8s-dns-kube-dns-amd64:1.14.13\n  dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.13\n  kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.13\n  kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0\n  kubernetes: rancher/hyperkube:v1.12.6-rancher1\n  flannel: rancher/coreos-flannel:v0.10.0\n  flannel_cni: rancher/coreos-flannel-cni:v0.3.0\n  calico_node: rancher/calico-node:v3.1.3\n  calico_cni: rancher/calico-cni:v3.1.3\n  calico_controllers: \"\"\n  calico_ctl: rancher/calico-ctl:v2.0.0\n  canal_node: rancher/calico-node:v3.1.3\n  canal_cni: rancher/calico-cni:v3.1.3\n  canal_flannel: rancher/coreos-flannel:v0.10.0\n  wave_node: weaveworks/weave-kube:2.1.2\n  weave_cni: weaveworks/weave-npc:2.1.2\n  pod_infra_container: rancher/pause-amd64:3.1\n  ingress: rancher/nginx-ingress-controller:0.21.0-rancher1\n  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4\n  metrics_server: rancher/metrics-server-amd64:v0.3.1\nssh_key_path: ~/.ssh/id_rsa\nssh_agent_auth: false\nauthorization:\n  mode: rbac\n  options: {}\nignore_docker_version: false\nkubernetes_version: \"\"\nprivate_registries: []\ningress:\n  provider: \"\"\n  options: {}\n  node_selector: {}\n  extra_args: {}\ncluster_name: \"\"\ncloud_provider:\n  name: \"\"\nprefix_path: \"\"\naddon_job_timeout: 0\nbastion_host:\n  address: \"\"\n  port: \"\"\n  user: \"\"\n  ssh_key: \"\"\n  ssh_key_path: \"\"\nmonitoring:\n  provider: \"\"\n  options: {}\n"
  },
  {
    "path": "tools/Linux Kernel 升级.md",
    "content": "# Linux Kernel 升级\n\nk8s,docker,cilium等很多功能、特性需要较新的linux内核支持，所以有必要在集群部署前对内核进行升级；CentOS7 和 Ubuntu16.04可以很方便的完成内核升级。\n\n## CentOS7\n\n红帽企业版 Linux 仓库网站 https://www.elrepo.org，主要提供各种硬件驱动（显卡、网卡、声卡等）和内核升级相关资源；兼容 CentOS7 内核升级。如下按照网站提示载入elrepo公钥及最新elrepo版本，然后按步骤升级内核（以安装长期支持版本 kernel-lt 为例）\n\n``` bash\n# 载入公钥\nrpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org\n# 安装ELRepo\nrpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm\n# 载入elrepo-kernel元数据\nyum --disablerepo=\\* --enablerepo=elrepo-kernel repolist\n# 查看可用的rpm包\nyum --disablerepo=\\* --enablerepo=elrepo-kernel list kernel*\n# 安装长期支持版本的kernel\nyum --disablerepo=\\* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64\n# 删除旧版本工具包\nyum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y\n# 安装新版本工具包\nyum --disablerepo=\\* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64\n\n# 查看默认启动顺序\nawk -F\\' '$1==\"menuentry \" {print $2}' /etc/grub2.cfg  \nCentOS Linux (4.4.208-1.el7.elrepo.x86_64) 7 (Core)\nCentOS Linux (3.10.0-1062.9.1.el7.x86_64) 7 (Core)\nCentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)\nCentOS Linux (0-rescue-292a31ba53a34a6aa077e3467b6f9541) 7 (Core)\n\n# 默认启动的顺序是从0开始，新内核是从头插入（目前位置在0，而4.4.4的是在1），所以需要选择0。\ngrub2-set-default 0\n\n# 将第一个内核作为默认内核\nsed -i 's/GRUB_DEFAULT=saved/GRUB_DEFAULT=0/g' /etc/default/grub\n\n# 更新 grub\ngrub2-mkconfig -o /boot/grub2/grub.cfg\n\n# 重启并检查\nreboot\n```\n\n## Ubuntu16.04\n\n``` bash\n打开 http://kernel.ubuntu.com/~kernel-ppa/mainline/ 并选择列表中选择你需要的版本（以4.16.3为例）。\n接下来，根据你的系统架构下载 如下.deb 文件：\nBuild for amd64 succeeded (see BUILD.LOG.amd64):\n  linux-headers-4.16.3-041603_4.16.3-041603.201804190730_all.deb\n  linux-headers-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb\n  linux-image-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb\n#安装后重启即可\n$ sudo dpkg -i *.deb\n```\n\n参考文档:\n\nhttps://github.com/easzlab/kubeasz/blob/master/docs/guide/kernel_upgrade.md  \n"
## Ubuntu16.04\n\nOpen http://kernel.ubuntu.com/~kernel-ppa/mainline/ and pick the version you need from the list (4.16.3 as the example). Then download the .deb files matching your architecture:\n\n``` bash\nBuild for amd64 succeeded (see BUILD.LOG.amd64):\n  linux-headers-4.16.3-041603_4.16.3-041603.201804190730_all.deb\n  linux-headers-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb\n  linux-image-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb\n# Install, then reboot\n$ sudo dpkg -i *.deb\n```\n\nReferences:\n\nhttps://github.com/easzlab/kubeasz/blob/master/docs/guide/kernel_upgrade.md  \n"
  },
  {
    "path": "tools/README.md",
    "content": "# 同步工具\n\n1、同步主机host文件\n```\n[root@master01 ~]# ./ssh_copy.sh /etc/hosts\nspawn scp /etc/hosts root@master01:/etc/hosts\nhosts                                                                                                                                              100%  440   940.4KB/s   00:00    \nspawn scp /etc/hosts root@master02:/etc/hosts\nhosts                                                                                                                                              100%  440   774.6KB/s   00:00    \nspawn scp /etc/hosts root@master03:/etc/hosts\nhosts                                                                                                                                              100%  440     1.4MB/s   00:00    \nspawn scp /etc/hosts root@slave01:/etc/hosts\nhosts                                                                                                                                              100%  440   912.6KB/s   00:00    \nspawn scp /etc/hosts root@slave02:/etc/hosts\nhosts                                                                                                                                              100%  440   826.8KB/s   00:00    \nspawn scp /etc/hosts root@slave03:/etc/hosts\nhosts \n```\n\n2、iptables多端口\n```bash\n#iptables多端口\n-A RH-Firewall-1-INPUT -s 13.138.33.20/32 -p tcp -m tcp -m multiport --dports 80,443,6443,20000:40000 -j ACCEPT\n\n#同步防火墙\n./ssh_copy.sh /etc/sysconfig/iptables\n```\n"
  },
  {
    "path": "tools/k8s域名解析coredns问题排查过程.md",
    "content": "参考资料：\n\nhttps://segmentfault.com/a/1190000019823091?utm_source=tag-newest\n"
  },
  {
    "path": "tools/kubernetes-node打标签.md",
    "content": "```\nkubectl get nodes -A --show-labels\n\n\nkubectl label nodes 10.199.1.159 node=10.199.1.159\nkubectl label nodes 10.199.1.160 node=10.199.1.160\n```\n"
  },
  {
    "path": "tools/kubernetes-常用操作.md",
    "content": "# 一、节点调度配置\n```\n[root@master01 ~]# kubectl get nodes -A       \nNAME          STATUS                     ROLES    AGE     VERSION\n10.19.2.246   Ready                      node     3h13m   v1.15.2\n10.19.2.247   Ready                      node     3h13m   v1.15.2\n10.19.2.248   Ready                      node     3h13m   v1.15.2\n10.19.2.56    Ready,SchedulingDisabled   master   4h55m   v1.15.2\n10.19.2.57    Ready,SchedulingDisabled   master   4h55m   v1.15.2\n10.19.2.58    Ready,SchedulingDisabled   master   4h55m   v1.15.2\n\n#方法一\n[root@master01 ~]# kubectl uncordon 10.19.2.56\nnode/10.19.2.56 uncordoned\n\n[root@master01 ~]# kubectl get nodes -A       \nNAME          STATUS                     ROLES    AGE     VERSION\n10.19.2.246   Ready                      node     3h13m   v1.15.2\n10.19.2.247   Ready                      node     3h13m   v1.15.2\n10.19.2.248   Ready                      node     3h13m   v1.15.2\n10.19.2.56    Ready                      master   4h56m   v1.15.2\n10.19.2.57    Ready,SchedulingDisabled   master   4h56m   v1.15.2\n10.19.2.58    Ready,SchedulingDisabled   master   4h56m   v1.15.2\n\n#方法二\n[root@master01 ~]# kubectl patch node 10.19.2.56 -p '{\"spec\":{\"unschedulable\":false}}'\nnode/10.19.2.56 patched\n\n[root@master01 ~]# kubectl get nodes -A\nNAME          STATUS                     ROLES    AGE     VERSION\n10.19.2.246   Ready                      node     3h17m   v1.15.2\n10.19.2.247   Ready                      node     3h17m   v1.15.2\n10.19.2.248   Ready                      node     3h17m   v1.15.2\n10.19.2.56    Ready                      master   5h      v1.15.2\n10.19.2.57    Ready,SchedulingDisabled   master   5h      v1.15.2\n10.19.2.58    Ready,SchedulingDisabled   master   5h      v1.15.2\n```\n\n# 二、标签查看\n```\n[root@master01 ~]# kubectl get nodes --show-labels\nNAME          STATUS                     ROLES    AGE     VERSION   LABELS\n10.19.2.246   Ready                      node     3h15m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.246,kubernetes.io/os=linux,kubernetes.io/role=node\n10.19.2.247   Ready                      node     3h15m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.247,kubernetes.io/os=linux,kubernetes.io/role=node\n10.19.2.248   Ready                      node     3h15m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.248,kubernetes.io/os=linux,kubernetes.io/role=node\n10.19.2.56    Ready                      master   4h57m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.56,kubernetes.io/os=linux,kubernetes.io/role=master\n10.19.2.57    Ready,SchedulingDisabled   master   4h57m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.57,kubernetes.io/os=linux,kubernetes.io/role=master\n10.19.2.58    Ready,SchedulingDisabled   master   4h57m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.58,kubernetes.io/os=linux,kubernetes.io/role=master\n```\n参考文档：\n\nhttps://blog.csdn.net/miss1181248983/article/details/88181434  Kubectl常用命令\n"
# 2. Viewing labels\n```\n[root@master01 ~]# kubectl get nodes --show-labels\nNAME          STATUS                     ROLES    AGE     VERSION   LABELS\n10.19.2.246   Ready                      node     3h15m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.246,kubernetes.io/os=linux,kubernetes.io/role=node\n10.19.2.247   Ready                      node     3h15m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.247,kubernetes.io/os=linux,kubernetes.io/role=node\n10.19.2.248   Ready                      node     3h15m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.248,kubernetes.io/os=linux,kubernetes.io/role=node\n10.19.2.56    Ready                      master   4h57m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.56,kubernetes.io/os=linux,kubernetes.io/role=master\n10.19.2.57    Ready,SchedulingDisabled   master   4h57m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.57,kubernetes.io/os=linux,kubernetes.io/role=master\n10.19.2.58    Ready,SchedulingDisabled   master   4h57m   v1.15.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.58,kubernetes.io/os=linux,kubernetes.io/role=master\n```\nReferences:\n\nhttps://blog.csdn.net/miss1181248983/article/details/88181434  Common kubectl commands\n"
  },
  {
    "path": "tools/kubernetes-批量删除Pods.md",
    "content": "# 一、批量删除处于Pending状态的pod\n```\nkubectl get pods | grep Pending | awk '{print $1}' | xargs kubectl delete pod\n```\n\n# 二、批量删除处于Evicted状态的pod\n```\nkubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod\n```\n\n参考文档：\n\nhttps://blog.csdn.net/weixin_39686421/article/details/80574131  kubernetes-批量删除Evicted Pods\n"
References:\n\nhttps://blog.csdn.net/weixin_39686421/article/details/80574131  Batch-deleting Evicted pods in kubernetes\n"
  },
  {
    "path": "tools/kubernetes访问外部mysql服务.md",
    "content": "`\nk8s访问集群外独立的服务最好的方式是采用Endpoint方式(可以看作是将k8s集群之外的服务抽象为内部服务)，以mysql服务为例\n`\n\n# 一、创建endpoints\n```bash\n#创建 mysql-endpoints.yaml\ncat > mysql-endpoints.yaml <<\\EOF\nkind: Endpoints\napiVersion: v1\nmetadata:\n  name: mysql-production\n  namespace: default\nsubsets:\n  - addresses:\n      - ip: 10.198.1.155\n    ports:\n      - port: 3306\nEOF\n\nkubectl apply -f mysql-endpoints.yaml \n```\n\n# 二、创建service\n```bash\n#创建 mysql-service.yaml\ncat > mysql-service.yaml <<\\EOF\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-production\nspec:\n  ports:\n    - port: 3306\nEOF\n\nkubectl apply -f mysql-service.yaml\n```\n\n# 三、测试连接数据库\n```bash\ncat > mysql-rc.yaml <<\\EOF\napiVersion: v1\nkind: ReplicationController\nmetadata:\n  name: mysql\nspec:\n  replicas: 1\n  selector:\n    app: mysql\n  template:\n    metadata:\n      labels:\n        app: mysql\n    spec:\n      containers:\n      - name: mysql\n        image: docker.io/mysql:5.7\n        imagePullPolicy: IfNotPresent\n        ports:\n        - containerPort: 3306\n        env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: \"123456\"\nEOF\n\nkubectl apply -f mysql-rc.yaml\n```\n参考资料：\n\nhttps://blog.csdn.net/hxpjava1/article/details/80040407   使用kubernetes访问外部服务mysql/redis\n"
References:\n\nhttps://blog.csdn.net/hxpjava1/article/details/80040407   Accessing external mysql/redis services from kubernetes\n"
  },
  {
    "path": "tools/ssh_copy.sh",
    "content": "#!/bin/bash\n\nfor i in `echo master01 master02 master03 slave01 slave02 slave03`;do\nexpect -c \"\nspawn scp $1 root@$i:$1\n    expect {\n            \\\"*yes/no*\\\" {send \\\"yes\\r\\\"; exp_continue}\n            \\\"*password*\\\" {send \\\"123456\\r\\\"; exp_continue}\n            \\\"*Password*\\\" {send \\\"123456\\r\\\";}\n    } \"\ndone\n"
  }
]