Full Code of Lancger/opsfull for AI

Repository: Lancger/opsfull
Branch: master
Commit: 5b36608dbe13
Files: 104
Total size: 533.3 KB

Directory structure:
gitextract_y9znwe6w/

├── LICENSE
├── README.md
├── apps/
│   ├── README.md
│   ├── nginx/
│   │   └── README.md
│   ├── ops/
│   │   └── README.md
│   └── wordpress/
│       ├── README.md
│       ├── 基于PV_PVC部署Wordpress 示例.md
│       └── 部署Wordpress 示例.md
├── components/
│   ├── README.md
│   ├── cronjob/
│   │   └── README.md
│   ├── dashboard/
│   │   ├── Kubernetes-Dashboard v2.0.0.md
│   │   └── README.md
│   ├── external-storage/
│   │   ├── 0、nfs服务端搭建.md
│   │   ├── 1、k8s的pv和pvc简述.md
│   │   ├── 2、静态配置PV和PVC.md
│   │   ├── 3、动态申请PV卷.md
│   │   ├── 4、Kubernetes之MySQL持久存储和故障转移.md
│   │   ├── 5、Kubernetes之Nginx动静态PV持久存储.md
│   │   └── README.md
│   ├── heapster/
│   │   └── README.md
│   ├── ingress/
│   │   ├── 0.通俗理解Kubernetes中Service、Ingress与Ingress Controller的作用与关系.md
│   │   ├── 1.kubernetes部署Ingress-nginx单点和高可用.md
│   │   ├── 1.外部服务发现之Ingress介绍.md
│   │   ├── 2.ingress tls配置.md
│   │   ├── 3.ingress-http使用示例.md
│   │   ├── 4.ingress-https使用示例.md
│   │   ├── 5.hello-tls.md
│   │   ├── 6.ingress-https使用示例.md
│   │   ├── README.md
│   │   ├── nginx-ingress/
│   │   │   └── README.md
│   │   ├── traefik-ingress/
│   │   │   ├── 1.traefik反向代理Deamonset模式.md
│   │   │   ├── 2.traefik反向代理Deamonset模式TLS.md
│   │   │   └── README.md
│   │   └── 常用操作.md
│   ├── initContainers/
│   │   └── README.md
│   ├── job/
│   │   └── README.md
│   ├── k8s-monitor/
│   │   └── README.md
│   ├── kube-proxy/
│   │   └── README.md
│   ├── nfs/
│   │   └── README.md
│   └── pressure/
│       ├── README.md
│       ├── calico bgp网络需要物理路由和交换机支持吗.md
│       └── k8s集群更换网段方案.md
├── docs/
│   ├── Envoy的架构与基本术语.md
│   ├── Kubernetes学习笔记.md
│   ├── Kubernetes架构介绍.md
│   ├── Kubernetes集群环境准备.md
│   ├── app.md
│   ├── app2.md
│   ├── ca.md
│   ├── coredns.md
│   ├── dashboard.md
│   ├── dashboard_op.md
│   ├── delete.md
│   ├── docker-install.md
│   ├── etcd-install.md
│   ├── flannel.md
│   ├── k8s-error-resolution.md
│   ├── k8s_pv_local.md
│   ├── k8s重启pod.md
│   ├── master.md
│   ├── node.md
│   ├── operational.md
│   ├── 外部访问K8s中Pod的几种方式.md
│   └── 虚拟机环境准备.md
├── example/
│   ├── coredns/
│   │   └── coredns.yaml
│   └── nginx/
│       ├── nginx-daemonset.yaml
│       ├── nginx-deployment.yaml
│       ├── nginx-ingress.yaml
│       ├── nginx-pod.yaml
│       ├── nginx-rc.yaml
│       ├── nginx-rs.yaml
│       ├── nginx-service-nodeport.yaml
│       └── nginx-service.yaml
├── helm/
│   └── README.md
├── kubeadm/
│   ├── K8S-HA-V1.13.4-关闭防火墙版.md
│   ├── K8S-HA-V1.16.x-云环境-Calico.md
│   ├── K8S-V1.16.2-开启防火墙-Flannel.md
│   ├── Kubernetes 集群变更IP地址.md
│   ├── README.md
│   ├── k8S-HA-V1.15.3-Calico-开启防火墙版.md
│   ├── k8S-HA-V1.15.3-Flannel-开启防火墙版.md
│   ├── k8s清理.md
│   ├── kubeadm.yaml
│   ├── kubeadm初始化k8s集群延长证书过期时间.md
│   └── kubeadm无法下载镜像问题.md
├── manual/
│   ├── README.md
│   ├── v1.14/
│   │   └── README.md
│   └── v1.15.3/
│       └── README.md
├── mysql/
│   ├── README.md
│   └── kubernetes访问外部mysql服务.md
├── redis/
│   ├── K8s上Redis集群动态扩容.md
│   ├── K8s上运行Redis单实例.md
│   ├── K8s上运行Redis集群指南.md
│   └── README.md
├── rke/
│   ├── README.md
│   └── cluster.yml
└── tools/
    ├── Linux Kernel 升级.md
    ├── README.md
    ├── k8s域名解析coredns问题排查过程.md
    ├── kubernetes-node打标签.md
    ├── kubernetes-常用操作.md
    ├── kubernetes-批量删除Pods.md
    ├── kubernetes访问外部mysql服务.md
    └── ssh_copy.sh

================================================
FILE CONTENTS
================================================

================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
# 1. K8s Handbook
- [Kubernetes Architecture Introduction](docs/Kubernetes架构介绍.md)
- [Kubernetes Cluster Environment Preparation](docs/Kubernetes集群环境准备.md)
- [Docker Installation](docs/docker-install.md)
- [CA Certificate Creation](docs/ca.md)
- [etcd Cluster Deployment](docs/etcd-install.md)
- [Master Node Deployment](docs/master.md)
- [Node Deployment](docs/node.md)
- [Flannel Deployment](docs/flannel.md)
- [Application Creation](docs/app.md)
- [Troubleshooting Summary](docs/k8s-error-resolution.md)
- [Common Operations Manual](docs/operational.md)
- [Envoy Architecture and Basic Terminology](docs/Envoy的架构与基本术语.md)
- [K8s Study Notes](docs/Kubernetes学习笔记.md)
- [Restarting Pods in K8s](docs/k8s%E9%87%8D%E5%90%AFpod.md)
- [K8s Cleanup](docs/delete.md)
- [Ways to Access Pods in K8s from Outside the Cluster](docs/外部访问K8s中Pod的几种方式.md)
- [Application Testing](docs/app2.md)
- [PVC](docs/k8s_pv_local.md)
- [Dashboard Operations](docs/dashboard_op.md)


# User Manual
<table border="0">
    <tr>
        <td><strong>Manual Deployment</strong></td>
        <td><a href="docs/Kubernetes集群环境准备.md">1. Cluster Environment Preparation</a></td>
        <td><a href="docs/docker-install.md">2. Docker Installation</a></td>
        <td><a href="docs/ca.md">3. CA Certificate Creation</a></td>
        <td><a href="docs/etcd-install.md">4. etcd Cluster Deployment</a></td>
        <td><a href="docs/master.md">5. Master Node Deployment</a></td>
        <td><a href="docs/node.md">6. Node Deployment</a></td>
        <td><a href="docs/flannel.md">7. Flannel Deployment</a></td>
        <td><a href="docs/app.md">8. Application Creation</a></td>
    </tr>
    <tr>
        <td><strong>Essential Add-ons</strong></td>
        <td><a href="docs/coredns.md">1. CoreDNS Deployment</a></td>
        <td><a href="docs/dashboard.md">2. Dashboard Deployment</a></td>
        <td><a href="docs/heapster.md">3. Heapster Deployment</a></td>
        <td><a href="docs/ingress.md">4. Ingress Deployment</a></td>
        <td><a href="https://github.com/unixhot/devops-x">5. CI/CD</a></td>
        <td><a href="docs/helm.md">6. Helm Deployment</a></td>
    </tr>
</table>

# 2. Cleaning Up K8s Resources
```
# 1. Clean up Services
$ kubectl delete svc $(kubectl get svc -n mos-namespace|grep -v NAME|awk '{print $1}') -n mos-namespace
service "mysql-production" deleted
service "nginx-test" deleted
service "redis-cluster" deleted
service "redis-production" deleted

# 2. Clean up Deployments
$ kubectl delete deployment $(kubectl get deployment -n mos-namespace|grep -v NAME|awk '{print $1}') -n mos-namespace
deployment.extensions "centos7-app" deleted

# 3. Clean up ConfigMaps
$ kubectl delete cm $(kubectl get cm -n mos-namespace|grep -v NAME|awk '{print $1}') -n mos-namespace
```
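
If the goal is simply to empty a namespace, a quicker alternative (a generic sketch, not taken from this repo) is the `all` shortcut. Note that `all` covers Services and Deployments but not ConfigMaps:

```bash
# Delete everything in the "all" category (pods, services, deployments, replicasets, ...)
kubectl delete all --all -n mos-namespace

# ConfigMaps are not part of "all" and must be removed separately
kubectl delete cm --all -n mos-namespace
```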


https://www.xiaodianer.net/index.php/kubernetes/istio/41-istio-https-demo

https://mp.weixin.qq.com/s/jnVn6_cyRUILBQ0cBhBNyQ  Kubernetes v1.18.2 binary high-availability deployment


================================================
FILE: apps/README.md
================================================



================================================
FILE: apps/nginx/README.md
================================================



================================================
FILE: apps/ops/README.md
================================================



================================================
FILE: apps/wordpress/README.md
================================================



================================================
FILE: apps/wordpress/基于PV_PVC部署Wordpress 示例.md
================================================
# 1. PV (PersistentVolume)

A PersistentVolume (PV) is a piece of storage in an external storage system, created and maintained by an administrator. Like a Volume, a PV is persistent, with a lifecycle independent of any Pod.

1. PV and PVC have a one-to-one relationship: once a PV is claimed by a PVC, its status shows Bound and no other PVC can use it.

2. Once a PVC is bound to a PV it acts as a storage volume and can then be used by multiple Pods (whether a PVC supports access by multiple Pods depends on the accessMode it declares).

3. If a PVC cannot find a suitable PV, it stays in the Pending state.

4. A PV's reclaim policy options:

    Retain (the default): keep the generated data.
    Recycle: delete the generated data and reclaim the pv.
    Delete: when the pvc is unbound, the pv is deleted automatically.

# 2. PVC

A PersistentVolumeClaim (PVC) is a claim on a PV, usually created and maintained by ordinary users. When storage needs to be allocated to a Pod, the user creates a PVC stating the requested capacity, access mode (e.g. read-only), and so on, and Kubernetes finds and provides a PV that satisfies the claim.

With a PersistentVolumeClaim, users only tell Kubernetes what kind of storage they need, without caring where the space actually comes from or how it is accessed. Those low-level Storage Provider details are handled by the administrator; only the administrator should care about the details of creating PersistentVolumes.

## A PVC resource needs to specify:

1. accessModes: the access modes; possible values:

    ReadWriteOnce – the volume can be mounted as read-write by a single node:  RWO - ReadWriteOnce
    ReadOnlyMany – the volume can be mounted read-only by many nodes:          ROX - ReadOnlyMany
    ReadWriteMany – the volume can be mounted as read-write by many nodes:     RWX - ReadWriteMany

2. resources: the resource request (e.g. requesting 5GB means we expect the matching storage to offer at least 5GB).

3. selector: a label selector. Without labels, the best match is searched for among all PVs.

4. storageClassName: the name of the storage class.

5. volumeMode: the mode of the backing storage volume; can be used to restrict which kinds of PV may satisfy this claim.

6. volumeName: the volume name, pinning the claim to a specific backing PV (effectively a manual binding).


# 3. Differences Between the Two

1. A PV is cluster-scoped and cannot be defined inside a namespace.

2. A PVC is namespace-scoped.
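
To make the binding and reclaim-policy notions above concrete, here is a minimal, hypothetical PV/PVC pair (the names, NFS server address, and sizes are made up for illustration):

```bash
cat > pv-demo.yaml <<\EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo              # PVs are cluster-scoped: no namespace field
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce          # RWO: read-write on a single node
  persistentVolumeReclaimPolicy: Retain   # keep the data after the claim is released
  nfs:
    path: /nfs/data/demo     # hypothetical NFS export
    server: 192.168.56.11
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
  namespace: default         # PVCs are namespace-scoped
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi           # must be satisfiable by the PV's capacity
EOF

kubectl apply -f pv-demo.yaml
kubectl get pv,pvc           # both should show STATUS Bound once matched
```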

References:

https://blog.csdn.net/weixin_42973226/article/details/86501693  Deploying WordPress on rook-ceph

https://www.cnblogs.com/benjamin77/p/9944268.html  Persistent storage in k8s: PV & PVC


================================================
FILE: apps/wordpress/部署Wordpress 示例.md
================================================
# 1. Overview

The WordPress application involves two images: wordpress and mysql. wordpress is the core application; mysql stores the data. Let's look at how to deploy this WordPress application. The service consists of two Pod resources, and we prefer Deployments to manage our Pods.

# 2. Create a MySQL Deployment Object

- 1. Create the namespace, and expose the service to the rest of the cluster with a Service

```bash
# Clean up any previous wordpress-db resources
kubectl delete -f wordpress-db.yaml

# Write the mysql Deployment manifest
cat > wordpress-db.yaml <<\EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: blog

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mysql-deploy
  namespace: blog
  labels:
    app: mysql
spec:
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3306
          name: dbport
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootPassW0rd
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: wordpress
        - name: MYSQL_PASSWORD
          value: wordpress
        volumeMounts:
        - name: db
          mountPath: /var/lib/mysql
      volumes:
      - name: db
        hostPath:
          path: /var/lib/mysql

---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  namespace: blog
spec:
  selector:
    app: mysql
  ports:
  - name: mysqlport
    protocol: TCP
    port: 3306
    targetPort: dbport
EOF

# Create the resources and service
kubectl create -f wordpress-db.yaml
```

- 2. Inspect the created Service

```bash
$ kubectl describe svc wordpress-mysql -n blog
Name:              wordpress-mysql
Namespace:         blog
Labels:            <none>
Annotations:       <none>
Selector:          app=mysql
Type:              ClusterIP
IP:                10.104.88.234
Port:              mysqlport  3306/TCP
TargetPort:        dbport/TCP
Endpoints:         10.244.1.115:3306
Session Affinity:  None
Events:            <none>
```

- 3. Verify that the created mysql service is usable

```bash
# Run a throwaway test container from the command line (alpine, or CentOS 7 in the blog namespace)
$ kubectl run mysql-test --rm -it --image=alpine /bin/sh
kubectl run centos7-app --rm -it --image=centos:7.2.1511 -n blog

# Exec into the container
kubectl exec `kubectl get pods -n blog|grep centos7-app|awk '{print $1}'` -it /bin/bash -n blog

# Install the mysql client (inside the CentOS container)
yum install vim net-tools telnet nc -y
yum install -y mariadb.x86_64 mariadb-libs.x86_64

# Test whether the mysql service port is open
nc -zv wordpress-mysql 3306

# Connection tests
mysql -h'wordpress-mysql' -u'root' -p'rootPassW0rd'  # via the service DNS name

mysql -h'10.104.88.234' -u'root' -p'rootPassW0rd'   # via the cluster IP (changes frequently)

mysql -h'10.244.1.115' -u'root' -p'rootPassW0rd'   # via the endpoint IP (changes frequently)
```

# 3. Create the WordPress Deployment Object

```bash
# Clean up any previous wordpress resources
kubectl delete -f wordpress.yaml

# Write the wordpress Deployment manifest
cat > wordpress.yaml <<\EOF
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wordpress-deploy
  namespace: blog
  labels:
    app: wordpress
spec:
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: wdport
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql:3306
        - name: WORDPRESS_DB_USER
          value: wordpress
        - name: WORDPRESS_DB_PASSWORD
          value: wordpress

---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
  namespace: blog
spec:
  type: NodePort
  selector:
    app: wordpress
  ports:
  - name: wordpressport
    protocol: TCP
    port: 80
    targetPort: wdport
    nodePort: 32380     # added line: pin a fixed NodePort
EOF

# Create the resources and service
kubectl create -f wordpress.yaml

# Check the created Pods
kubectl get pods -n blog

# Check the created Services
kubectl get svc -n blog

NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
wordpress-mysql     ClusterIP   10.104.88.234    <none>        3306/TCP       3m36s
wordpress-service   NodePort    10.111.212.108   <none>        80:32380/TCP   12s
```

# 4. Access Test

```bash
# The wordpress Service exposes NodePort 32380, so the application is reachable via any node's IP plus port 32380. Open it in a browser: if WordPress redirects to its setup page, the installation is fine; otherwise check the Pod logs to locate the problem:

http://192.168.56.11:32380/
```

![wordpress](https://github.com/Lancger/opsfull/blob/master/images/wordpress-01.png)


# 5. Improving Stability (Advanced)

`1. When using Kubernetes, have you ever hit the vicious cycle where a Pod starts, dies moments later, and then restarts over and over? Have you wondered how Kubernetes detects whether a Pod is still alive? The container may have started, but how does Kubernetes know the process inside is ready to serve traffic? The official article [Configure Liveness and Readiness Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) explains it.`

`2. The kubelet uses a liveness probe to decide when to restart a container. For example, when an application is running but cannot make further progress, the liveness probe catches the deadlock and restarts the container, so the application keeps running despite the bug (whose code doesn't have a few?).`

`3. The kubelet uses a readiness probe to decide whether a container is ready to accept traffic. A Pod counts as ready only when all of its containers are ready. This signal controls which Pods serve as backends for a Service: Pods that are not ready are removed from the Service's load balancer.`

Now that the WordPress application is deployed, are we done? What if traffic suddenly spikes? What if we need to roll out a new image? What if the mysql service dies?

To keep the site serving reliably we can do more. What can we do to improve its stability?

## First: add health checks

As noted above, liveness and readiness probes are a very important way to improve application stability:

```bash
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10

# Add the two probes above: readiness is checked every 10s, liveness every 3s
```

## Second: add an HPA

Let the application handle traffic peaks automatically:

```bash
1. Create the HPA resource (the Pod must declare resource requests, otherwise the HPA will not work)

$ kubectl autoscale deployment wordpress-deploy --cpu-percent=10 --min=1 --max=10 -n blog

deployment "wordpress-deploy" autoscaled

# kubectl autoscale creates an HPA object for our wordpress-deploy with a minimum of 1 and a maximum of 10 Pod replicas; the HPA grows or shrinks the Pod count around the target CPU utilization (10%). Ideally we also declare resource limits and requests for the Pod:

resources:
  limits:
    cpu: 200m
    memory: 200Mi
  requests:
    cpu: 100m
    memory: 100Mi
    
# Inspect the HPA
$ kubectl get HorizontalPodAutoscaler -A 
NAMESPACE   NAME               REFERENCE                     TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
blog        wordpress-deploy   Deployment/wordpress-deploy   <unknown>/10%   1         10        1          4m19s

2. After updating the Deployment, we can test whether the HPA takes effect:
$ kubectl run -i --tty load-generator --image=busybox /bin/sh

If you don't see a command prompt, try pressing enter.

while true; do wget -q -O- http://wordpress:80; done

3. Watch whether the Deployment's replica count changes
$ kubectl get deployment wordpress-deploy

NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
wordpress-deploy   3         3         3            3           4d

4. Delete the HPA
$ kubectl delete HorizontalPodAutoscaler  wordpress-deploy -n blog

horizontalpodautoscaler.autoscaling "wordpress-deploy" deleted
```

## Third: add a rolling-update strategy

This ensures the service is not interrupted while we update the application:

```bash
replicas: 2
revisionHistoryLimit: 10
minReadySeconds: 5
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 1
```
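
With this strategy in place, a rollout can be watched and rolled back with the standard kubectl rollout commands (a generic sketch; the image tag below is hypothetical):

```bash
# Trigger an update by changing the image (hypothetical tag)
kubectl set image deployment/wordpress-deploy wordpress=wordpress:5.4 -n blog

# Watch the rolling update proceed
kubectl rollout status deployment/wordpress-deploy -n blog

# Inspect the revision history (kept thanks to revisionHistoryLimit)
kubectl rollout history deployment/wordpress-deploy -n blog

# Roll back to the previous revision if the new image misbehaves
kubectl rollout undo deployment/wordpress-deploy -n blog
```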

## Fourth: use the Service name instead of a host IP

`If the mysql service is recreated, its clusterIP will very likely change, which would break the WORDPRESS_DB_HOST environment variable above and cut off database access. Instead, use the Service name directly as the host: even if the clusterIP changes, nothing is affected. Service discovery is covered in depth in a later chapter.`

```bash
env:
- name: WORDPRESS_DB_HOST
  value: wordpress-mysql:3306
```

## Fifth: container start order

`When the wordpress service is deployed, has the mysql service already started? If not, wordpress cannot connect to the database. So before starting the wordpress application, check the mysql service and begin the deployment only once it is healthy. That is exactly what an initContainer is for.`

```bash
initContainers:
- name: init-db
  image: busybox
  command: ['sh', '-c', 'until nslookup wordpress-mysql; do echo waiting for mysql service; sleep 2; done;']
  
# The initContainer exits only once the wordpress-mysql service resolves; the rest of the deployment proceeds after it completes.
```

# 6. Merging Everything into One Manifest

```bash
kubectl delete -f wordpress-all.yaml

cat > wordpress-all.yaml <<\EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: blog

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mysql-deploy
  namespace: blog
  labels:
    app: mysql
spec:
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
          name: dbport
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootPassW0rd
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: wordpress
        - name: MYSQL_PASSWORD
          value: wordpress
        volumeMounts:
        - name: db
          mountPath: /var/lib/mysql
      volumes:
      - name: db
        hostPath:
          path: /var/lib/mysql

---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  namespace: blog
spec:
  selector:
    app: mysql
  ports:
  - name: mysqlport
    protocol: TCP
    port: 3306
    targetPort: dbport

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wordpress-deploy
  namespace: blog
  labels:
    app: wordpress
spec:
  revisionHistoryLimit: 10
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      initContainers:
      - name: init-db
        image: busybox
        command: ['sh', '-c', 'until nslookup wordpress-mysql; do echo waiting for mysql service; sleep 2; done;']
      containers:
      - name: wordpress
        image: wordpress
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: wdport
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql:3306
        - name: WORDPRESS_DB_USER
          value: wordpress
        - name: WORDPRESS_DB_PASSWORD
          value: wordpress
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi

---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: blog
spec:
  selector:
    app: wordpress
  type: NodePort
  ports:
  - name: wordpressport
    protocol: TCP
    port: 80
    nodePort: 32380
    targetPort: wdport
EOF

kubectl apply -f wordpress-all.yaml

watch kubectl get pods -n blog

# Check the mysql service
$ kubectl run mysql-test --rm -it --image=alpine /bin/sh -n blog

$ nslookup wordpress-mysql
Name:      wordpress-mysql
Address 1: 10.99.230.27 wordpress-mysql.blog.svc.cluster.local

$ ping wordpress-mysql
PING wordpress-mysql (10.99.230.27): 56 data bytes
64 bytes from 10.99.230.27: seq=0 ttl=64 time=0.124 ms
64 bytes from 10.99.230.27: seq=0 ttl=64 time=0.124 ms

```

References:

https://www.qikqiak.com/k8s-book/docs/31.%E9%83%A8%E7%BD%B2%20Wordpress%20%E7%A4%BA%E4%BE%8B.html   

https://blog.csdn.net/maoreyou/article/details/80050623  Kubernetes journey, part 3: resolving service dependencies


================================================
FILE: components/README.md
================================================

# ingress

# helm

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports  ports that must be open
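
As a sketch of what opening those ports looks like on a CentOS control-plane node with firewalld (the port list follows the kubeadm document linked above; adjust for your CNI and k8s version):

```bash
firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
firewall-cmd --reload
```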


================================================
FILE: components/cronjob/README.md
================================================
References:

https://www.jianshu.com/p/62b4f0a3134b   Kubernetes objects: CronJob
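
Since this README is otherwise empty, here is a minimal CronJob sketch of the object the reference above describes (the name and schedule are arbitrary; batch/v1beta1 matches the k8s versions used in this repo):

```bash
cat > hello-cronjob.yaml <<\EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"          # run once a minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ['sh', '-c', 'date; echo hello from cronjob']
          restartPolicy: OnFailure
EOF

kubectl apply -f hello-cronjob.yaml
kubectl get cronjob hello
```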


================================================
FILE: components/dashboard/Kubernetes-Dashboard v2.0.0.md
================================================
```bash
# Install
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

# Uninstall
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

# Grant an admin account
kubectl delete -f admin.yaml

cat > admin.yaml << \EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
EOF

kubectl apply -f admin.yaml

kubectl describe secret/$(kubectl get secret -n kube-system |grep admin|awk '{print $1}') -n kube-system
```
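
One common way to reach the Dashboard once the token is printed (a standard approach, not spelled out in this file) is through kubectl proxy:

```bash
# Start a local proxy to the API server
kubectl proxy

# Dashboard v2 is then reachable at (namespace as created by recommended.yaml above):
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```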
References:

http://www.mydlq.club/article/28/ 


================================================
FILE: components/dashboard/README.md
================================================
# 1. Installing Dashboard v1.10.1

## 1. Exposing access via NodePort

1. Download the corresponding yaml file
```
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

vim kubernetes-dashboard.yaml

# 1. Change the image name
......
    spec:
      containers:
      - name: kubernetes-dashboard
        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 # swap this for the Aliyun mirror below
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
......
```

2. Change the Service to type NodePort
```
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort   # added line: expose via NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32370  # added line: pin a fixed NodePort
  selector:
    k8s-app: kubernetes-dashboard
```

3. The final dashboard manifest

```
cat > kubernetes-dashboard.yaml << \EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort  # added line: expose via NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32370  # added line: pin a fixed NodePort
  selector:
    k8s-app: kubernetes-dashboard
EOF

kubectl apply -f kubernetes-dashboard.yaml
```

4. Then create a user with full cluster permissions to log in to the Dashboard (admin.yaml):
```
cat > admin.yaml << \EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
EOF

kubectl apply -f admin.yaml

kubectl delete -f admin.yaml

# Get the token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin | awk '{print $1}')
```
5. Access test: `https://nodeip:32370`


## 2. Accessing via Ingress

```bash
# Clean up the NodePort flavor of the dashboard
kubectl delete -f kubernetes-dashboard.yaml

rm -f kubernetes-dashboard.yaml

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

kubectl apply -n kube-system -f kubernetes-dashboard.yaml
```

1. Create and install the TLS access credential

Access over https requires a certificate and key, which Kubernetes can provide through a TLS secret.

```bash
# 1. Create the tls secret

# This creates a self-signed certificate for personal use. For a public service, apply for a real certificate from a CA (for a fee), or get a free one from Let's Encrypt (introduced later); Cloudflare can also generate certificates and terminate https, but the domain must be moved there and the advanced features are paid.
# https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/tls/README.md

kubectl delete secret k8s-dashboard-secret -n kube-system

rm -rf /etc/certs/ssl/

mkdir -p /etc/certs/ssl/default
cd /etc/certs/ssl/default/
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_default.key -out tls_default.crt -subj "/CN=dashboard.devops.com"

# This produces two files, tls_default.key and tls_default.crt; you can rename them or place them in a specific directory (if this is for a public server, make sure nobody else can access them). dashboard.devops.com is my server's name; change it to your own.
```
2. Install the tls secret

```bash
# Next, turn these two files into a Kubernetes secret. I name it k8s-dashboard-secret, which is referenced later in the Ingress configuration; if you change the name, update the Ingress yaml to match.

cd /etc/certs/ssl/default/

kubectl -n kube-system delete secret k8s-dashboard-secret

kubectl -n kube-system create secret tls k8s-dashboard-secret --key=tls_default.key --cert=tls_default.crt

# Inspect the secret
kubectl get secret k8s-dashboard-secret -n kube-system

kubectl describe secret k8s-dashboard-secret -n kube-system

# Notes:
    # The -n flag selects the namespace the credential is installed into.
    # For security, all Ingress resources (credential, routes, service) must live in the same namespace.
```

3. Configure the Ingress route

```bash
# Save the content below as dashboard-ingress.yaml. The / path routes to the Kubernetes dashboard service; /web is only a test placeholder, and without nginx installed it returns a service-not-found message.

cat >dashboard-ingress.yaml<<\EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  tls:
   - secretName: traefik-cert   # note: must match the certificate configured in traefik.toml
   #- secretName: k8s-dashboard-secret
  rules:
   - host: dashboard.devops.com
     http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
EOF

kubectl apply -n kube-system -f dashboard-ingress.yaml

#Note
    # The annotations section above is required to support https and https services. Different Ingress Controllers (and versions) implement this differently, so install and configure the matching implementation.
    
    # See issue: https://github.com/kubernetes/ingress-nginx/issues/2460
```
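
To check the route without touching DNS, the host can be resolved manually with curl (a generic test; it assumes the ingress controller is reachable on node 192.168.56.11):

```bash
# -k skips verification of the self-signed certificate;
# --resolve pins dashboard.devops.com to the ingress node's address
curl -k --resolve dashboard.devops.com:443:192.168.56.11 https://dashboard.devops.com/
```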


References:

https://my.oschina.net/u/2306127/blog/1930169?from=timeline&isappinstalled=0  Exposing the Kubernetes dashboard over HTTPS via Ingress


================================================
FILE: components/external-storage/0、nfs服务端搭建.md
================================================
## 1. NFS server
```bash
# Install nfs on all nodes
yum install -y nfs-utils rpcbind

# Create the nfs directory
mkdir -p /nfs/data/

# Adjust permissions
chmod -R 666 /nfs/data

# Edit the exports file
vim /etc/exports
/nfs/data 192.168.56.0/24(rw,async,no_root_squash)
# Setting /nfs/data *(rw,async,no_root_squash) instead makes the export valid for every client IP

Common options:
   ro: clients mount read-only (the default);
   rw: read-write access;
   sync: write data to memory and disk simultaneously;
   async: asynchronous; keep data in memory first, then flush to disk;
   secure: require the request's source port to be below 1024
User mapping:
   root_squash: map an NFS client's root user to the NFS server's anonymous user;
   no_root_squash: map an NFS client's root user to the NFS server's root user;
   all_squash: map all users to the server's anonymous user;
   anonuid=UID: map client users to the specified uid;
   anongid=GID: map client users to the specified gid

# Apply the configuration
exportfs -r

# Verify the exports
exportfs

# Start the rpcbind and nfs services
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs

# Check RPC service registration (note: /etc/hosts.deny must allow the services below)
$ rpcinfo -p localhost      
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100024    1   udp  34666  status
    100024    1   tcp   7951  status
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  31088  nlockmgr
    100021    3   udp  31088  nlockmgr
    100021    4   udp  31088  nlockmgr
    100021    1   tcp  27131  nlockmgr
    100021    3   tcp  27131  nlockmgr
    100021    4   tcp  27131  nlockmgr

# Allow nfsd, rpcbind and mountd in /etc/hosts.allow (required on both the nfs server and clients)
chattr -i /etc/hosts.allow
echo "nfsd:all" >>/etc/hosts.allow
echo "rpcbind:all" >>/etc/hosts.allow
echo "mountd:all" >>/etc/hosts.allow
chattr +i /etc/hosts.allow

# showmount test
showmount -e 192.168.56.11

# tcpdmatch test
$ tcpdmatch rpcbind 192.168.56.11
client:   address  192.168.56.11
server:   process  rpcbind
access:   granted
```

## 2. NFS client
```bash
yum install -y nfs-utils rpcbind

# Create the mount point on the client, then mount
mkdir -p /mnt/nfs   # (note: after a successful mount, existing data under /mnt is hidden and unreachable)

mount -t nfs -o nolock,vers=4 192.168.56.11:/nfs/data /mnt/nfs
```

## 3. Mounting nfs via fstab
```bash
# Alternatively, write the mount directly into /etc/fstab
vim /etc/fstab
192.168.56.11:/nfs/data /mnt/nfs/ nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0

# Mount everything in fstab
mount -a

# Unmount
umount /mnt/nfs

# NFS server-side statistics
nfsstat -s

# NFS client-side statistics
nfsstat -c
```

References:

http://www.mydlq.club/article/3/  Building an NFS server on CentOS 7

https://blog.rot13.org/2012/05/rpcbind-is-new-portmap-or-how-to-make-nfs-secure.html   

https://yq.aliyun.com/articles/694065

https://www.crifan.com/linux_fstab_and_mount_nfs_syntax_and_parameter_meaning/  fstab syntax and mount-NFS parameter meanings on Linux


================================================
FILE: components/external-storage/1、k8s的pv和pvc简述.md
================================================
# 1. PV (PersistentVolume)

A PersistentVolume (PV) is a piece of storage in an external storage system, created and maintained by an administrator. Like a Volume, a PV is persistent, with a lifecycle independent of any Pod.

1. PV and PVC have a one-to-one relationship: once a PV is claimed by a PVC, its status shows Bound and no other PVC can use it.

2. Once a PVC is bound to a PV it acts as a storage volume and can then be used by multiple Pods (whether a PVC supports access by multiple Pods depends on the accessMode it declares).

3. If a PVC cannot find a suitable PV, it stays in the Pending state.

4. A PV's reclaim policy options:

    Retain (the default): keep the generated data.
    Recycle: delete the generated data and reclaim the pv.
    Delete: when the pvc is unbound, the pv is deleted automatically.

# 2. PVC

A PersistentVolumeClaim (PVC) is a claim on a PV, usually created and maintained by ordinary users. When storage needs to be allocated to a Pod, the user creates a PVC stating the requested capacity, access mode (e.g. read-only), and so on, and Kubernetes finds and provides a PV that satisfies the claim.

With a PersistentVolumeClaim, users only tell Kubernetes what kind of storage they need, without caring where the space actually comes from or how it is accessed. Those low-level Storage Provider details are handled by the administrator; only the administrator should care about the details of creating PersistentVolumes.

## A PVC resource needs to specify:

1. accessModes: the access modes; possible values:

    ReadWriteOnce – the volume can be mounted as read-write by a single node:  RWO - ReadWriteOnce
    ReadOnlyMany – the volume can be mounted read-only by many nodes:          ROX - ReadOnlyMany
    ReadWriteMany – the volume can be mounted as read-write by many nodes:     RWX - ReadWriteMany

2. resources: the resource request (e.g. requesting 5GB means we expect the matching storage to offer at least 5GB).

3. selector: a label selector. Without labels, the best match is searched for among all PVs.

4. storageClassName: the name of the storage class.

5. volumeMode: the mode of the backing storage volume; can be used to restrict which kinds of PV may satisfy this claim.

6. volumeName: the volume name, pinning the claim to a specific backing PV (effectively a manual binding).


# 3. Differences Between the Two

1. A PV is cluster-scoped and cannot be defined inside a namespace.

2. A PVC is namespace-scoped.
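
A minimal PVC illustrating the fields listed above (the names and size are illustrative; volumeName would pin the claim to one specific, pre-existing PV):

```bash
cat > pvc-example.yaml <<\EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
  namespace: default            # PVCs are namespace-scoped
spec:
  accessModes:
    - ReadWriteMany             # RWX: read-write from many nodes
  resources:
    requests:
      storage: 5Gi              # the matched PV must offer at least 5Gi
  storageClassName: nfs         # only PVs of class "nfs" are considered
  selector:
    matchLabels:
      pv: nfs-pv001             # label selector narrowing the candidate PVs
  # volumeName: nfs-pv001       # uncomment to bind to one specific PV directly
EOF

kubectl apply -f pvc-example.yaml
```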

References:

https://blog.csdn.net/weixin_42973226/article/details/86501693  Deploying WordPress on rook-ceph

https://www.cnblogs.com/benjamin77/p/9944268.html  Persistent storage in k8s: PV & PVC


================================================
FILE: components/external-storage/2、静态配置PV和PVC.md
================================================
Table of Contents
=================

   * [1. Environment](#1-environment)
   * [2. PV operations](#2-pv-operations)
      * [01. Create the PV volume directories](#01-create-the-pv-volume-directories)
      * [02. PV configuration parameters](#02-pv-configuration-parameters)
      * [03. Create the PV resources](#03-create-the-pv-resources)
      * [04. Inspect the PVs](#04-inspect-the-pvs)
   * [3. PVC operations](#3-pvc-operations)
      * [01. Create the PVC resources](#01-create-the-pvc-resources)
      * [02. Inspect the PVCs/PVs](#02-inspect-the-pvcspvs)
   * [4. Using the storage in a Pod](#4-using-the-storage-in-a-pod)
   * [5. Verification](#5-verification)
      * [01. Verify the PV is usable](#01-verify-the-pv-is-usable)
      * [02. Check the mounts inside the Pods](#02-check-the-mounts-inside-the-pods)
      * [03. Delete the Pods](#03-delete-the-pods)
      * [04. Then delete a PVC](#04-then-delete-a-pvc)
      * [05. Then delete the PV](#05-then-delete-the-pv)

# 1. Environment

As preparation, we have set up an NFS server on a node in the same LAN as the k8s cluster, exporting the directory /data/nfs. PVs are cluster-wide; a PVC can specify a namespace.

# 2. PV operations

## 01. Create the PV volume directories

```bash
# Create the directories backing the pv volumes
mkdir -p /data/nfs/pv001
mkdir -p /data/nfs/pv002

# Configure the exports
$ vim /etc/exports
/data/nfs *(rw,no_root_squash,sync,insecure)
/data/nfs/pv001 *(rw,no_root_squash,sync,insecure)
/data/nfs/pv002 *(rw,no_root_squash,sync,insecure)

# Apply the configuration
exportfs -r

# Restart the rpcbind and nfs services
systemctl restart rpcbind && systemctl restart nfs

# Check the export list
$ showmount -e localhost
Export list for localhost:
/data/nfs/pv002 *
/data/nfs/pv001 *
/data/nfs       *
```

## 02. PV configuration parameters

```bash
Configuration notes:

① capacity sets the PV's capacity, here 20Gi.

② accessModes sets the access mode to ReadWriteOnce; the supported modes are:
    ReadWriteOnce – the PV can be mounted read-write on a single node.
    ReadOnlyMany – the PV can be mounted read-only on multiple nodes.
    ReadWriteMany – the PV can be mounted read-write on multiple nodes.
    
③ persistentVolumeReclaimPolicy sets the reclaim policy to Recycle; the supported policies are:
    Retain – keep everything; K8s does nothing, the administrator handles the PV's data manually and then deletes the PV manually
    Recycle – K8s deletes the data in the PV and flips its status back to Available so a new PVC can bind it
    Delete – K8s automatically deletes the PV together with its data
    
④ storageClassName sets the PV's class to nfs. This effectively categorizes the PV; a PVC can request a PV of a matching class.

⑤ The nfs block points the PV at its directory on the NFS server.

In general, the PV/PVC lifecycle has 5 phases:
    Provisioning: creating the PV, either directly (static) or dynamically via a StorageClass
    Binding: assigning a PV to a PVC
    Using: a Pod consumes the volume through the PVC
    Releasing: the Pod releases the volume and the PVC is deleted
    Reclaiming: the PV is reclaimed; it can be kept for reuse or deleted from the backing storage

Across these phases, a volume has 4 possible states:
    Available: ready for use
    Bound: already assigned to a PVC
    Released: the PVC was unbound but the reclaim policy has not run yet
    Failed: an error occurred

A PV that becomes Released is reclaimed according to its defined policy. There are three reclaim policies:
    Retain: keep everything; K8s waits for the user to handle the PV's data, after which the PV is deleted manually
    Delete: K8s automatically deletes the PV and its data
    Recycle: K8s deletes the data in the PV and flips it back to Available for a new PVC to bind
```
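
Related to the policies above: the reclaim policy of an existing PV can also be changed in place, e.g. to protect data before deleting a claim (a generic kubectl command, using the PV name from the next section):

```bash
# Switch nfs-pv001 from Recycle to Retain so releasing its PVC keeps the data
kubectl patch pv nfs-pv001 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# Confirm the change in the RECLAIM POLICY column
kubectl get pv nfs-pv001
```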

## 03. Create the PV resources

1. nfs-pv001.yaml

```bash
# Clean up the pv resource
kubectl delete -f nfs-pv001.yaml

# Write the pv manifest
cat > nfs-pv001.yaml <<\EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv001
  labels:
    pv: nfs-pv001
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /data/nfs/pv001
    server: 192.168.56.11
EOF

# Apply the pv to the cluster
kubectl apply -f nfs-pv001.yaml
```

2. nfs-pv002.yaml

```bash
# Clean up the pv resource
kubectl delete -f nfs-pv002.yaml

# Write the pv manifest
cat > nfs-pv002.yaml <<\EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv002
  labels:
    pv: nfs-pv002
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /data/nfs/pv002
    server: 192.168.56.11
EOF

# Apply the pv to the cluster
kubectl apply -f nfs-pv002.yaml
```

## 04. Inspect the PVs

```bash
# List the pvs
$ kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
nfs-pv001     20Gi       RWO            Recycle          Available           nfs                      68s
nfs-pv002     30Gi       RWO            Recycle          Available           nfs                      33s

# STATUS Available means the pv is ready and can be claimed by a PVC.
```

# 3. PVC operations

## 01. Create the PVC resources

Next, create two PVCs named nfs-pvc001 and nfs-pvc002; the nfs-pvc001.yaml manifest is as follows:

1. nfs-pvc001.yaml

```bash
# Clean up the pvc resource
kubectl delete -f nfs-pvc001.yaml

# Write the pvc manifest
cat > nfs-pvc001.yaml <<\EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc001
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv001
EOF

# Apply the pvc to the cluster
kubectl apply -f nfs-pvc001.yaml
```

2. nfs-pvc002.yaml

```bash
# Clean up the pvc resource
kubectl delete -f nfs-pvc002.yaml

# Write the pvc manifest
cat > nfs-pvc002.yaml <<\EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc002
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv002
EOF

# Apply the pvc to the cluster
kubectl apply -f nfs-pvc002.yaml
```

## 02. Inspect the PVCs/PVs

```bash
$ kubectl get pvc --show-labels
NAME            STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc001      Bound    nfs-pv001       20Gi       RWO            nfs            18s
nfs-pvc002      Bound    nfs-pv002       30Gi       RWO            nfs            7s

$ kubectl get pv --show-labels
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
nfs-pv001     20Gi       RWO            Recycle          Bound    default/nfs-pvc001   nfs                     17m
nfs-pv002     30Gi       RWO            Recycle          Bound    default/nfs-pvc002   nfs                     17m

# The kubectl get pvc and kubectl get pv output shows nfs-pvc001 and nfs-pvc002 bound to nfs-pv001 and nfs-pv002 respectively; the claims succeeded. Note that binding a pvc to a specific pv is done via labels; without a selector, the pvc binds to an arbitrary matching pv.
```

# 4. Using the storage in a Pod

`As with ordinary Volumes, the volumes section uses persistentVolumeClaim to consume the volumes claimed by nfs-pvc001 and nfs-pvc002.`

1. nfs-pod001.yaml 

```bash
# Clean up the pod resource
kubectl delete -f nfs-pod001.yaml

# Write the pod manifest
cat > nfs-pod001.yaml <<\EOF
kind: Pod
apiVersion: v1
metadata:
  name: nfs-pod001
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: nfs-pv001
  volumes:
    - name: nfs-pv001
      persistentVolumeClaim:
        claimName: nfs-pvc001
EOF

# Create the pod resource
kubectl apply -f nfs-pod001.yaml

```

2. nfs-pod002.yaml

```bash
# Clean up the pod resource
kubectl delete -f nfs-pod002.yaml

# Write the pod manifest
cat > nfs-pod002.yaml <<\EOF
kind: Pod
apiVersion: v1
metadata:
  name: nfs-pod002
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: nfs-pv002
  volumes:
    - name: nfs-pv002
      persistentVolumeClaim:
        claimName: nfs-pvc002
EOF

# Create the pod resource
kubectl apply -f nfs-pod002.yaml
```

# 5. Verification

## 01. Verify the PV is usable

```bash
# Create files from inside the pods
kubectl exec nfs-pod001 touch /var/www/html/index001.html
kubectl exec nfs-pod002 touch /var/www/html/index002.html

# Log in to the nfs server and check that the files were created
$ ls /data/nfs/pv001/
index001.html

$ ls /data/nfs/pv002/
index002.html
```

## 02. Check the mounts inside the Pods

```bash
# Verify the mount in pod001
$ kubectl exec -it nfs-pod001 /bin/bash
$ root@nfs-pod001:/# df -h
Filesystem                   Size  Used Avail Use% Mounted on
overlay                      711G   85G  627G  12% /
tmpfs                         64M     0   64M   0% /dev
tmpfs                         16G     0   16G   0% /sys/fs/cgroup
/dev/sda3                    711G   85G  627G  12% /etc/hosts
shm                           64M     0   64M   0% /dev/shm
192.168.56.11:/data/nfs/pv001  932G  620M  931G   1% /var/www/html

# Verify the mount in pod002
$ kubectl exec -it nfs-pod002 /bin/bash
$ root@nfs-pod002:/# df -h
Filesystem                   Size  Used Avail Use% Mounted on
overlay                      711G   85G  627G  12% /
tmpfs                         64M     0   64M   0% /dev
tmpfs                         16G     0   16G   0% /sys/fs/cgroup
/dev/sda3                    711G   85G  627G  12% /etc/hosts
shm                           64M     0   64M   0% /dev/shm
192.168.56.11:/data/nfs/pv002  932G  620M  931G   1% /var/www/html
```

## 03. Delete the Pods

The pv and pvc are not deleted, and the data on the nfs share is kept

```bash
$ kubectl delete -f nfs-pod001.yaml 
pod "nfs-pod001" deleted

$ kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS    REASON   AGE
nfs-pv001       20Gi       RWO            Recycle          Bound    default/nfs-pvc001     nfs                      13m
nfs-pv002       30Gi       RWO            Recycle          Bound    default/nfs-pvc002     nfs                      13m

$ kubectl get pvc
NAME            STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS    AGE
nfs-pvc001      Bound    nfs-pv001       20Gi       RWO            nfs             13m
nfs-pvc002      Bound    nfs-pv002       30Gi       RWO            nfs             13m
```
## 04. Then delete a PVC

The pv is released and returns to the Available state, and (because the policy is Recycle) the data on the nfs share is deleted.

```bash
$ kubectl delete -f nfs-pvc001.yaml 
persistentvolumeclaim "nfs-pvc001" deleted

$ kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                   STORAGECLASS    REASON   AGE
nfs-pv001       20Gi       RWO            Recycle          Available                           nfs                      18m
nfs-pv002       30Gi       RWO            Recycle          Bound       default/nfs-pvc002      nfs                      18m

$ ls /data/nfs/pv001/  # the file is gone
```

## 05. Then delete the PV

```bash
$ kubectl delete -f nfs-pv001.yaml
persistentvolume "nfs-pv001" deleted
```

References:

https://blog.csdn.net/networken/article/details/86697018  Deploying NFS persistent storage on Kubernetes


================================================
FILE: components/external-storage/3、动态申请PV卷.md
================================================
Table of Contents
=================

   * [Deploying NFS Provisioner in Kubernetes for dynamic volume provisioning](#deploying-nfs-provisioner-in-kubernetes-for-dynamic-volume-provisioning)
      * [1. What is NFS Provisioner](#1-what-is-nfs-provisioner)
      * [2. How the external NFS drivers work](#2-how-the-external-nfs-drivers-work)
         * [1. nfs-client](#1-nfs-client)
         * [2. nfs](#2-nfs)
      * [3. Deploying the service](#3-deploying-the-service)
         * [1. Configure RBAC](#1-configure-rbac)
         * [2. Deploy nfs-client-provisioner](#2-deploy-nfs-client-provisioner)
         * [3. Deploy the NFS Provisioner](#3-deploy-the-nfs-provisioner)
         * [4. Create the StorageClass](#4-create-the-storageclass)
      * [4. Creating a PVC](#4-creating-a-pvc)
         * [01. Create a new namespace, then create the pvc resource](#01-create-a-new-namespace-then-create-the-pvc-resource)
      * [5. Creating a test Pod](#5-creating-a-test-pod)
         * [01. Check on the NFS server that the corresponding file was created](#01-check-on-the-nfs-server-that-the-corresponding-file-was-created)
         
# Kubernetes 中部署 NFS Provisioner 为 NFS 提供动态分配卷

## 一、NFS Provisioner 简介

NFS Provisioner 是一个自动配置卷程序,它使用现有的和已配置的 NFS 服务器来支持通过持久卷声明动态配置 Kubernetes 持久卷。

- 持久卷被配置为:namespace−{pvcName}-${pvName}。

## 二、External NFS驱动的工作原理

K8S的外部NFS驱动,可以按照其工作方式(是作为NFS server还是NFS client)分为两类:

### 1、nfs-client

- 也就是我们接下来演示的这一类,它通过K8S的内置的NFS驱动挂载远端的NFS服务器到本地目录;然后将自身作为storage provider,关联storage class。当用户创建对应的PVC来申请PV时,该provider就将PVC的要求与自身的属性比较,一旦满足就在本地挂载好的NFS目录中创建PV所属的子目录,为Pod提供动态的存储服务。

### 2、nfs

- Unlike nfs-client, this driver does not use the k8s NFS driver to mount the remote NFS locally and then carve it up. Instead it maps a local file into the container and runs ganesha.nfsd inside the container to serve NFS itself; each time a PV is created, it creates the corresponding folder directly under the local NFS root and exports that subdirectory.

- This document uses the nfs-client-provisioner application to turn an NFS server into a dynamically provisioned storage backend for Kubernetes. The prerequisites are an NFS server that is already installed and reachable over the network from the Kubernetes worker nodes. The nfs-client driver is deployed into the K8S cluster as a deployment and serves storage from there.

`nfs-client-provisioner` is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; an existing NFS server must supply the storage.

## 三、Deploying the Service

### 1、Configure RBAC

Most Kubernetes clusters nowadays use RBAC for access control, so create a ServiceAccount with the required permissions and bind it to the "NFS Provisioner" created later.

```bash
# Clean up any previous rbac objects
kubectl delete -f nfs-rbac.yaml -n kube-system

# Write the yaml
cat >nfs-rbac.yaml<<-EOF
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF

# Apply the rbac objects
kubectl apply -f nfs-rbac.yaml -n kube-system
```

### 2、Deploy nfs-client-provisioner

First clone the repository to get the yaml files
```
git clone https://github.com/kubernetes-incubator/external-storage.git
cp -R external-storage/nfs-client/deploy/ /root/
cd /root/deploy
```
### 3、Deploy the NFS Provisioner

Edit deployment.yaml. The parameters to change are the NFS server IP (10.198.1.155) and the exported path (/data/nfs/); both must be set to your actual NFS server and share. The nfs-client-provisioner image is also switched to a domestic mirror.

Set up the NFS Provisioner deployment file; here it is deployed into the "kube-system" Namespace.

```bash
# Clean up any previous NFS Provisioner resources
kubectl delete -f nfs-provisioner-deploy.yaml -n kube-system

export NFS_ADDRESS='10.198.1.155'
export NFS_DIR='/data/nfs'

# Write deployment.yaml
cat >nfs-provisioner-deploy.yaml<<-EOF
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate  #---upgrade strategy: delete then recreate (the default is rolling update)
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #---quay.io is blocked in mainland China, so pull from a domestic mirror instead
          #image: quay-mirror.qiniu.com/external_storage/nfs-client-provisioner:latest
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-client  #---name of this provisioner; the storageclass created later must use the same name
            - name: NFS_SERVER
              value: ${NFS_ADDRESS}  #---NFS server address; must match the volumes section below
            - name: NFS_PATH
              value: ${NFS_DIR}  #---NFS export directory; must match the volumes section below
      volumes:
        - name: nfs-client-root
          nfs:
            server: ${NFS_ADDRESS}  #---NFS server address
            path: ${NFS_DIR} #---NFS export directory
EOF

# Apply deployment.yaml
kubectl apply -f nfs-provisioner-deploy.yaml -n kube-system

# Check the created pod
kubectl get pod -o wide -n kube-system|grep nfs-client

# Tail the pod log
kubectl logs -f `kubectl get pod -o wide -n kube-system|grep nfs-client|awk '{print $1}'` -n kube-system
```

### 4、Create the StorageClass

When defining the storage class, note that its provisioner attribute must equal the value of the `PROVISIONER_NAME` environment variable passed to the driver; otherwise the driver does not know how to bind to the storage class.
You can use this file unchanged, or rename the provisioner, as long as it matches the `PROVISIONER_NAME` in the deployment above.
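
As a quick sanity check (a sketch, assuming the deployment below is already running in kube-system and a reasonably recent kubectl), the two values can be compared directly:

```bash
# Both commands should print the same provisioner name, e.g. "nfs-client"
kubectl -n kube-system exec deploy/nfs-client-provisioner -- env | grep PROVISIONER_NAME
kubectl get storageclass nfs-storage -o jsonpath='{.provisioner}'
```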

```bash
# Clean up the storageclass
kubectl delete -f nfs-storage.yaml

# Write the yaml
cat >nfs-storage.yaml<<-EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  #---make this the default storageclass
provisioner: nfs-client  #---name of the dynamic provisioner; must equal the "PROVISIONER_NAME" set above
parameters:
  archiveOnDelete: "true"  #---"false" discards the data when the PVC is deleted, "true" keeps (archives) it
mountOptions: 
  - hard        #use a hard NFS mount
  - nfsvers=4   #NFS protocol version; set it according to the NFS Server version
EOF

#Apply class.yaml
kubectl apply -f nfs-storage.yaml

#Check the created storageclass (nfs-storage is now the default storageclass)
$ kubectl get sc
NAME                    PROVISIONER      AGE
nfs-storage (default)   nfs-client       3m38s
```
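
If the StorageClass already exists and you only want to toggle the default flag, a `kubectl patch` works too (a sketch, equivalent to the annotation in the yaml above):

```bash
# Mark nfs-storage as the default StorageClass without re-applying the manifest
kubectl patch storageclass nfs-storage \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```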

## 四、Create a PVC

### 01、Create a new namespace, then create the PVC

```bash
# Delete the namespace
kubectl delete ns kube-public

# Create the namespace
kubectl create ns kube-public

# Clean up the pvc
kubectl delete -f test-claim.yaml -n kube-public

# Write the yaml
cat >test-claim.yaml<<\EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-storage #---must match the name of the storageclass created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
EOF

#Create the PVC
kubectl apply -f test-claim.yaml -n kube-public

#Check the created PV and PVC
$ kubectl get pvc -n kube-public
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-593f241f-a75f-459a-af18-a672e5090921   100Gi      RWX            nfs-storage    3s

kubectl get pv

#Then, inside the NFS export directory, a directory for this volume has been created; the volume name combines the namespace, the PVC name and a UUID:

#Note: a PVC stuck in Pending is usually caused by a broken nfs-client-provisioner pod, e.g. image pull problems after it is deleted and recreated
```
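
When a PVC does get stuck in Pending, a minimal debugging sketch (resource names are the ones from the example above):

```bash
# The Events section usually names the provisioning error
kubectl describe pvc test-claim -n kube-public

# The provisioner's own log shows why a volume could not be created
kubectl -n kube-system logs deploy/nfs-client-provisioner --tail=50
```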

## 五、Create a Test Pod

```bash
# Clean up
kubectl delete -f test-pod.yaml -n kube-public

# Write the yaml
cat > test-pod.yaml <<\EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:latest
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

#Create the pod
kubectl apply -f test-pod.yaml -n kube-public

#Check the created pod
kubectl get pod -o wide -n kube-public
```

### 01、Verify on the NFS server that the file was created

Go to the NFS export directory on the NFS server and check whether the file created inside the Pod exists:

```bash
$ cd /data/nfs/
$ ls
archived-kube-public-test-claim-pvc-2dd4740d-f2d1-4e88-a0fc-383c00e37255  kube-public-test-claim-pvc-ad304939-e75d-414f-81b5-7586ef17db6c
archived-kube-public-test-claim-pvc-593f241f-a75f-459a-af18-a672e5090921  kube-system-test1-claim-pvc-f84dc09c-b41e-4e67-a239-b14f8d342efc
archived-kube-public-test-claim-pvc-b08b209d-c448-4ce4-ab5c-1bf37cc568e6  pv001
default-test-claim-pvc-4f18ed06-27cd-465b-ac87-b2e0e9565428               pv002

# The SUCCESS file is there. Directories created by the NFS Provisioner are named "namespace-pvcName-pvName", and the PV name is a random string. As long as the PVC is not deleted, the binding to this storage survives; deleting the PVC deletes the bound folder, and recreating a PVC with the same name yields a different folder (the folder name depends on the randomly generated PV name), so delete PVCs with care.
```


References:

https://blog.csdn.net/qq_25611295/article/details/86065053  k8s PV and PVC persistent storage (static and dynamic)

https://blog.csdn.net/networken/article/details/86697018 Deploying NFS persistent storage on Kubernetes

https://www.jianshu.com/p/5e565a8049fc  Deploying NFS persistent storage on Kubernetes (static and dynamic)


================================================
FILE: components/external-storage/4、Kubernetes之MySQL持久存储和故障转移.md
================================================
Table of Contents
=================

   * [一、MySQL Persistence Walkthrough](#一mysql-persistence-walkthrough)
      * [1、Providing persistent storage for the database takes these steps:](#1providing-persistent-storage-for-the-database-takes-these-steps)
   * [二、Static PV and PVC](#二static-pv-and-pvc)
      * [1、Create the PV](#1create-the-pv)
      * [2、Create the PVC](#2create-the-pvc)
   * [三、Deploy MySQL](#三deploy-mysql)
      * [1、The MySQL manifest mysql.yaml:](#1the-mysql-manifest-mysqlyaml)
      * [2、Write Data to MySQL](#2write-data-to-mysql)
      * [3、Failover](#3failover)
   * [四、Using a Fresh Namespace](#四using-a-fresh-namespace)
   
# 一、MySQL Persistence Walkthrough

## 1、Providing persistent storage for the database takes these steps:

    1、Create the PV and PVC

    2、Deploy MySQL

    3、Insert data into MySQL

    4、Simulate a node failure and let Kubernetes migrate MySQL to another node

    5、Verify data consistency
   

# 二、Static PV and PVC

```bash
A PV is like a warehouse: first you acquire the warehouse, i.e. define a PV backed by some storage service such as CEPH, NFS or a local hostPath.

A PVC is like the tenant: a PV and a PVC bind one-to-one. The PVC is then mounted into Pods, and one PVC can be mounted by multiple Pods.
```

## 1、Create the PV

```bash
# Clean up the pv
kubectl delete -f mysql-static-pv.yaml

# Write the pv yaml
cat > mysql-static-pv.yaml <<\EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-static-pv
spec:
  capacity:
    storage: 80Gi

  accessModes:
    - ReadWriteOnce
  #ReadWriteOnce - the volume can be mounted read-write by a single node
  #ReadOnlyMany  - the volume can be mounted read-only by many nodes
  #ReadWriteMany - the volume can be mounted read-write by many nodes

  persistentVolumeReclaimPolicy: Retain
  #Retain  - keep the Volume after release (manual cleanup required)
  #Recycle - wipe the data, i.e. rm -rf /thevolume/* (only NFS and HostPath support this)
  #Delete  - delete the backing storage, e.g. an AWS EBS volume (only AWS EBS, GCE PD, Azure Disk and Cinder support this)

  nfs:
    path: /data/nfs/mysql/
    server: 10.198.1.155
  mountOptions:
    - vers=4
    - minorversion=0
    - noresvport
EOF

# Apply the pv to the cluster
kubectl apply -f mysql-static-pv.yaml

# Check the pv
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                           STORAGECLASS          REASON   AGE
mysql-static-pv                            80Gi       RWO            Retain           Available                                                                                  4m20s
```

## 2、Create the PVC

```bash
# Clean up the pvc
kubectl delete -f mysql-pvc.yaml

# Write the pvc yaml
cat > mysql-pvc.yaml <<\EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-static-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi
EOF

# Create the pvc
kubectl apply -f mysql-pvc.yaml

# Check the pvc
$ kubectl get pvc
NAME               STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-static-pvc   Bound         pvc-c55f8695-2a0b-4127-a60b-5c1aba8b9104   80Gi       RWO            nfs-storage    81s
```
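
Note that in the output above the PVC bound to a dynamically provisioned volume (VOLUME `pvc-c55f…`, STORAGECLASS `nfs-storage`) instead of the static `mysql-static-pv`. This happens whenever a default StorageClass is installed: a PVC with no `storageClassName` is picked up by the default provisioner. If you really want the static PV, a sketch of the change is to pin the class explicitly:

```bash
# Sketch: an empty storageClassName disables dynamic provisioning, so the claim
# can only bind to a matching pre-created PV such as mysql-static-pv
cat > mysql-pvc.yaml <<\EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-static-pvc
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi
EOF
```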

# 三、Deploy MySQL

## 1、The MySQL manifest mysql.yaml:

```bash
kubectl delete -f mysql.yaml

cat >mysql.yaml<<\EOF
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-static-pvc
EOF

kubectl apply -f mysql.yaml

# The PV bound by PVC mysql-static-pvc is mounted at MySQL's data directory /var/lib/mysql.
```
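
One caveat: `extensions/v1beta1` Deployments were removed in Kubernetes 1.16, so on newer clusters the manifest above fails to apply. A sketch of the fix (the rest of the manifest already satisfies the `apps/v1` schema, since it defines `spec.selector`):

```bash
# On Kubernetes >= 1.16, switch the Deployment to the stable API group
sed -i 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' mysql.yaml
kubectl apply -f mysql.yaml
```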

## 2、Write Data to MySQL

MySQL is scheduled onto k8s-node02; below we access the Service mysql from a client:

```bash
$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
mysql>

We create a table myid in the mysql database and insert a few rows.

mysql> use mysql
Database changed

mysql> drop table myid;
Query OK, 0 rows affected (0.12 sec)

mysql> create table myid(id int(4));
Query OK, 0 rows affected (0.23 sec)

mysql> insert myid values(888);
Query OK, 1 row affected (0.03 sec)

mysql> select * from myid;
+------+
| id   |
+------+
|  888 |
+------+
1 row in set (0.00 sec)
```

## 3、Failover

Now power off node02 to simulate a node outage.


```bash
1、After a while, Kubernetes migrates MySQL to k8s-node01

$ kubectl get pod -o wide
NAME                     READY   STATUS        RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
mysql-7686899cf9-8z6tc   1/1     Running       0          21s   10.244.1.19   node01   <none>           <none>
mysql-7686899cf9-d4m42   1/1     Terminating   0          23m   10.244.2.17   node02   <none>           <none>

2、Verify data consistency

$ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from myid;
+------+
| id   |
+------+
|  888 |
+------+
1 row in set (0.00 sec)

3、MySQL is back and the data is intact. We can also inspect the database files generated on the storage node.

[root@nfs_server mysql-pv]# ll
-rw-rw---- 1 systemd-bus-proxy ssh_keys       56 12月 14 09:53 auto.cnf
-rw-rw---- 1 systemd-bus-proxy ssh_keys 12582912 12月 14 10:15 ibdata1
-rw-rw---- 1 systemd-bus-proxy ssh_keys 50331648 12月 14 10:15 ib_logfile0
-rw-rw---- 1 systemd-bus-proxy ssh_keys 50331648 12月 14 09:53 ib_logfile1
drwx------ 2 systemd-bus-proxy ssh_keys     4096 12月 14 10:05 mysql
drwx------ 2 systemd-bus-proxy ssh_keys     4096 12月 14 09:53 performance_schema
```

# 四、Using a Fresh Namespace

PVs are cluster-wide; a PVC lives in a namespace. This is visible in the API itself:
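
A quick check (a sketch; the exact column layout varies by kubectl version):

```bash
# PVs are cluster-scoped, PVCs are namespaced: see the NAMESPACED column
kubectl api-resources | grep -E '^persistentvolume'
# persistentvolumeclaims   pvc   v1   true    PersistentVolumeClaim
# persistentvolumes        pv    v1   false   PersistentVolume
```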

```bash
kubectl delete ns test-ns

kubectl create ns test-ns

kubectl apply -f mysql-pvc.yaml -n test-ns

kubectl apply -f mysql.yaml -n test-ns

kubectl get pods -n test-ns -o wide

kubectl -n test-ns logs -f $(kubectl get pods -n test-ns|grep mysql|awk '{print $1}')

kubectl run -n test-ns -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
```

References:

https://blog.51cto.com/wzlinux/2330295   Kubernetes MySQL persistent storage and failover (part 11)

https://qingmu.io/2019/08/11/Run-mysql-on-kubernetes/ On stateful services, PVs and PVCs, starting from a MySQL deployment


================================================
FILE: components/external-storage/5、Kubernetes之Nginx动静态PV持久存储.md
================================================
Table of Contents
=================

   * [一、Nginx with a Static NFS PV](#一nginx-with-a-static-nfs-pv)
      * [1、Static nfs-static-nginx-rc.yaml](#1static-nfs-static-nginx-rcyaml)
      * [2、Static nfs-static-nginx-deployment.yaml](#2static-nfs-static-nginx-deploymentyaml)
      * [3、Mounting multiple directories in nginx](#3mounting-multiple-directories-in-nginx)
      * [4、Parameterized namespace](#4parameterized-namespace)
   * [二、Nginx with a Dynamic NFS PV](#二nginx-with-a-dynamic-nfs-pv)
      * [1、Dynamic nfs-dynamic-nginx.yaml](#1dynamic-nfs-dynamic-nginxyaml)
      
# 一、Nginx with a Static NFS PV

## 1、Static nfs-static-nginx-rc.yaml

```bash
##Clean up
kubectl delete -f nfs-static-nginx-rc.yaml -n test

cat >nfs-static-nginx-rc.yaml<<\EOF
##Create the namespace
---
apiVersion: v1
kind: Namespace
metadata:
   name: test
   labels:
     name: test
##Create nfs-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    pv: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs  # note: this uses storageClassName "nfs"; if the cluster's default storageClassName is unchanged, it can be omitted
  nfs:
    path: /data/nfs/nginx/
    server: 10.198.1.155
##Create nfs-pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc
  namespace: test
  labels:
    pvc: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv
##Deploy the nginx application
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-test
  namespace: test
  labels:
    name: nginx-test
spec:
  replicas: 2
  selector:
    name: nginx-test
  template:
    metadata:
      labels:
       name: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: docker.io/nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nfs-pvc
##Create the service
---
apiVersion: v1
kind: Service
metadata:
  namespace: test
  name: nginx-test
  labels:
    name: nginx-test
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
    nodePort: 30080
  selector:
    name: nginx-test
EOF

##Create the resources
kubectl apply -f nfs-static-nginx-rc.yaml -n test

##Check the pv
kubectl get pv -n test --show-labels

##Check the pvc
kubectl get pvc -n test --show-labels

##Check the pods
$ kubectl get pods -n test
NAME               READY   STATUS    RESTARTS   AGE
nginx-test-r4n2j   1/1     Running   0          54s
nginx-test-zstf5   1/1     Running   0          54s

#The nginx application has been deployed successfully.
#Its data directory is the NFS share: add an index.html to the shared directory, then hit the port exposed by the nginx service
#Switch to the nfs-server machine

echo "Test NFS Share discovery with nfs-static-nginx-rc" > /data/nfs/nginx/index.html

#Browse to http://master:30080 on a kubernetes master node to see this page
```

## 2、Static nfs-static-nginx-deployment.yaml

```bash
##Clean up
kubectl delete -f nfs-static-nginx-deployment.yaml -n test

cat >nfs-static-nginx-deployment.yaml<<\EOF
##Create the namespace
---
apiVersion: v1
kind: Namespace
metadata:
   name: test
   labels:
     name: test
##Create nfs-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    pv: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs  # note: this uses storageClassName "nfs"; if the cluster's default storageClassName is unchanged, it can be omitted
  nfs:
    path: /data/nfs/nginx/
    server: 10.198.1.155
##Create nfs-pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc
  namespace: test
  labels:
    pvc: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nfs-pv
##Deploy the nginx application
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test
  labels:
    name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx-test
  template:
    metadata:
      labels:
       name: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: docker.io/nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nfs-pvc
##Create the service
---
apiVersion: v1
kind: Service
metadata:
  namespace: test
  name: nginx-test
  labels:
    name: nginx-test
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
    nodePort: 30080
  selector:
    name: nginx-test
EOF

##Create the resources
kubectl apply -f nfs-static-nginx-deployment.yaml -n test

##Check the pv
kubectl get pv -n test --show-labels

##Check the pvc
kubectl get pvc -n test --show-labels

##Check the pods
$ kubectl get pods -n test
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-64d6f78cdf-8bw8t   1/1     Running   0          55s
nginx-deployment-64d6f78cdf-n5n4q   1/1     Running   0          55s

#The nginx application has been deployed successfully.
#Its data directory is the NFS share: add an index.html to the shared directory, then hit the port exposed by the nginx service
#Switch to the nfs-server machine

echo "Test NFS Share discovery with nfs-static-nginx-deployment" > /data/nfs/nginx/index.html

#Browse to http://master:30080 on a kubernetes master node to see this page
```

## 3、Mounting multiple directories in nginx

```
1、PV and PVC bind one-to-one: once a PV is claimed by some PVC it shows as Bound, and no other PVC can use it.

2、Once a PVC is bound to a PV it acts as a storage volume and can be used by multiple Pods (whether a PVC may be accessed by multiple Pods depends on the accessModes it declares).

3、A PVC that finds no suitable PV stays in the Pending state.

4、PVs are cluster-level objects and cannot be defined inside a namespace.

5、PVCs are namespace-level objects.
```

```bash
##Clean up
kubectl delete -f nfs-static-nginx-dp-many.yaml -n test

cat >nfs-static-nginx-dp-many.yaml<<\EOF
##Create the namespace
---
apiVersion: v1
kind: Namespace
metadata:
   name: test
   labels:
     name: test
##Create nginx-data-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-data-pv
  labels:
    pv: nginx-data-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs  # note: this uses storageClassName "nfs"; if the cluster's default storageClassName is unchanged, it can be omitted
  nfs:
    path: /data/nfs/nginx/
    server: 10.198.1.155
##Create nginx-etc-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-etc-pv
  labels:
    pv: nginx-etc-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs  # note: this uses storageClassName "nfs"; if the cluster's default storageClassName is unchanged, it can be omitted
  nfs:
    path: /data/nfs/nginx/
    server: 10.198.1.155
##Create the pvc nfs-nginx-data for content
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-nginx-data
  namespace: test
  labels:
    pvc: nfs-nginx-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nginx-data-pv
##Create the pvc nfs-nginx-etc for the configuration files
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-nginx-etc
  namespace: test
  labels:
    pvc: nfs-nginx-etc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nginx-etc-pv
##Deploy the nginx application
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test
  labels:
    name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx-test
  template:
    metadata:
      labels:
       name: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: docker.io/nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
        # - mountPath: /etc/nginx   #--note: to mount like this, first copy a complete nginx configuration into /data/nfs/nginx/ beforehand
        #   name: nginx-etc
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nfs-nginx-data
      # - name: nginx-etc
      #   persistentVolumeClaim:
      #     claimName: nfs-nginx-etc
##Create the service
---
apiVersion: v1
kind: Service
metadata:
  namespace: test
  name: nginx-test
  labels:
    name: nginx-test
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
    nodePort: 30080
  selector:
    name: nginx-test
EOF

##Create the resources
kubectl apply -f nfs-static-nginx-dp-many.yaml -n test

##Check the pv
kubectl get pv -n test --show-labels

##Check the pvc
kubectl get pvc -n test --show-labels

##Check the pods
$ kubectl get pods -n test
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-64d6f78cdf-8bw8t   1/1     Running   0          55s
nginx-deployment-64d6f78cdf-n5n4q   1/1     Running   0          55s

##Enter a container
kubectl exec -it nginx-deployment-f687cdf47-xncj8 -n test /bin/bash

#The nginx application has been deployed successfully.
#Its data directory is the NFS share: add an index.html to the shared directory, then hit the port exposed by the nginx service
#Switch to the nfs-server machine

echo "Test NFS Share discovery with nfs-static-nginx-dp-many" > /data/nfs/nginx/index.html

#Browse to http://master:30080 on a kubernetes master node to see this page
```

## 4、Parameterized namespace

```bash
##Clean up
export NAMESPACE="mos-namespace"

kubectl delete -f nfs-static-nginx-dp-many.yaml -n ${NAMESPACE}

cat >nfs-static-nginx-dp-many.yaml<<-EOF
##Create the namespace
---
apiVersion: v1
kind: Namespace
metadata:
   name: ${NAMESPACE}
   labels:
     name: ${NAMESPACE}
##Create nginx-data-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-data-pv
  labels:
    pv: nginx-data-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs  # note: this uses storageClassName "nfs"; if the cluster's default storageClassName is unchanged, it can be omitted
  nfs:
    path: /data/nfs/nginx/
    server: 10.198.1.155
##Create nginx-log-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-log-pv
  labels:
    pv: nginx-log-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs  # note: this uses storageClassName "nfs"; if the cluster's default storageClassName is unchanged, it can be omitted
  nfs:
    path: /data/nfs/nginx/
    server: 10.198.1.155
##Create the pvc nfs-nginx-data for content
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-nginx-data
  labels:
    pvc: nfs-nginx-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nginx-data-pv
##Create the pvc nfs-nginx-log for the log files
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-nginx-log
  labels:
    pvc: nfs-nginx-log
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: nginx-log-pv
##Deploy the nginx application
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx-test
  template:
    metadata:
      labels:
       name: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: docker.io/nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
        - mountPath: /var/log/nginx
          name: nginx-log
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nfs-nginx-data
      - name: nginx-log
        persistentVolumeClaim:
          claimName: nfs-nginx-log
##Create the service
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  labels:
    name: nginx-test
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
    nodePort: 30180
  selector:
    name: nginx-test
EOF

##Create the resources
kubectl apply -f nfs-static-nginx-dp-many.yaml -n ${NAMESPACE}
```


# 二、Nginx with a Dynamic NFS PV

`https://github.com/Lancger/opsfull/blob/master/components/external-storage/3%E3%80%81%E5%8A%A8%E6%80%81%E7%94%B3%E8%AF%B7PV%E5%8D%B7.md`

## 1、Dynamic nfs-dynamic-nginx.yaml

The target namespace is passed in as a parameter

```bash
##Clean up the namespace
kubectl delete ns k8s-public

##Create the namespace
kubectl create ns k8s-public

##Clean up
kubectl delete -f nfs-dynamic-nginx-deployment.yaml -n k8s-public

cat >nfs-dynamic-nginx-deployment.yaml<<\EOF
##Dynamically request nfs-dynamic-pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-dynamic-claim
spec:
  storageClassName: nfs-storage #--must match the name of the storageclass created earlier
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 90Gi
##Deploy the nginx application
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx-test
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-test
  template:
    metadata:
      labels:
       name: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: docker.io/nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-data
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-data
        persistentVolumeClaim:
          claimName: nfs-dynamic-claim
##Create the service
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  labels:
    name: nginx-test
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
    nodePort: 30090
  selector:
    name: nginx-test
EOF

##Create the resources
kubectl apply -f nfs-dynamic-nginx-deployment.yaml -n k8s-public

##Check the pv
kubectl get pv -n k8s-public --show-labels

##Check the pvc
kubectl get pvc -n k8s-public --show-labels

##Check the pods
$ kubectl get pods -n k8s-public
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-544f569478-5t8wm   1/1     Running   0          40s
nginx-deployment-544f569478-8gks5   1/1     Running   0          40s
nginx-deployment-544f569478-pw96x   1/1     Running   0          40s

#The nginx application has been deployed successfully.
#Its data directory is the NFS share: add an index.html to the shared directory, then hit the port exposed by the nginx service
#Switch to the nfs-server machine

#Note: the dynamically provisioned volume lives in this directory; directories are named "namespace-pvcName-pvName"
/data/nfs/kube-public-test-claim-pvc-ad304939-e75d-414f-81b5-7586ef17db6c

echo "Test NFS Share discovery with nfs-dynamic-nginx-deployment" > /data/nfs/kube-public-test-claim-pvc-ad304939-e75d-414f-81b5-7586ef17db6c/index.html

#Browse to http://master:30090 on a kubernetes master node to see this page
```

![](https://github.com/Lancger/opsfull/blob/master/images/dynamic-pv.png)


References:

https://kubernetes.io/zh/docs/tasks/run-application/run-stateless-application-deployment/  

https://blog.51cto.com/ylw6006/2071845 Running nginx in a kubernetes cluster


================================================
FILE: components/external-storage/README.md
================================================
PersistentVolume (PV): an abstraction over creating and consuming storage, letting storage be managed as a cluster resource

PVs are either static or dynamic; dynamic provisioning creates PVs automatically

PersistentVolumeClaim (PVC): lets users consume storage without caring about the underlying Volume implementation

The relationship between containers, PV and PVC is shown below:

  ![PV](https://github.com/Lancger/opsfull/blob/master/images/pv01.png)

In short, the PV is the provider and the PVC is the consumer; consuming is the act of binding

# Problem 1

The PV mounts fine, but the PVC stays in Pending

```bash 
#Create the pvc in the test namespace
$ kubectl get pvc -n test
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Pending   <-- stuck in Pending                              nfs-storage    10s

#Check the events
$ kubectl describe pvc nfs-pvc -n test
failed to provision volume with StorageClass "nfs-storage": claim Selector is not supported
#The events show that the problem is the label selector on the claim
```
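
The fix follows from the error message: dynamic provisioners cannot honor a label `selector` on the claim, so remove the `selector` block from the PVC and let the StorageClass do the matching. A minimal sketch of the corrected claim:

```bash
cat >nfs-pvc.yaml<<\EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs-storage   # selection happens via the class, not a selector
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
kubectl apply -f nfs-pvc.yaml -n test
```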

References:

https://blog.csdn.net/qq_25611295/article/details/86065053  k8s PV and PVC persistent storage (static and dynamic)

https://www.jianshu.com/p/5e565a8049fc  Deploying NFS persistent storage on Kubernetes (static and dynamic)


================================================
FILE: components/heapster/README.md
================================================
# 一、Symptom
heapster: has already been abandoned by k8s
```bash
What is this error in the heapster logs?
E0918 16:56:05.022867       1 manager.go:101] Error in scraping containers from kubelet_summary:10.10.188.242:10255: Get http://10.10.188.242:10255/stats/summary/: dial tcp 10.10.188.242:10255: getsockopt: connection refused
```
# Troubleshooting


```
1、Check kubelet first: 10255 is the port it exposes

service kubelet status  #status looks normal

#Run on 10.10.188.242
[root@localhost ~]# netstat -lnpt | grep 10255
tcp        0      0 10.10.188.240:10255     0.0.0.0:*               LISTEN      9243/kubelet

Checked the logs under /var/log/pods/kube-system_heapster-5f848f54bc-rtbv4_abf53b7c-491f-472a-9e8b-815066a6ae3d/heapster: every physical node gets "connection refused" on 10255


2、Check the data in a browser

10.10.188.242 should be one of your node IPs; normally a browser request to http://IP:10255/stats/summary returns data. Check that, and if there is none, the kubelet configuration is broken

```
![heapster data fetch error 1](https://github.com/Lancger/opsfull/blob/master/images/heapster-01.png)

![heapster data fetch error 2](https://github.com/Lancger/opsfull/blob/master/images/heapster-02.png)


================================================
FILE: components/ingress/0.通俗理解Kubernetes中Service、Ingress与Ingress Controller的作用与关系.md
================================================
# 一、In Plain Terms:

- 1、A Service is an abstraction over real backend workloads; one Service can represent several identical backends

- 2、An Ingress is a set of reverse-proxy rules deciding which Service an HTTP/S request is forwarded to, e.g. routing requests to different Services based on Host and URL path

- 3、An Ingress Controller is the reverse-proxy program itself. It interprets the Ingress rules; whenever Ingresses are added, changed or removed, every Ingress Controller promptly refreshes its forwarding rules, and when it receives a request it forwards it to the matching Service

# 二、Data Flow

Kubernetes does not ship an Ingress Controller; Ingress is only a standard with multiple implementations that you install yourself, most commonly the Nginx Ingress Controller and the Traefik Ingress Controller. So Ingress is an abstraction of forwarding rules, and an Ingress Controller implementation forwards requests to the right Service according to those rules. The diagram below illustrates this:

  ![Ingress Controller data flow](https://github.com/Lancger/opsfull/blob/master/images/Ingress%20Controller01.png)

As the diagram shows, the Ingress Controller receives a request, matches it against the Ingress forwarding rules, and on a match forwards it to the backend Service; the Service may represent several Pods, one of which is selected to actually handle the request.

# 三、Exposing the Ingress Controller

Since the Ingress Controller must accept outside traffic yet runs inside the cluster, how is the Ingress Controller itself reached from outside? There are a few options:

- 1、Deploy the Ingress Controller as a Deployment and give it a Service of type LoadBalancer. This auto-allocates an IP, usually highly available, through which it can be reached (only if the cluster supports LoadBalancer, typically on cloud providers; self-built clusters usually do not)

- 2、Pick one or more nodes as edge nodes and label them. Deploy the Ingress Controller as a DaemonSet bound to those nodes via nodeSelector, so each edge node runs one instance, and expose its ports directly on the edge hosts with hostPort. External traffic then reaches the Ingress Controller through the edge nodes' ports

- 3、Deploy the Ingress Controller as a Deployment with a Service of type NodePort. kubectl get svc shows the assigned port, which is reachable on every cluster node. But with many nodes and a port that is not 80/443 this is awkward, so in practice you put a load balancer such as Nginx in front that forwards to that port on the cluster nodes; hitting Nginx is then equivalent to hitting the Ingress Controller (see the sketch below)

The first two approaches are generally the recommended ones.
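
For approach 3, the front load balancer can be as small as one nginx upstream block. A sketch, assuming node IPs 10.10.0.24/10.10.0.32 and NodePort 30725 (the values used in the next document):

```bash
cat >/etc/nginx/conf.d/ingress-lb.conf<<\EOF
upstream ingress_nodes {
    server 10.10.0.24:30725;
    server 10.10.0.32:30725;
}
server {
    listen 80;
    location / {
        proxy_pass http://ingress_nodes;
        proxy_set_header Host $host;
    }
}
EOF
nginx -s reload
```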

References:

https://cloud.tencent.com/developer/article/1326535  A plain-language take on Service, Ingress and Ingress Controller in Kubernetes


================================================
FILE: components/ingress/1.kubernetes部署Ingress-nginx单点和高可用.md
================================================
# 一、Introduction to Ingress-nginx

Pod IPs and service IPs are only reachable inside the cluster. To reach Kubernetes services from outside the cluster you can use nodeport, proxy, loadbalancer or ingress. Since service IPs are not reachable externally, ingress adds one more proxy layer: ingress proxies the service, and the service proxies the pods.
The basic Ingress diagram:

  ![Ingress-nginx](https://github.com/Lancger/opsfull/blob/master/images/Ingress-nginx.png)
 
# 二、Deploy nginx-ingress-controller

```bash
# GitHub
https://github.com/kubernetes/ingress-nginx
https://kubernetes.github.io/ingress-nginx/

# 1、Download the nginx-ingress-controller manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

# 2、service-nodeport.yaml exposes ingress externally via NodePort; note the NodePort is random by default, edit the file to set a fixed port
Using NodePort:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml

# 3、Check the ingress-nginx components
root># kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-568867bf56-mbvm2   1/1     Running   0          4m46s

Check the port exposed by the ingress service:
root># kubectl get svc -n ingress-nginx 
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.97.243.123   <none>        80:30725/TCP,443:32314/TCP   5m46s
```

# 三、Create the ingress-nginx backend service

1. Create a Service and its backend Deployment (using myapp as the example)

```
cat > deploy-demon.yaml<<\EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: httpd
          containerPort: 80
EOF
          
root># kubectl apply -f deploy-demon.yaml

root># kubectl get pods

root># kubectl get svc myapp
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
myapp   ClusterIP   10.106.30.175   <none>        80/TCP    59s

# Test the Service internally via the ClusterIP
root># curl 10.106.30.175
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
```

# 四、Create the ingress rule for myapp

```
cat > ingress-myapp.yaml<<\EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.k8s-devops.com
    http:
      paths:
      - path:
        backend:
          serviceName: myapp
          servicePort: 80
EOF
          
root># kubectl apply -f ingress-myapp.yaml  

root># kubectl get ingress
NAME            HOSTS                ADDRESS         PORTS   AGE
ingress-myapp   www.k8s-devops.com   10.97.243.123   80      5s

# Test the domain internally through the Ingress
root># curl -x 10.97.243.123:80 http://www.k8s-devops.com
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
```

# 五、Inspect the configuration generated inside the ingress controller:

```
root># kubectl exec -n ingress-nginx -it nginx-ingress-controller-568867bf56-mbvm2 -- /bin/sh

$ cat nginx.conf

        ## start server www.k8s-devops.com
        server {
                server_name www.k8s-devops.com ;

                listen 80  ;
                listen 443  ssl http2 ;

                set $proxy_upstream_name "-";

                ssl_certificate_by_lua_block {
                        certificate.call()
                }

                location / {

                        set $namespace      "default";
                        set $ingress_name   "ingress-myapp";
                        set $service_name   "myapp";
                        set $service_port   "80";
                        set $location_path  "/";
                        
```

# 六、Test the Domain
```
1、The nginx-ingress-controller here is a multi-replica Deployment
root># kubectl get deployment -A
ingress-nginx          nginx-ingress-controller    6/6     6            6           65m (6 replicas here)

root># kubectl get svc -n ingress-nginx 
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.97.243.123   <none>        80:30725/TCP,443:32314/TCP   69m

root># kubectl describe svc ingress-nginx -n ingress-nginx
Name:                     ingress-nginx
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/par...
Selector:                 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type:                     NodePort
IP:                       10.97.243.123
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  30725/TCP
Endpoints:                10.244.154.195:80,10.244.154.196:80,10.244.44.197:80 + 3 more...   forwards to 6 pods
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  32314/TCP
Endpoints:                10.244.154.195:443,10.244.154.196:443,10.244.44.197:443 + 3 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

root># kubectl get endpoints -n ingress-nginx
NAME            ENDPOINTS                                                          AGE
ingress-nginx   10.244.154.195:80,10.244.154.196:80,10.244.44.197:80 + 9 more...   68m

As noted above: with a Deployment plus a NodePort Service, kubectl get svc shows the assigned port, reachable on every cluster node. But because the port is not 80/443, in practice you put a load balancer such as Nginx in front that forwards to that port on the cluster nodes, so hitting Nginx is equivalent to hitting the Ingress Controller.

# Test via NodePort (node IP + port)
curl 10.10.0.24:30725
curl 10.10.0.32:30725
curl 10.10.0.23:30725
curl 10.10.0.25:30725
curl 10.10.0.29:30725
curl 10.10.0.12:30725

2、Test via the Ingress IP with a bound domain

root># kubectl get ingress -A
NAMESPACE   NAME            HOSTS                ADDRESS         PORTS   AGE
default     ingress-myapp   www.k8s-devops.com   10.97.243.123   80      45m

root># curl -x 10.97.243.123:80 http://www.k8s-devops.com
```

# 七、Ingress High Availability

For Ingress high availability we could simply raise the Deployment replica count, but since ingress carries all traffic entering the cluster, in production it is better to deploy ingress as a DaemonSet on dedicated nodes, tainted so that business pods cannot be scheduled there and will not compete with Ingress for resources. Then add the ingress nodes as backends of an SLB that forwards the traffic.

1、Switch to a DaemonSet deployment

```
wget -N https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml -O ingress-nginx-mandatory.yaml

1、Change the kind
sed  -i 's/kind: Deployment/kind: DaemonSet/g' ingress-nginx-mandatory.yaml
sed  -i 's/replicas:/#replicas:/g' ingress-nginx-mandatory.yaml

2、Change the image (optional)
#sed -i -e 's?quay.io?quay.azk8s.cn?g' -e 's?k8s.gcr.io?gcr.azk8s.cn/google-containers?g'  ingress-nginx-mandatory.yaml

3、Share the host network with the pod, expose the listening ports, and keep the container on the K8S DNS
# under spec.template.spec
# around serviceAccountName: nginx-ingress-serviceaccount, add hostNetwork: true and dnsPolicy: "ClusterFirstWithHostNet" at the same level

sed -i '/serviceAccountName: nginx-ingress-serviceaccount/a\      hostNetwork: true' ingress-nginx-mandatory.yaml
sed -i '/serviceAccountName: nginx-ingress-serviceaccount/a\      dnsPolicy: "ClusterFirstWithHostNet"' ingress-nginx-mandatory.yaml

4、Label and taint the nodes
# append the node selector after serviceAccountName
      nodeSelector:
        node-ingress: "true"
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
        
sed -i '/serviceAccountName: nginx-ingress-serviceaccount/a\      nodeSelector:\n        node-ingress: "true"' ingress-nginx-mandatory.yaml

The modified parameters:
  kind: Deployment #change to DaemonSet
  replicas: 1 #comment out; a DaemonSet does not take this parameter
  hostNetwork: true #use the host network and expose the service port (80) on the host; port 80 must not already be taken on the host
  dnsPolicy: ClusterFirstWithHostNet #with hostNetwork the container would use the host's network including DNS and fail to resolve internal services; this keeps it on the K8S DNS
  nodeSelector: node-ingress: "true" #node label selector
  tolerations: toleration for the tainted nodes

Note: because this ingress-controller runs in hostNetwork mode, there is no need to create an ingress Service to map ports onto the node hosts.
```
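
After the DaemonSet rolls out you can verify on any labeled node that the controller really is listening on the host network (a sketch; the Host header matches the ingress rule created earlier):

```bash
# Run on an ingress node: ports 80/443 should be bound by the ingress controller
ss -lntp | grep -E ':(80|443)\s'

# And the ingress rule should answer directly on the node
curl -H 'Host: www.k8s-devops.com' http://127.0.0.1/
```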

Here I deploy onto the 3 master nodes (in production do not use master nodes; deploy on dedicated nodes). Because we use a DaemonSet, the 3 nodes need the label and the pods need the matching toleration.

```
## Show the labels
root># kubectl get nodes --show-labels

## Label the nodes
[root@k8s-master-01]# kubectl label nodes k8s-master-01 node-ingress="true"
[root@k8s-master-01]# kubectl label nodes k8s-master-02 node-ingress="true"
[root@k8s-master-01]# kubectl label nodes k8s-master-03 node-ingress="true"

## Taint the nodes
### My master nodes are already tainted; if yours are not, run the 3 commands below. The taint must match the toleration in the pod spec of the yaml
[root@k8s-master-01]# kubectl taint nodes k8s-master-01 node-role.kubernetes.io/master=:NoSchedule
[root@k8s-master-01]# kubectl taint nodes k8s-master-02 node-role.kubernetes.io/master=:NoSchedule
[root@k8s-master-01]# kubectl taint nodes k8s-master-03 node-role.kubernetes.io/master=:NoSchedule
```

2、The final DaemonSet version of the Ingress manifest
```
cat >ingress-nginx-mandatory.yaml<<\EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
#kind: Deployment  #changed to DaemonSet
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  #replicas: 1  #commented out; a DaemonSet does not take this parameter
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true  #use the host network and expose the service port (80) on the host; port 80 must not already be taken on the host
      dnsPolicy: ClusterFirstWithHostNet  #with hostNetwork the container would use the host DNS and fail to resolve internal services; this keeps it on the K8S DNS
      nodeSelector:  #a yaml map may not contain duplicate keys, so both labels go into one selector
        kubernetes.io/os: linux
        node-ingress: "true"  #the node label added earlier
      tolerations:  #tolerate the taint on the selected nodes
      - key: "node-role.kubernetes.io/master"
        operator: "Equal"
        value: ""
        effect: "NoSchedule"
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---
EOF

kubectl apply -f ingress-nginx-mandatory.yaml
```

3、Create the resources
```
[root@k8s-master01 ingress-master]# kubectl apply -f ingress-nginx-mandatory.yaml
## Check where the pods landed
### the ingress-controller pods are now running on the master nodes we selected
[root@k8s-master01 ingress-master]# kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP              NODE           NOMINATED NODE   READINESS GATES
nginx-ingress-controller-298dq   1/1     Running   0          134m   172.16.11.122   k8s-master03   <none>           <none>
nginx-ingress-controller-sh9h2   1/1     Running   0          134m   172.16.11.121   k8s-master02   <none>           <none>
```

4、Test
```
#Configure name resolution outside the cluster; in this test environment we use the Windows hosts file (for the case where the nodes have public IPs)
192.168.92.56  www.k8s-devops.com
192.168.92.57  www.k8s-devops.com
192.168.92.58  www.k8s-devops.com

Access via the domain:
www.k8s-devops.com
```

References:

https://www.cnblogs.com/tchua/p/11174386.html   Highly available Ingress deployment for a Kubernetes cluster

https://github.com/kubernetes/ingress-nginx/blob/04e2ad8fcd51b0741263a37b8e7424ca3979137c/docs/deploy/index.md  official docs

https://blog.csdn.net/networken/article/details/85881558   Deploying Ingress-nginx on kubernetes

https://www.jianshu.com/p/a8e18cef13b2  HA ingress-nginx: DaemonSet hostNetwork keepalived


================================================
FILE: components/ingress/1.外部服务发现之Ingress介绍.md
================================================
# 一、Introduction to Ingress

A K8s cluster currently has only three ways to expose services externally: `LoadBlancer`, `NodePort` and `Ingress`. The first two are quick to learn and easy to use, so they are not covered here.

Ingress is essentially the entry point for reaching the cluster from outside: it forwards external requests to different Services inside the cluster, much like a reverse proxy / load balancer such as nginx or haproxy. You might think plain nginx already does this, but nginx alone has a big drawback: how do you update the Nginx config every time a new service appears? Surely not by hand-editing or rolling-updating the frontend Nginx Pods. Adding a service-discovery tool such as consul would work, and that pattern was common with standalone docker. Ingress is implemented exactly this way, except that service discovery is built in rather than delegated to a third party, plus domain-based rule definitions; refreshing the routing information is the job of the Ingress controller.

There are currently two main ingress controllers: one based on `nginx` and one based on `traefik`; traefik's ingress controller supports the http and https protocols.

# 二、How Ingress Works

## 1、Ingress consists of two parts: the ingress controller and the ingress rules

The Ingress controller can be thought of as a listener: it talks to kube-apiserver continuously, sensing changes to backend services and pods in real time. When it sees such changes it combines them with the Ingress configuration and updates the reverse proxy / load balancer, achieving service discovery. This is very similar to the service-discovery pattern of consul with consul-template.

## 2、The detailed workflow is as follows

The ingress controller interacts with the k8s API to dynamically sense changes to ingress rules in the cluster, reads them, and following the defined ingress rules forwards traffic to the corresponding service in the cluster. An ingress rule states which domain maps to which service. From the nginx configuration template in the ingress-controller, a matching nginx configuration snippet is generated and written dynamically into the ingress-controller pod, which runs an nginx service; the controller writes the generated configuration into nginx's config file and reloads it to take effect. That is how domain-based routing and dynamic updates are achieved.

# 三、Traefik

Traefik is an open-source reverse proxy and load balancer. Its biggest advantage is out-of-the-box integration with common microservice systems and fully automatic dynamic configuration. It currently supports backends such as Docker, Swarm, Mesos/Marathon, Mesos, Kubernetes, Consul, Etcd, Zookeeper, BoltDB and a REST API.

To use traefik we likewise deploy a traefik Pod. In the demo cluster only the master node has an external NIC, so master is our only edge node and traefik is deployed there.

  ![traefik architecture](https://github.com/Lancger/opsfull/blob/master/images/traefik-architecture.png)

- 1、First, for safety, we use RBAC here: ([rbac.yaml](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-rbac.yaml))

```
# vim rbac.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
```

- 2、Create it directly in the cluster:

```
$ kubectl create -f rbac.yaml

serviceaccount "traefik-ingress-controller" created
clusterrole.rbac.authorization.k8s.io "traefik-ingress-controller" created
clusterrolebinding.rbac.authorization.k8s.io "traefik-ingress-controller" created
```

- 3、Then manage the traefik Pod with a Deployment, using the official traefik image ([traefik.yaml](https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-deployment.yaml))

```
# vim traefik.yaml
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - operator: "Exists"
      nodeSelector:
        kubernetes.io/hostname: master  #master is unschedulable by default; the toleration above allows it. Use your own master's hostname here; check with kubectl get nodes --show-labels
      containers:
      - image: traefik:v1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          #hostPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      # service port of the traefik ingress-controller
      port: 80
      name: web
      # the NODE_PORT_RANGE set in the cluster hosts file is the allowed NodePort range
      # pick a free port in the default 20000~40000 range to expose the ingress-controller externally
      nodePort: 23456
    - protocol: TCP
      # port of the traefik admin web UI
      port: 8080
      name: admin
      nodePort: 23457
  type: NodePort
```

- 4、Create the resource objects above:

```
$ kubectl create -f traefik.yaml

deployment.extensions "traefik-ingress-controller" created
service "traefik-ingress-service" created
```

- 5、Note the following in the yaml above:

```
tolerations:
- operator: "Exists"
nodeSelector:
  kubernetes.io/hostname: master

Because of our special setup only the master node has external network access, so we pin traefik to master with the nodeSelector label. What are the tolerations for? This cluster was installed with kubeadm, where the master node is by default not schedulable for ordinary workloads; to schedule onto it, the tolerations shown here are required. If your cluster differs, simply drop this scheduling policy.

nodeSelector and tolerations are both Pod scheduling policies and will be covered in a later lesson.

```

- 6、traefik also ships a web UI, the service behind port 8080 above. To reach it we exposed the service as a NodePort:

```
$ kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb -o wide
NAME                                          READY     STATUS    RESTARTS   AGE       IP            NODE
traefik-ingress-controller-57c4f787d9-bfhnl   1/1       Running   0          8m        10.244.0.18   master
$ kubectl get svc -n kube-system
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
...
traefik-ingress-service   NodePort    10.102.183.112   <none>        80:23456/TCP,8080:23457/TCP   8m
...
```

Now open [http://master_node_ip:23457, e.g. http://16.21.206.156:23457/dashboard/ , note this uses the IP] in a browser to reach the traefik dashboard

# 四、The Ingress Object

Above we reached the traefik Dashboard through a NodePort; how do we reach it through an ingress instead? First, create an ingress object: (ingress.yaml)

```
# vim ingress.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik-ui.test.com
    http:
      paths:
      - backend:
          serviceName: traefik-ingress-service
          #servicePort: 8080
          servicePort: admin  #matches the port name in the service above
```

Then create the ingress object for the traefik dashboard:

```
$ kubectl create -f ingress.yaml

ingress.extensions "traefik-web-ui" created
```

要注意上面的 ingress 对象的规则,特别是 rules 区域,我们这里是要为 traefik 的 dashboard 建立一个 ingress 对象,所以这里的 serviceName 对应的是上面我们创建的 traefik-ingress-service,端口也要注意对应 8080 端口,为了避免端口更改,这里的 servicePort 的值也可以替换成上面定义的 port 的名字:admin

Once it is created, how do we test it?

- 1. First, add a mapping from traefik-ui.test.com to the master node's external IP in your local /etc/hosts.

- 2. Second, open http://traefik-ui.test.com in a browser. We do not get the expected dashboard, because traefik was deployed behind a NodePort Service, so the target is only reachable through port 23456: [http://traefik-ui.test.com:23456](http://traefik-ui.test.com:23456). With the port appended, the dashboard loads, and it now shows one extra entry: exactly the ingress object we created above. Switching to the HEALTH tab shows the overall health of the services traefik is proxying.

Note why the port here is 23456 and not 23457: the request goes through the domain configured in the ingress, so the host's port 23456 maps to port 80 of the traefik-ingress-controller pod, and the ingress then proxies on to the Service's backing pods. If the controller pod maps a host port directly (the hostPort: 80 parameter covered below), the :23456 can be dropped. Because this path passes through several layers of proxying, direct NodePort access performs somewhat better, but it becomes tedious to maintain once services multiply.

- 3. Third, we can now reach our service via a custom domain plus a port, yet services are normally served on bare http or https URLs with no port, since ports are a pain to remember. The fix is to map traefik's core port onto port 80 of the master node, because http defaults to port 80. Our NodePort Service cannot claim port 80, so instead we specify a hostPort directly on the Pod, changing the container ports in the traefik.yaml above:

```
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
  containerPort: 80
  hostPort: 80
- name: admin
  containerPort: 8080
```

With hostPort: 80 added, update the application:

```
$ kubectl apply -f traefik.yaml
```

After the update completes, test directly with the bare domain in the browser:
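
As a command-line alternative to the browser, curl can pin the domain to the master without editing /etc/hosts; a sketch, with 16.21.206.156 standing in for your master's external IP:

```
curl --resolve traefik-ui.test.com:80:16.21.206.156 http://traefik-ui.test.com/dashboard/
```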

- 4. Fourth, if you own a domain you can add a DNS record resolving it to the master's external IP, so anyone can reach your exposed services by name. If you have several edge nodes, run an ingress-controller on each and put a load balancer such as nginx in front, with all edge nodes as its backends; this gives the ingress-controller both high availability and load balancing.

At this point we have successfully exposed a service externally through an ingress object; the next lesson looks at more traefik usage in detail.

# 5. Combined traefik manifest

1. Create the file traefik-controller-ingress.yaml
```
vim traefik-controller-ingress.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - operator: "Exists"
      nodeSelector:
        kubernetes.io/hostname: master  # By default the master is unschedulable; the tolerations above allow pods onto it. Use your own master node's hostname here (check with kubectl get nodes)
      containers:
      - image: traefik:v1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO

---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      # This port is the traefik ingress-controller's service port
      port: 80
      name: web
      # The NODE_PORT_RANGE set in the cluster hosts file defines the usable NodePort range;
      # pick a free port from the default 20000-40000 to expose the ingress-controller externally
      nodePort: 23456
    - protocol: TCP
      # This port serves traefik's admin web UI
      port: 8080
      name: admin
      nodePort: 23457
  type: NodePort

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik-ui.test.com
    http:
      paths:
      - backend:
          serviceName: traefik-ingress-service
          #servicePort: 8080
          servicePort: admin  # matches the port name defined in the Service above
```

2. Apply it

```
$ kubectl apply -f traefik-controller-ingress.yaml
```

3. Access test

```
http://traefik-ui.test.com    (with the domain bound to the master's public IP or VIP)
```
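
Lacking public DNS, a local hosts entry is enough for this test; a minimal sketch (16.21.206.156 stands in for your master's public IP or VIP):

```
echo "16.21.206.156 traefik-ui.test.com" >> /etc/hosts
curl http://traefik-ui.test.com/dashboard/
```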

https://blog.csdn.net/oyym_mv/article/details/86986510  Kubernetes in practice (11): using traefik as a reverse proxy (DaemonSet mode)


================================================
FILE: components/ingress/2.ingress tls配置.md
================================================
# 1. Ingress TLS

The previous lesson covered installing and using traefik and configuring a simple ingress. This lesson looks at ingress TLS and at using path rules in ingress objects.

# 2. TLS authentication

Most services today are accessed over https, so in this lesson we use a self-signed certificate. A CA certificate bought from a proper authority is better still, since any visitor's browser will then trust your service. Generate the certificate with the following openssl command:
```
mkdir -p /ssl/
cd /ssl/
openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt
```
With the certificate ready, use kubectl to create a secret object that stores it:
```
kubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system
```
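
You can verify that the secret holds both files before mounting it; for example:

```
kubectl describe secret traefik-cert -n kube-system
# Data should list tls.crt and tls.key
```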
# 3. Configuring Traefik

So far we have run Traefik with its default configuration; now we configure Traefik to support https:

```
mkdir -p /config/
cd /config/

cat > traefik.toml <<\EOF
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      CertFile = "/ssl/tls.crt"
      KeyFile = "/ssl/tls.key"
EOF
 
```

The configuration above defines http and https entry points and forces http to redirect to https, so every service exposed through traefik is served over https. Serving https requires the certificate referenced by CertFile and KeyFile, but the traefik pod does not contain those files, so we must mount the certificate generated above into the Pod; recall that a secret object can be mounted into a Pod as a volume. As for getting traefik.toml itself into the pod, remember ConfigMaps: we can mount the traefik.toml configuration file through a ConfigMap object:

```
$ kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system

$ kubectl get configmap -n kube-system
NAME                                 DATA   AGE
coredns                              1      11h
extension-apiserver-authentication   6      11h
kube-flannel-cfg                     2      11h
kube-proxy                           2      11h
kubeadm-config                       2      11h
kubelet-config-1.15                  1      11h
traefik-conf                         1      10s

```

Now we can update the traefik pod YAML from the previous lesson:

```
cd /data/components/ingress/

cat > traefik.yaml <<\EOF
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
      - name: ssl
        secret:
          secretName: traefik-cert
      - name: config
        configMap:
          name: traefik-conf
      tolerations:
      - operator: "Exists"
      nodeSelector:
        kubernetes.io/hostname: linux-node1.example.com
      containers:
      - image: traefik
        name: traefik-ingress-lb
        volumeMounts:
        - mountPath: "/ssl"  #这里注意挂载的路径
          name: "ssl"
        - mountPath: "/config" #这里注意挂载的路径
          name: "config"
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --configfile=/config/traefik.toml
        - --api
        - --kubernetes
        - --logLevel=INFO
EOF

```

Compared with the previous version, we added the 443 port and pointed traefik at the traefik.toml configuration file via the configfile startup argument; the file itself is mounted in through the volume. Then update the traefik pod:

```
kubectl apply -f traefik.yaml
kubectl logs -f traefik-ingress-controller-7dcfd9c6df-v58k7 -n kube-system
```

After the update completes, check the traefik pod's logs; if they show messages like those, the update succeeded. Visiting the traefik dashboard now redirects to the https address and raises a certificate warning, because our self-signed certificate is not trusted by the browser; a certificate bought from a proper authority would show the familiar green lock instead:

```
https://traefik.k8s.com/dashboard/
```

# 4. Configuring the ingress

The TLS setup above is already working. Next, an example illustrates how path is used in an ingress: we deploy three simple web services, each identified by an environment variable that tells it which service it is: (backend.yaml)

```
cd /data/components/ingress/

cat > backend.yaml <<\EOF
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: svc1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: svc1
    spec:
      containers:
      - name: svc1
        image: cnych/example-web-service
        env:
        - name: APP_SVC
          value: svc1
        ports:
        - containerPort: 8080
          protocol: TCP

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: svc2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: svc2
    spec:
      containers:
      - name: svc2
        image: cnych/example-web-service
        env:
        - name: APP_SVC
          value: svc2
        ports:
        - containerPort: 8080
          protocol: TCP

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: svc3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: svc3
    spec:
      containers:
      - name: svc3
        image: cnych/example-web-service
        env:
        - name: APP_SVC
          value: svc3
        ports:
        - containerPort: 8080
          protocol: TCP

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: svc1
  name: svc1
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: svc1

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: svc2
  name: svc2
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: svc2

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: svc3
  name: svc3
spec:
  type: ClusterIP
  ports:
  - port: 8080
    name: http
  selector:
    app: svc3
EOF

```

As you can see, we defined three Deployments, each matched by its own Service:

```
kubectl create -f backend.yaml
```

Then we create an ingress object to reach the three services above: (example-ingress.yaml)

```
cat > example-ingress.yaml <<\EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-web-app
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - host: example.k8s.com
    http:
      paths:
      - path: /s1
        backend:
          serviceName: svc1
          servicePort: 8080
      - path: /s2
        backend:
          serviceName: svc2
          servicePort: 8080
      - path: /
        backend:
          serviceName: svc3
          servicePort: 8080
EOF


```

Note that this ingress object differs from the previous one in that we add path definitions; when a path is not specified it defaults to '/'. Create the ingress object:

```
kubectl create -f example-ingress.yaml
```

Now add a local hosts entry for the domain example.k8s.com, then open it in a browser; note that it too redirects to the https page by default.

References:

https://www.qikqiak.com/k8s-book/docs/41.ingress%20config.html


================================================
FILE: components/ingress/3.ingress-http使用示例.md
================================================
# 1. ingress-http test example

## 1. Three key points:

    Note that all 3 resources must share the same namespace: kube-system

    Deployment

    Service

    Ingress

```
$ vim nginx-deployment-http.yaml

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: kube-system
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.5
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: kube-system
  annotations:
    traefik.ingress.kubernetes.io/load-balancer-method: drr  # dynamic weighted round-robin scheduling
spec:
  selector:
    app: nginx-pod
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: k8s.nginx.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
```

## 2. Create the resources

```
$ kubectl apply -f nginx-deployment-http.yaml

deployment.apps/nginx-deployment created
service/nginx-service created
ingress.extensions/nginx-ingress created
```

## 3. Access the newly created resources

First, find which node the traefik-ingress pod landed on. Here it landed on node 10.199.1.159, so we bind that node's public IP, assumed here to be 16.21.26.139:

```
16.21.26.139 k8s.nginx.com
```

```
$ kubectl get pod -A -o wide|grep traefik-ingress
kube-system   traefik-ingress-controller-7d454d7c68-8qpjq   1/1     Running   0          21h   10.46.2.10    10.199.1.159   <none>           <none>
```
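
With the hosts entry above in place, the ingress can also be exercised from the shell; a sketch assuming the controller exposes port 80 on that node:

```
curl -H "Host: k8s.nginx.com" http://16.21.26.139/
# with the drr annotation, repeated requests are spread across the two nginx pods
```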

  ![ingress example 1](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-01.png)


## 4. Clean up

### 1. Delete the deployment
```
# list deployments
$ kubectl get deploy -A

NAMESPACE     NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns                      2/2     2            2           3d
kube-system   heapster                     1/1     1            1           3d
kube-system   kubernetes-dashboard         1/1     1            1           3d
kube-system   metrics-server               1/1     1            1           3d
kube-system   nginx-deployment             2/2     2            2           25m
kube-system   traefik-ingress-controller   1/1     1            1           2d22h

# delete the deployment
$ kubectl delete deploy nginx-deployment -n kube-system

deployment.extensions "nginx-deployment" deleted
```

### 2. Delete the service
```
# list services
$ kubectl get svc -A

NAMESPACE     NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
default       kubernetes                ClusterIP   10.44.0.1       <none>        443/TCP                                     3d
kube-system   heapster                  ClusterIP   10.44.158.46    <none>        80/TCP                                      3d
kube-system   kube-dns                  ClusterIP   10.44.0.2       <none>        53/UDP,53/TCP,9153/TCP                      3d
kube-system   kubernetes-dashboard      NodePort    10.44.176.99    <none>        443:27008/TCP                               3d
kube-system   metrics-server            ClusterIP   10.44.40.157    <none>        443/TCP                                     3d
kube-system   nginx-service             ClusterIP   10.44.148.252   <none>        80/TCP                                      28m
kube-system   traefik-ingress-service   NodePort    10.44.67.195    <none>        80:23456/TCP,443:23457/TCP,8080:33192/TCP   2d22h

# delete the service
$ kubectl delete svc nginx-service -n kube-system

service "nginx-service" deleted
```

### 3. Delete the ingress

```
# list ingresses
$ kubectl get ingress -A

NAMESPACE     NAME                   HOSTS                 ADDRESS   PORTS   AGE
kube-system   kubernetes-dashboard   dashboard.test.com              80      2d22h
kube-system   nginx-ingress          k8s.nginx.com                   80      29m
kube-system   traefik-web-ui         traefik-ui.test.com             80      2d22h

# delete the ingress
$ kubectl delete ingress nginx-ingress -n kube-system

ingress.extensions "nginx-ingress" deleted
```


References:

https://xuchao918.github.io/2019/03/01/Kubernetes-traefik-ingress%E4%BD%BF%E7%94%A8/     Using Kubernetes traefik ingress


================================================
FILE: components/ingress/4.ingress-https使用示例.md
================================================
# 1. ingress-https test example

1. TLS authentication

Most services today are accessed over https, so in this lesson we use a self-signed certificate. A CA certificate bought from a proper authority is better still, since any visitor's browser will then trust your service. Generate the certificate with the following openssl command:

```
mkdir -p /ssl-k8s/
cd /ssl-k8s/
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_k8s.key -out tls_k8s.crt -subj "/CN=hello.k8s.com"
```

With the certificate ready, use kubectl to create a secret object that stores it: (create this manually up front)

```
kubectl create secret generic traefik-k8s --from-file=tls_k8s.crt --from-file=tls_k8s.key -n kube-system
```

```
# vim /config/traefik.toml

defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
    address = ":80"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/tls_first.crt"
        KeyFile = "/ssl/tls_first.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/tls_second.crt"
        KeyFile = "/ssl/tls_second.key"
```

## 1. Five key points:

    Note that all 5 resources must share the same namespace: kube-system

    secret          --- a secret object that stores the SSL certificates

    configmap       --- a configmap holds one or more key/value entries
    
    Deployment

    Service

    Ingress

## 2. Combined secret, configmap, and traefik manifest
```
# vim traefik-controller-https.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.toml: |
    insecureSkipVerify = true
    defaultEntryPoints = ["http", "https"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
      [entryPoints.https]
        address = ":443"
        [entryPoints.https.tls]
          [[entryPoints.https.tls.certificates]]
            CertFile = "/ssl/tls_first.crt"
            KeyFile = "/ssl/tls_first.key"
          [[entryPoints.https.tls.certificates]]
            CertFile = "/ssl/tls_second.crt"
            KeyFile = "/ssl/tls_second.key"
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
      - name: ssl
        secret:
          secretName: traefik-cert
      - name: config
        configMap:
          name: traefik-conf
      #nodeSelector:
      #  node-role.kubernetes.io/traefik: "true"
      containers:
      - image: traefik:v1.7.12
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        volumeMounts:
        - mountPath: "/ssl"
          name: "ssl"
        - mountPath: "/config"
          name: "config"
        resources:
          limits:
            cpu: 1000m
            memory: 800Mi
          requests:
            cpu: 500m
            memory: 600Mi
        args:
        - --configfile=/config/traefik.toml
        - --api
        - --kubernetes
        - --logLevel=INFO
        securityContext:
          capabilities:
            drop:
              - ALL
            add:
              - NET_BIND_SERVICE
        ports:
          - name: http
            containerPort: 80
            hostPort: 80
          - name: https
            containerPort: 443
            hostPort: 443
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      # This port is the traefik ingress-controller's service port
      port: 80
      # The NODE_PORT_RANGE set in the cluster hosts file defines the usable NodePort range;
      # pick a free port from the default 20000-40000 to expose the ingress-controller externally
      nodePort: 23456
      name: http
    - protocol: TCP
      # This port is traefik's https service port
      port: 443
      nodePort: 23457
      name: https
    - protocol: TCP
      # This port serves traefik's admin web UI
      port: 8080
      name: admin
  type: NodePort
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
```
# 2. Application test example
```
$ vim nginx-deployment-https.yaml

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: kube-system
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.5
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: kube-system
  annotations:
    traefik.ingress.kubernetes.io/load-balancer-method: drr  # dynamic weighted round-robin scheduling
spec:
  selector:
    app: nginx-pod
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: k8s.nginx.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  tls:
  - secretName: traefik-k8s
```

## 2. Create the resources

```
$ kubectl apply -f nginx-deployment-https.yaml

deployment.apps/nginx-deployment created
service/nginx-service created
ingress.extensions/nginx-ingress created
```

## 3. Access the newly created resources

First, find which node the traefik-ingress pod landed on. Here it landed on node 10.199.1.159, so we bind that node's public IP, assumed here to be 16.21.26.139:

```
16.21.26.139 k8s.nginx.com
```

```
$ kubectl get pod -A -o wide|grep traefik-ingress
kube-system   traefik-ingress-controller-7d454d7c68-8qpjq   1/1     Running   0          21h   10.46.2.10    10.199.1.159   <none>           <none>
```
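
Because the certificate is self-signed, curl needs -k to skip verification; a sketch using the https NodePort 23457 defined above and the hosts entry from the previous step:

```
curl -k https://k8s.nginx.com:23457/
```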

  ![ingress example 1](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-01.png)


## 4. Clean up

### 1. Delete the deployment
```
# list deployments
$ kubectl get deploy -A

NAMESPACE     NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns                      2/2     2            2           3d
kube-system   heapster                     1/1     1            1           3d
kube-system   kubernetes-dashboard         1/1     1            1           3d
kube-system   metrics-server               1/1     1            1           3d
kube-system   nginx-deployment             2/2     2            2           25m
kube-system   traefik-ingress-controller   1/1     1            1           2d22h

# delete the deployment
$ kubectl delete deploy nginx-deployment -n kube-system

deployment.extensions "nginx-deployment" deleted
```

### 2. Delete the service
```
# list services
$ kubectl get svc -A

NAMESPACE     NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
default       kubernetes                ClusterIP   10.44.0.1       <none>        443/TCP                                     3d
kube-system   heapster                  ClusterIP   10.44.158.46    <none>        80/TCP                                      3d
kube-system   kube-dns                  ClusterIP   10.44.0.2       <none>        53/UDP,53/TCP,9153/TCP                      3d
kube-system   kubernetes-dashboard      NodePort    10.44.176.99    <none>        443:27008/TCP                               3d
kube-system   metrics-server            ClusterIP   10.44.40.157    <none>        443/TCP                                     3d
kube-system   nginx-service             ClusterIP   10.44.148.252   <none>        80/TCP                                      28m
kube-system   traefik-ingress-service   NodePort    10.44.67.195    <none>        80:23456/TCP,443:23457/TCP,8080:33192/TCP   2d22h

# delete the service
$ kubectl delete svc nginx-service -n kube-system

service "nginx-service" deleted
```

### 3. Delete the ingress

```
# list ingresses
$ kubectl get ingress -A

NAMESPACE     NAME                   HOSTS                 ADDRESS   PORTS   AGE
kube-system   kubernetes-dashboard   dashboard.test.com              80      2d22h
kube-system   nginx-ingress          k8s.nginx.com                   80      29m
kube-system   traefik-web-ui         traefik-ui.test.com             80      2d22h

# delete the ingress
$ kubectl delete ingress nginx-ingress -n kube-system

ingress.extensions "nginx-ingress" deleted
```


References:

https://xuchao918.github.io/2019/03/01/Kubernetes-traefik-ingress%E4%BD%BF%E7%94%A8/     Using Kubernetes traefik ingress

http://docs.kubernetes.org.cn/558.html  


================================================
FILE: components/ingress/5.hello-tls.md
================================================
# Certificate files

1. Generate the certificates
```
mkdir -p /ssl/{default,first,second}
cd /ssl/default/
openssl req -x509 -nodes -days 165 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=k8s.test.com"
kubectl -n kube-system create secret tls traefik-cert --key=tls.key --cert=tls.crt

cd /ssl/first/
openssl req -x509 -nodes -days 265 -newkey rsa:2048 -keyout tls_first.key -out tls_first.crt -subj "/CN=k8s.first.com"
kubectl create secret generic first-k8s --from-file=tls_first.crt --from-file=tls_first.key -n kube-system

cd /ssl/second/
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_second.key -out tls_second.crt -subj "/CN=k8s.second.com"
kubectl create secret generic second-k8s --from-file=tls_second.crt --from-file=tls_second.key -n kube-system

# inspect the certificates
kubectl get secret traefik-cert first-k8s second-k8s -n kube-system
kubectl describe secret traefik-cert first-k8s second-k8s -n kube-system
```

2. Delete the certificates

```
$ kubectl delete secret traefik-cert first-k8s second-k8s -n kube-system

secret "second-k8s" deleted
secret "traefik-cert" deleted
secret "first-k8s" deleted
```

# Certificate configuration

1. Create the ConfigMap (cm)

```
mkdir -p /config/
cd /config/

$  vim traefik.toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
    address = ":80"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/default/tls.crt"
        KeyFile = "/ssl/default/tls.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/first/tls_first.crt"
        KeyFile = "/ssl/first/tls_first.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/second/tls_second.crt"
        KeyFile = "/ssl/second/tls_second.key"
        
$ kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system

$ kubectl get configmap traefik-conf -n kube-system

$ kubectl describe cm traefik-conf -n kube-system
```
2. Delete the ConfigMap (cm)

```
$ kubectl delete cm traefik-conf -n kube-system
```

# The traefik-ingress-controller manifest

1. Create the file
```
$ vim traefik-controller-tls.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.toml: |
    insecureSkipVerify = true
    defaultEntryPoints = ["http", "https"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
      [entryPoints.https]
        address = ":443"
        [entryPoints.https.tls]
          [[entryPoints.https.tls.certificates]]
            CertFile = "/ssl/default/tls.crt"
            KeyFile = "/ssl/default/tls.key"
          [[entryPoints.https.tls.certificates]]
            CertFile = "/ssl/first/tls_first.crt"
            KeyFile = "/ssl/first/tls_first.key"
          [[entryPoints.https.tls.certificates]]
            CertFile = "/ssl/second/tls_second.crt"
            KeyFile = "/ssl/second/tls_second.key"
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
      - name: ssl
        secret:
          secretName: traefik-cert
      - name: config
        configMap:
          name: traefik-conf
      #nodeSelector:
      #  node-role.kubernetes.io/traefik: "true"
      containers:
      - image: traefik:v1.7.12
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        volumeMounts:
        - mountPath: "/ssl"
          name: "ssl"
        - mountPath: "/config"
          name: "config"
        resources:
          limits:
            cpu: 1000m
            memory: 800Mi
          requests:
            cpu: 500m
            memory: 600Mi
        args:
        - --configfile=/config/traefik.toml
        - --api
        - --kubernetes
        - --logLevel=INFO
        securityContext:
          capabilities:
            drop:
              - ALL
            add:
              - NET_BIND_SERVICE
        ports:
          - name: http
            containerPort: 80
            hostPort: 80
          - name: https
            containerPort: 443
            hostPort: 443
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      # This port is the traefik ingress-controller's service port
      port: 80
      # The NODE_PORT_RANGE set in the cluster hosts file defines the usable NodePort range;
      # pick a free port from the default 20000-40000 to expose the ingress-controller externally
      nodePort: 23456
      name: http
    - protocol: TCP
      # This port is traefik's https service port
      port: 443
      nodePort: 23457
      name: https
    - protocol: TCP
      # This port serves traefik's admin web UI
      port: 8080
      name: admin
  type: NodePort
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
```

2. Apply it
```
$ kubectl apply -f traefik-controller-tls.yaml 

configmap/traefik-conf created
deployment.apps/traefik-ingress-controller created
service/traefik-ingress-service created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
serviceaccount/traefik-ingress-controller created

# delete the resources
$ kubectl delete -f traefik-controller-tls.yaml 
```
# Test the deployment and ingress
```
$ vim nginx-ingress-deploy.yaml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: kube-system
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.5
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: kube-system
  annotations:
    traefik.ingress.kubernetes.io/load-balancer-method: drr  # dynamic weighted round-robin scheduling
spec:
  selector:
    app: nginx-pod
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  tls:
  - secretName: first-k8s
  - secretName: second-k8s
  rules:
  - host: k8s.first.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  - host: k8s.second.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
          
$ kubectl apply -f nginx-ingress-deploy.yaml 
$ kubectl delete -f nginx-ingress-deploy.yaml 
```


================================================
FILE: components/ingress/6.ingress-https使用示例.md
================================================
# ingress-https test example

# 1. Certificate files

## 1. TLS authentication

Most services today are accessed over https, so in this lesson we use a self-signed certificate. A CA certificate bought from a proper authority is better still, since any visitor's browser will then trust your service. Generate the certificate with the following openssl command:
```
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=hello.test.com"
```

With the certificate ready, use kubectl to create a secret object that stores it: (create this manually up front)

```
kubectl -n kube-system create secret tls traefik-cert --key=tls.key --cert=tls.crt
```
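
Before wiring the secret into traefik, it is worth sanity-checking the certificate's subject and validity window; for example:

```
openssl x509 -in tls.crt -noout -subject -dates
# expect subject CN=hello.test.com plus the notBefore/notAfter dates
```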

## 2. Creating multiple certificates

```
mkdir -p /ssl/{default,first,second}
cd /ssl/default/
openssl req -x509 -nodes -days 165 -newkey rsa:2048 -keyout tls_default.key -out tls_default.crt -subj "/CN=k8s.test.com"
kubectl -n kube-system create secret tls traefik-cert --key=tls_default.key --cert=tls_default.crt

cd /ssl/first/
openssl req -x509 -nodes -days 265 -newkey rsa:2048 -keyout tls_first.key -out tls_first.crt -subj "/CN=k8s.first.com"
kubectl -n kube-system create secret tls first-k8s --key=tls_first.key --cert=tls_first.crt

cd /ssl/second/
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_second.key -out tls_second.crt -subj "/CN=k8s.second.com"
kubectl -n kube-system create secret tls second-k8s --key=tls_second.key --cert=tls_second.crt

# inspect the certificates
kubectl get secret traefik-cert first-k8s second-k8s -n kube-system
kubectl describe secret traefik-cert first-k8s second-k8s -n kube-system
```

## 3. Delete the certificates

```
$ kubectl delete secret traefik-cert first-k8s second-k8s -n kube-system

secret "second-k8s" deleted
secret "traefik-cert" deleted
secret "first-k8s" deleted
```

## 4. Five key points
```

Note that all 5 resources must share the same namespace: kube-system

secret          --- a secret object that stores the SSL certificates

configmap       --- a configmap holds one or more key/value entries

Deployment

Service

Ingress

```

# 2. Certificate configuration

## 1. Create the ConfigMap (cm)

```
mkdir -p /config/
cd /config/

$  vim traefik.toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
    address = ":80"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/default/tls_default.crt"
        KeyFile = "/ssl/default/tls_default.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/first/tls_first.crt"
        KeyFile = "/ssl/first/tls_first.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/second/tls_second.crt"
        KeyFile = "/ssl/second/tls_second.key"
        
$ kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system

$ kubectl get configmap traefik-conf -n kube-system

$ kubectl describe cm traefik-conf -n kube-system
```
## 2. Delete the ConfigMap (cm)

```
$ kubectl delete cm traefik-conf -n kube-system
```

# 3. The traefik-ingress-controller manifest

## 1. Create the file
```
$ cd /config/

$ vim traefik-controller-tls.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.toml: |
    insecureSkipVerify = true
    defaultEntryPoints = ["http", "https"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
      [entryPoints.https]
        address = ":443"
        [entryPoints.https.tls]
          [[entryPoints.https.tls.certificates]]
            CertFile = "/ssl/default/tls.crt"
            KeyFile = "/ssl/default/tls.key"
          [[entryPoints.https.tls.certificates]]
            CertFile = "/ssl/first/tls_first.crt"
            KeyFile = "/ssl/first/tls_first.key"
          [[entryPoints.https.tls.certificates]]
            CertFile = "/ssl/second/tls_second.crt"
            KeyFile = "/ssl/second/tls_second.key"
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
      - name: ssl
        secret:
          secretName: traefik-cert
      - name: config
        configMap:
          name: traefik-conf
      #nodeSelector:
      #  node-role.kubernetes.io/traefik: "true"
      tolerations:
      - operator: "Exists"
      nodeSelector:
        kubernetes.io/hostname: 10.198.1.156    # pin the traefik-ingress-controller to this node
      containers:
      - image: traefik:v1.7.12
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        volumeMounts:
        - mountPath: "/ssl"
          name: "ssl"
        - mountPath: "/config"
          name: "config"
        resources:
          limits:
            cpu: 1000m
            memory: 800Mi
          requests:
            cpu: 500m
            memory: 600Mi
        args:
        - --configfile=/config/traefik.toml
        - --api
        - --kubernetes
        - --logLevel=INFO
        securityContext:
          capabilities:
            drop:
              - ALL
            add:
              - NET_BIND_SERVICE
        ports:
          - name: http
            containerPort: 80
            hostPort: 80
          - name: https
            containerPort: 443
            hostPort: 443
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      # This port is the traefik ingress-controller's service port
      port: 80
      # The NODE_PORT_RANGE set in the cluster hosts file defines the usable NodePort range;
      # pick a free port from the default 20000-40000 to expose the ingress-controller externally
      nodePort: 23456
      name: http
    - protocol: TCP
      port: 443
      nodePort: 23457
      name: https
    - protocol: TCP
      # This port serves traefik's admin web UI
      port: 8080
      name: admin
  type: NodePort
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
```

## 2. Apply it
```
$ kubectl apply -f traefik-controller-tls.yaml 

configmap/traefik-conf created
deployment.apps/traefik-ingress-controller created
service/traefik-ingress-service created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
serviceaccount/traefik-ingress-controller created

# delete the resources
$ kubectl delete -f traefik-controller-tls.yaml 
```
# 4. Creating an https ingress from the command line
```
# create a sample application
$ kubectl run test-hello --image=nginx:alpine --port=80 --expose -n kube-system

# delete the sample application (kubectl run creates a deployment resource by default)
$ kubectl delete deployment test-hello -n kube-system
$ kubectl delete svc test-hello -n kube-system

# hello-tls-ingress example
$ cd /config/
$ vim hello-tls.ing.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-tls-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: k8s.test.com
    http:
      paths:
      - backend:
          serviceName: test-hello
          servicePort: 80
  tls:
  - secretName: traefik-cert
  
# create the https ingress
$ kubectl apply -f /config/hello-tls.ing.yaml

# per the hello example, the matching secret traefik-cert must exist in the kube-system namespace (already created at the start, no need to recreate)
$ kubectl -n kube-system create secret tls traefik-cert --key=tls_default.key --cert=tls_default.crt

# delete the https ingress
$ kubectl delete -f /config/hello-tls.ing.yaml
```
# Test access (find the node the traefik-controller pod runs on, bind that node's IP, then visit the URL)

https://k8s.test.com:23457

  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-02.png)

# 5. Test the deployment and ingress
```
$ vim nginx-ingress-deploy.yaml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: kube-system
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.5
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: kube-system
  annotations:
    traefik.ingress.kubernetes.io/load-balancer-method: drr  # dynamic weighted round-robin scheduling
spec:
  selector:
    app: nginx-pod
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  tls:
  - secretName: first-k8s
  - secretName: second-k8s
  rules:
  - host: k8s.first.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  - host: k8s.second.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
          
$ kubectl apply -f nginx-ingress-deploy.yaml 
$ kubectl delete -f nginx-ingress-deploy.yaml
```

# Access test

https://k8s.first.com:23457/

  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-03.png)

https://k8s.second.com:23457/

  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-04.png)


References:

https://xuchao918.github.io/2019/03/01/Kubernetes-traefik-ingress%E4%BD%BF%E7%94%A8/ Using Kubernetes traefik ingress

http://docs.kubernetes.org.cn/558.html


================================================
FILE: components/ingress/README.md
================================================
References:

https://segmentfault.com/a/1190000019908991  k8s ingress principles and ingress-nginx deployment testing

https://www.cnblogs.com/tchua/p/11174386.html  Highly available Ingress deployment on a Kubernetes cluster


================================================
FILE: components/ingress/nginx-ingress/README.md
================================================



================================================
FILE: components/ingress/traefik-ingress/1.traefik反向代理Deamonset模式.md
================================================
# 1. Deploying traefik-controller-ingress in DaemonSet mode

https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-ds.yaml

This deployment uses a DaemonSet and needs only traefik-ds.yaml, traefik-rbac.yaml, and ui.yaml.

```bash
kubectl delete -f traefik-ds.yaml

rm -f ./traefik-ds.yaml

cat >traefik-ds.yaml<<\EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      #======= nodeSelector: schedule only on the master node =======
      tolerations:
      - operator: "Exists"
      nodeSelector:
        kubernetes.io/role: master  # By default the master is unschedulable; the tolerations above allow pods onto it. Use your own master's label (check with kubectl get nodes --show-labels)
      #===================================================
      containers:
      - image: traefik:v1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
EOF

kubectl apply -f traefik-ds.yaml
```
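
After applying, it is worth confirming that the DaemonSet pod was scheduled onto the master and is running; for example:

```bash
kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb -o wide
```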

# 2. traefik-rbac configuration

https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-rbac.yaml

```
kubectl delete -f traefik-rbac.yaml

rm -f ./traefik-rbac.yaml

cat >traefik-rbac.yaml<<\EOF
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
    - extensions
    resources:
    - ingresses/status
    verbs:
    - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
EOF

kubectl apply -f traefik-rbac.yaml
```

# 3. Proxying the traefik UI through traefik itself

https://github.com/containous/traefik/blob/v1.7/examples/k8s/ui.yaml

1. Proxy option one

```bash
kubectl delete -f ui.yaml

rm -f ./ui.yaml

cat >ui.yaml<<\EOF
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.devops.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
---
EOF

kubectl apply -f ui.yaml
```

2. Proxy option two

```
kubectl delete -f ui.yaml

rm -f ./ui.yaml

cat >ui.yaml<<\EOF
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      # This port is the traefik ingress-controller's service port
      port: 80
      name: web
    - protocol: TCP
      # This port serves traefik's admin web UI
      port: 8080
      name: admin
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik-ui.devops.com
    http:
      paths:
      - backend:
          serviceName: traefik-ingress-service
          #servicePort: 8080
          servicePort: admin  # matches the port name defined in the Service above
---
EOF

kubectl apply -f ui.yaml
```

# 4. Access test

`http://traefik-ui.devops.com`
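
To make the test reproducible from the shell, point the domain at any node running the DaemonSet via a hosts entry; a sketch where <node_ip> is a placeholder for that node's address:

```bash
echo "<node_ip> traefik-ui.devops.com" >> /etc/hosts
curl http://traefik-ui.devops.com/
```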

# 5. All-in-one manifest
```
kubectl delete -f all-ds.yaml

rm -f ./all-ds.yaml

cat >all-ds.yaml<<\EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      #======= nodeSelector: schedule only on the master node =======
      tolerations:
      - operator: "Exists"
      nodeSelector:
        kubernetes.io/role: master  # By default the master is unschedulable; the tolerations above allow pods onto it. Use your own master's label (check with kubectl get nodes --show-labels)
      #===================================================
      containers:
      - image: traefik:v1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
    - extensions
    resources:
    - ingresses/status
    verbs:
    - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.devops.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
EOF

kubectl apply -f all-ds.yaml
```

References:

https://blog.csdn.net/oyym_mv/article/details/86986510  Using traefik as a reverse proxy in Kubernetes (DaemonSet mode)

https://www.cnblogs.com/twodoge/p/11663006.html  Second gotcha: with the newer apps/v1 API, selector becomes a required field in the YAML


================================================
FILE: components/ingress/traefik-ingress/2.traefik反向代理Deamonset模式TLS.md
================================================
# Ingress-HTTPS test example

# 1. Certificate files

## 1. TLS authentication

Most services today are accessed over https, so in this lesson we use a self-signed certificate. A CA certificate bought from a proper authority is better still, since any visitor's browser will then trust your service. Generate the certificate with the following openssl command:
```bash
rm -rf /etc/certs/ssl/

mkdir -p /etc/certs/ssl/
cd /etc/certs/ssl/
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=hello.test.com"
```

With the certificate ready, use kubectl to create a secret object that stores it: (create this manually up front)

```bash
kubectl -n kube-system create secret tls traefik-cert --key=tls.key --cert=tls.crt
```

## 2. Creating multiple certificates

```bash
kubectl delete secret traefik-cert first-k8s second-k8s -n kube-system

rm -rf /etc/certs/ssl/

mkdir -p /etc/certs/ssl/{default,first,second}
cd /etc/certs/ssl/default/
openssl req -x509 -nodes -days 165 -newkey rsa:2048 -keyout tls_default.key -out tls_default.crt -subj "/CN=*.devops.com"
kubectl -n kube-system create secret tls traefik-cert --key=tls_default.key --cert=tls_default.crt

cd /etc/certs/ssl/first/
openssl req -x509 -nodes -days 265 -newkey rsa:2048 -keyout tls_first.key -out tls_first.crt -subj "/CN=k8s.first.com"
kubectl -n kube-system create secret tls first-k8s --key=tls_first.key --cert=tls_first.crt

cd /etc/certs/ssl/second/
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_second.key -out tls_second.crt -subj "/CN=k8s.second.com"
kubectl -n kube-system create secret tls second-k8s --key=tls_second.key --cert=tls_second.crt

# inspect the certificates
kubectl get secret traefik-cert first-k8s second-k8s -n kube-system
kubectl describe secret traefik-cert first-k8s second-k8s -n kube-system
```
## 3. Five key points
```bash
Note that all 5 resources must share the same namespace: kube-system

secret          --- a secret object that stores the SSL certificates

configmap       --- a configmap holds one or more key/value entries

Deployment  or  DaemonSet

Service

Ingress
```

# 2. Certificate configuration: create the ConfigMap (cm)

1. http and https side by side
```bash
kubectl delete cm traefik-conf -n kube-system

rm -rf /etc/certs/config/

mkdir -p /etc/certs/config/
cd /etc/certs/config/

cat >traefik.toml<<\EOF
# with insecureSkipVerify = true you can write ingress rules whose backend is on 443 (e.g. the dashboard)
insecureSkipVerify = true
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
    address = ":80"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        CertFile = "/etc/certs/ssl/default/tls_default.crt"
        KeyFile = "/etc/certs/ssl/default/tls_default.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/etc/certs/ssl/first/tls_first.crt"
        KeyFile = "/etc/certs/ssl/first/tls_first.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/etc/certs/ssl/second/tls_second.crt"
        KeyFile = "/etc/certs/ssl/second/tls_second.key"
EOF
        
kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system

kubectl get configmap traefik-conf -n kube-system

kubectl describe cm traefik-conf -n kube-system
```
2. Redirect http to https

```bash
kubectl delete cm traefik-conf -n kube-system

rm -rf /etc/certs/config/

mkdir -p /etc/certs/config/
cd /etc/certs/config/

cat >traefik.toml<<\EOF
# Lets traefik skip TLS certificate verification when talking to https backends, so an https backend
# (such as the kubernetes dashboard) can be exposed through traefik just like an http backend
insecureSkipVerify = true
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        CertFile = "/etc/certs/ssl/default/tls_default.crt"
        KeyFile = "/etc/certs/ssl/default/tls_default.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/etc/certs/ssl/first/tls_first.crt"
        KeyFile = "/etc/certs/ssl/first/tls_first.key"
      [[entryPoints.https.tls.certificates]]
        CertFile = "/etc/certs/ssl/second/tls_second.crt"
        KeyFile = "/etc/certs/ssl/second/tls_second.key"
EOF
        
kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system

kubectl get configmap traefik-conf -n kube-system

kubectl describe cm traefik-conf -n kube-system
```
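
Once the controller is redeployed with this ConfigMap, the redirect can be checked from the shell; a sketch assuming k8s.first.com resolves to the ingress node:

```bash
curl -I http://k8s.first.com/
# expect a redirect response with Location: https://k8s.first.com/
```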

# 3. The traefik-ingress-controller manifest

## 1. Create the file
```
kubectl delete -f traefik-controller-tls.yaml 

rm -f ./traefik-controller-tls.yaml

cat >traefik-controller-tls.yaml<<\EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      hostNetwork: true  # use the host's network so the service port (80) is exposed directly on the host; host port 80 must not already be in use
      dnsPolicy: ClusterFirstWithHostNet  # with hostNetwork the container would use the host's DNS and fail to resolve in-cluster services; this keeps it on the K8S DNS
      volumes:
      - name: ssl
        secret:
          secretName: traefik-cert
      - name: config
        configMap:
          name: traefik-conf
      #======= nodeSelector: schedule only on the master node =======
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: "Equal"
        value: ""
        effect: NoSchedule
      nodeSelector:
        node-role.kubernetes.io/master: ""
      #===================================================
      containers:
      - image: traefik:v1.7.12
        name: traefik-ingress-lb
        volumeMounts:
        - mountPath: "/etc/certs/ssl"
          name: "ssl"
        - mountPath: "/etc/certs/config"
          name: "config"
        resources:
          limits:
            cpu: 1000m
            memory: 800Mi
          requests:
            cpu: 500m
            memory: 600Mi
        ports:
          - name: http
            containerPort: 80
            hostPort: 80
          - name: https
            containerPort: 443
            hostPort: 443
          - name: admin
            containerPort: 8080
            hostPort: 8080
        securityContext:
          capabilities:
            drop:
             - ALL
            add:
             - NET_BIND_SERVICE
        args:
        - --configfile=/etc/certs/config/traefik.toml
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
    - extensions
    resources:
    - ingresses/status
    verbs:
    - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: http
    - protocol: TCP
      port: 443
      name: https
    - protocol: TCP
      port: 8080
      name: admin
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  tls:
  - secretName: traefik-cert
  rules:
  - host: traefik-ui.devops.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: admin
---
EOF

kubectl apply -f traefik-controller-tls.yaml
```
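A quick way to verify the rollout (a sketch; the ss check has to run on the master node itself, since hostNetwork binds the ports there):

```bash
# One traefik pod per master node (DaemonSet + nodeSelector + toleration)
kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb -o wide

# hostNetwork: true binds 80/443/8080 directly on the node
ss -lntp | grep -E ':(80|443|8080)'
```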

## 2、Delete the resources
```
kubectl delete -f traefik-controller-tls.yaml 
```
# 4、Creating an HTTPS ingress from the command line
```
# Create a sample application
$ kubectl run test-hello --image=nginx:alpine --port=80 --expose -n kube-system

# Delete the sample application (kubectl run creates a deployment resource by default)
$ kubectl delete deployment test-hello -n kube-system
$ kubectl delete svc test-hello -n kube-system

# hello-tls-ingress example
$ cd /config/
$ vim hello-tls.ing.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-tls-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: k8s.test.com
    http:
      paths:
      - backend:
          serviceName: test-hello
          servicePort: 80
  tls:
  - secretName: traefik-cert
  
# Create the HTTPS ingress
$ kubectl apply -f /config/hello-tls.ing.yaml

# Note: the hello example needs the matching secret traefik-cert in the kube-system namespace (it was already created at the start of this document, no need to create it again)
$ kubectl -n kube-system create secret tls traefik-cert --key=tls_default.key --cert=tls_default.crt

# Delete the HTTPS ingress
$ kubectl delete -f /config/hello-tls.ing.yaml
```
# Test access (find which node the traefik-controller pod runs on, bind that node's IP to the test domain, then open the URL)

https://k8s.test.com:23457

  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-02.png)
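The lookup-and-bind step can be scripted. A minimal sketch; it appends to the local /etc/hosts and assumes kubectl access from the client machine:

```bash
# Find the node running the traefik pod, then that node's InternalIP
NODE=$(kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb \
  -o jsonpath='{.items[0].spec.nodeName}')
NODE_IP=$(kubectl get node "$NODE" \
  -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')

# Bind the test domain to that node on the client machine
echo "$NODE_IP k8s.test.com" | sudo tee -a /etc/hosts
```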

# 5、Testing the Deployment and Ingress
```
$ vim nginx-ingress-deploy.yaml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: kube-system
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.5
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: kube-system
  annotations:
    traefik.ingress.kubernetes.io/load-balancer-method: drr  # dynamic weighted round robin (drr)
spec:
  selector:
    app: nginx-pod
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  tls:
  - secretName: first-k8s
  - secretName: second-k8s
  rules:
  - host: k8s.first.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
  - host: k8s.second.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
          
$ kubectl apply -f nginx-ingress-deploy.yaml 
$ kubectl delete -f nginx-ingress-deploy.yaml
```
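Before testing in the browser, the certificate served for each SNI name can be checked from the command line (a sketch; 192.168.56.11 is assumed to be the node running traefik):

```bash
NODE_IP=192.168.56.11   # assumption: the node running traefik
for h in k8s.first.com k8s.second.com; do
  echo "== $h =="
  # Ask for the host's certificate via SNI and print its subject
  openssl s_client -connect $NODE_IP:443 -servername "$h" </dev/null 2>/dev/null \
    | openssl x509 -noout -subject
done
```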

# Access test

https://k8s.first.com:23457/

  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-03.png)

https://k8s.second.com:23457/

  ![ingress test](https://github.com/Lancger/opsfull/blob/master/images/ingress-k8s-04.png)


References:

https://xuchao918.github.io/2019/03/01/Kubernetes-traefik-ingress%E4%BD%BF%E7%94%A8/ Using Kubernetes traefik ingress

http://docs.kubernetes.org.cn/558.html


================================================
FILE: components/ingress/traefik-ingress/README.md
================================================



================================================
FILE: components/ingress/常用操作.md
================================================
```
[root@master ingress]# kubectl get ingress -A
NAMESPACE     NAME                   HOSTS                 ADDRESS   PORTS   AGE
default       nginx-ingress          k8s.nginx.com                   80      40m
kube-system   kubernetes-dashboard   dashboard.test.com              80      2d21h
kube-system   traefik-web-ui         traefik-ui.test.com             80      2d21h



[root@master ingress]# kubectl delete ingress hello-tls-ingress
ingress.extensions "hello-tls-ingress" deleted

```

# 1、rbac.yaml

First, for safety's sake we use RBAC authorization here: (rbac.yaml)

```
mkdir -p /data/components/ingress

cat > /data/components/ingress/rbac.yaml << \EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
EOF

kubectl create -f /data/components/ingress/rbac.yaml
```
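Whether the binding took effect can be probed with kubectl's impersonation support; each of these should print yes (a quick check, using verbs granted by the ClusterRole above):

```bash
SA=system:serviceaccount:kube-system:traefik-ingress-controller
kubectl auth can-i list services --as="$SA"
kubectl auth can-i watch endpoints --as="$SA"
kubectl auth can-i list ingresses.extensions --as="$SA"
```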

# 2、traefik.yaml

Then use a Deployment to manage the Pod; deploying straight from the official traefik image is enough (traefik.yaml)
```
cat > /data/components/ingress/traefik.yaml << \EOF
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - operator: "Exists"
      nodeSelector:
        kubernetes.io/hostname: linux-node1.example.com  # by default the master is unschedulable; the tolerations above make scheduling there possible
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
EOF

kubectl create -f /data/components/ingress/traefik.yaml

kubectl apply -f /data/components/ingress/traefik.yaml
```
```
Note this part of the yaml file above:
tolerations:
- operator: "Exists"
nodeSelector:
  kubernetes.io/hostname: master

In our particular setup only the master node has outbound internet access, so we pin traefik to the master with the nodeSelector label. So what are the tolerations for? Our cluster was installed with kubeadm, and by default ordinary applications cannot be scheduled onto the master; to allow that, the tolerations attribute shown here must be added. If your cluster is set up differently, simply drop this scheduling policy.

nodeSelector and tolerations are both Pod scheduling policies and will be explained in a later lesson.

```
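To see the taint that makes the toleration necessary, and the exact kubernetes.io/hostname label value to target in nodeSelector (a quick check against the lab node from the yaml above):

```bash
# The NoSchedule taint that blocks ordinary workloads from the master
kubectl describe node linux-node1.example.com | grep -A 2 Taints

# The kubernetes.io/hostname label value to use in nodeSelector
kubectl get node linux-node1.example.com --show-labels
```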
# 3、traefik-ui

traefik also provides a web UI, the service behind port 8080 above; to be able to reach it, we exposed the Service as a NodePort

```
root># kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP           NODE                      NOMINATED NODE   READINESS GATES
traefik-ingress-controller-5b58d5c998-6dn97   1/1     Running   0          88s   10.244.0.2   linux-node1.example.com   <none>           <none>

root># kubectl get svc -n kube-system|grep traefik-ingress-service
traefik-ingress-service   NodePort    10.102.214.49   <none>        80:32472/TCP,8080:32482/TCP   44s

Now open master_node_ip:32482 (the NodePort mapped from 8080 above) in a browser to reach the traefik dashboard
```
http://192.168.56.11:32482/dashboard/

# 4、The Ingress object

Right now we reach the traefik Dashboard through a NodePort; how can it be reached through an ingress instead? First, create an ingress object: (ingress.yaml)

```
cat > /data/components/ingress/ingress.yaml <<\EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.k8s.com
    http:
      paths:
      - backend:
          serviceName: traefik-ingress-service
          #servicePort: 8080
          servicePort: admin  # using the port name "admin" is recommended, so a port change later needs no edit here
EOF

kubectl create -f /data/components/ingress/ingress.yaml
kubectl apply -f /data/components/ingress/ingress.yaml

Pay attention to the rules section of the ingress object above: we are creating an ingress for the traefik dashboard, so serviceName must match the traefik-ingress-service created earlier, and the port must match 8080; to avoid having to edit this when the port changes, servicePort can also be set to the port's name defined above: admin
```
Once it is created, how do we test it?

```
Step 1: in your local /etc/hosts, add a mapping from traefik.k8s.com to the master node's external IP.

Step 2: open http://traefik.k8s.com in a browser. We do not get the dashboard we expected, because traefik was deployed behind a NodePort Service, so the target is only reachable through port 32482: http://traefik.k8s.com:32482

With the port appended the dashboard loads, and it shows one more record: the ingress object we created above. Switching to the HEALTH tab shows the overall health of the services traefik is proxying.

Step 3: we can now reach the service through a custom domain plus a port, but services are normally served on bare http/https domains; hardly anyone appends a port, since ports are a pain to remember. To fix this, we only need to map traefik's core application port to port 80 on the master node, because http defaults to port 80. A NodePort Service cannot map port 80 directly, but a hostPort on the Pod can: change the container ports in the traefik.yaml above:

containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
  containerPort: 80
  hostPort: 80      # add this line
- name: admin
  containerPort: 8080

After adding hostPort: 80, update the application:
kubectl apply -f traefik.yaml

Once the update finishes, test in the browser with the bare domain:
http://traefik.k8s.com

Step 4: normally, if you have your own domain, you can add a DNS record resolving it to the master's external IP, and then anyone can reach your exposed services by domain name.

If you have several edge nodes, you can run an ingress-controller on each of them and put a load balancer such as nginx in front, with all the edge nodes as its backends; that gives the ingress-controller high availability and load balancing (a minimal sketch follows after this block).
```
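A minimal sketch of the nginx load balancer described in the last paragraph, assuming two edge nodes at the lab addresses 192.168.56.12 and 192.168.56.13, each running an ingress-controller with hostPort 80:

```bash
# Write an upstream with every edge node as a backend, then reload nginx
cat > /etc/nginx/conf.d/ingress-lb.conf <<\EOF
upstream ingress_edge {
    server 192.168.56.12:80;
    server 192.168.56.13:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://ingress_edge;
        proxy_set_header Host $host;   # preserve the ingress host for routing
    }
}
EOF
nginx -s reload
```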

# 5、Ingress TLS

The previous lesson walked through installing and using traefik and configuring a simple ingress; this lesson covers ingress TLS and the use of paths in ingress objects.

1、TLS certificates

These days most services are accessed over https, so this lesson uses a self-signed certificate. A CA certificate bought from a proper authority is of course better, because every visitor's browser will trust it. Generate the certificate with the openssl command below:
```
openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt
```
Now that we have the certificate, we can store it in a secret object with kubectl:
```
kubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system
```
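It is worth confirming the secret landed with both keys intact (a quick check):

```bash
# Both tls.crt and tls.key should be listed as data keys
kubectl -n kube-system describe secret traefik-cert

# Decode the stored certificate and check its subject and validity window
kubectl -n kube-system get secret traefik-cert -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -dates
```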

3、Configuring Traefik

So far we have been running Traefik with its default configuration; now we configure Traefik itself so that it supports https:
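A minimal traefik.toml that adds an HTTPS entrypoint, in the same shape as the ConfigMap built in the traefik TLS walkthrough earlier in this document, might look like this sketch (the /ssl paths are an assumption for wherever the traefik-cert secret gets mounted):

```bash
cat > traefik.toml <<\EOF
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        CertFile = "/ssl/tls.crt"
        KeyFile = "/ssl/tls.key"
EOF

# Ship it to the cluster as a ConfigMap, as in the earlier examples
kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system
```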



================================================
FILE: components/initContainers/README.md
================================================
References:

https://www.cnblogs.com/yanh0606/p/11395920.html    Kubernetes init containers (initContainers)

https://www.jianshu.com/p/e57c3e17ce8c  Understanding Init containers


================================================
FILE: components/job/README.md
================================================
References:

https://www.jianshu.com/p/bd6cd1b4e076  The Kubernetes Job object


https://www.cnblogs.com/lvcisco/p/9670100.html   Using k8s Job and CronJob


================================================
FILE: components/k8s-monitor/README.md
================================================
```
# 1、Persist the monitoring data
cat > prometheus-class.yaml <<-EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "true"
EOF

# Apply class.yaml
kubectl apply -f prometheus-class.yaml

# Check the created storageclass
kubectl get sc

#2、Make Prometheus persistent
Prometheus is deployed as a StatefulSet, so the StorageClass can be configured directly on it; add the storage section at the bottom of the yaml below
#cat prometheus/prometheus-prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  baseImage: quay.io/prometheus/prometheus
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorSelector: {}
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.11.0
  storage:                     #---- persistence config: use the fast StorageClass created above
    volumeClaimTemplate:
      spec:
        storageClassName: fast #--- use fast
        resources:
          requests:
            storage: 300Gi
            
kubectl apply -f prometheus/prometheus-prometheus.yaml

#3、Make Grafana persistent

Grafana is deployed as a Deployment, so we create a grafana-pvc.yaml for it in advance and add the PVC configuration below.
#vim grafana-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana
  namespace: monitoring  #--- the namespace must be monitoring
spec:
  storageClassName: fast #--- use the fast StorageClass created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi

kubectl apply -f grafana-pvc.yaml

#vim grafana/grafana-deployment.yaml
......
      volumes:
      - name: grafana-storage       #------- new persistence config
        persistentVolumeClaim:
          claimName: grafana        #------- the PVC name created above
      #- emptyDir: {}               #------- old config, commented out
      #  name: grafana-storage
      - name: grafana-datasources
        secret:
          secretName: grafana-datasources
      - configMap:
          name: grafana-dashboards
        name: grafana-dashboards
......

kubectl apply -f grafana/grafana-deployment.yaml
```
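After applying the three changes above, it is worth checking that the claims actually bind through the new class (a quick check, using the monitoring namespace from the yaml above):

```bash
# Prometheus claims come from the StatefulSet template; Grafana's from grafana-pvc.yaml
kubectl get pvc -n monitoring

# The backing volumes should have been provisioned through the "fast" class
kubectl get pv | grep fast
```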
References:

https://www.cnblogs.com/skyflask/articles/11410063.html  A kubernetes monitoring stack: cAdvisor+Heapster+InfluxDB+Grafana

https://www.cnblogs.com/skyflask/p/11480988.html  The ultimate kubernetes monitoring solution: kube-prometheus

http://www.mydlq.club/article/10/#wow1  Monitoring a k8s cluster with kube-prometheus

https://jicki.me/docker/kubernetes/2019/07/22/kube-prometheus/   CoreOS kube-prometheus monitoring


================================================
FILE: components/kube-proxy/README.md
================================================
# Kube-Proxy overview

```
Runs on every node, watches the API Server for changes to Service objects, and implements network forwarding by managing IPtables
Kube-Proxy currently supports three modes:

UserSpace
    obsolete since k8s v1.2

IPtables
    the current default

IPVS
    requires the ipvsadm and ipset tool packages and the ip_vs kernel module

```
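How to check which mode a cluster is actually running (a sketch; assumes a kubeadm-style cluster that keeps its settings in the kube-proxy ConfigMap):

```bash
# An empty mode ("") or "iptables" means the iptables default
kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'

# With IPVS active, virtual servers are visible on each node
ipvsadm -Ln | head

# The kernel module IPVS needs
lsmod | grep ip_vs
```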
References:

https://ywnz.com/linuxyffq/2530.html  Several ways to access applications in a Kubernetes cluster from outside

https://www.jianshu.com/p/b2d13cec7091  A brief look at k8s service & kube-proxy

https://www.codercto.com/a/90806.html  Exploring the iptables routing rules behind a K8S Service

https://blog.51cto.com/goome/2369150  k8s practice 7: how ipvs works together with iptables

https://blog.csdn.net/xinghun_4/article/details/50492041  Comparing port, targetPort and nodePort in kubernetes, and the kube-proxy proxy


================================================
FILE: components/nfs/README.md
================================================



================================================
FILE: components/pressure/README.md
================================================
# 1、Choosing a network component for large production clusters

With the calico route-reflector (RR) mode, roughly how many nodes can be supported while keeping performance acceptable?

There are two kinds of RR setups: the reflector can be hosted by calico's node services, or a physical router can act as the RR directly.

Very large Calico deployments run fine entirely over BGP; just plan the network addressing carefully, since container CIDRs must not overlap even across different clusters.

# 2、Load-testing the flannel network component

```
flannel is limited by CPU pressure
```
  ![flannel load test](https://github.com/Lancger/opsfull/blob/master/images/pressure_flannel_01.png)

# 3、Load-testing the calico network component

```
calico easily comes within a whisker of host-network performance

but with a single cluster of very many nodes and no BGP route aggregation, the physical router or layer-3 switch will not keep up
```
  ![calico load test](https://github.com/Lancger/opsfull/blob/master/images/pressure_calico_01.png)

# 4、Load test: calico network vs. the host network

  ![k8s network load test](https://github.com/Lancger/opsfull/blob/master/images/pressure_physical_01.png)


================================================
FILE: components/pressure/calico bgp网络需要物理路由和交换机支持吗.md
================================================
![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_01.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_02.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_03.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_04.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_05.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_06.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_07.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_08.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_09.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_10.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_11.png)

![](https://github.com/Lancger/opsfull/blob/master/images/calico_bgp_12.png)


================================================
FILE: components/pressure/k8s集群更换网段方案.md
================================================
```
1. The servers are moving to a new IP subnet. Is there any solution that does not involve rebuilding the cluster?

Option 1:

Change the listen addresses and re-issue the cluster certificates.

Otherwise it is genuinely hard to pull off.

Option 2:

If etcd was set up statically from the start, you are out of luck;

it has to be based on DNS discovery from day one.

Put briefly:

wherever an IP address would appear,

use an FQDN instead,

in certificates as well as in configuration files.

Those few lines are the whole core of it.

etcd's official documentation already covers DNS-discovery deployment;

it is only the k8s part that the official deployment guides do not mention.

```
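For reference, the DNS discovery approach relies on SRV records instead of hard-coded IPs, so renumbering a subnet only means updating DNS. A sketch of how the records would be checked (example.com and etcd1 are placeholders):

```bash
# etcd peers and clients are located through SRV records like these
dig +noall +answer SRV _etcd-server._tcp.example.com
dig +noall +answer SRV _etcd-client._tcp.example.com

# The names those records point at must resolve to the nodes' current IPs
dig +noall +answer A etcd1.example.com
```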

![](https://github.com/Lancger/opsfull/blob/master/images/change_ip_01.png)

![](https://github.com/Lancger/opsfull/blob/master/images/change_ip_02.png)

![](https://github.com/Lancger/opsfull/blob/master/images/change_ip_05.png)

![](https://github.com/Lancger/opsfull/blob/master/images/change_ip_06.png)


Source: collected discussion from community group members

https://github.com/etcd-io/etcd/blob/a4018f25c91fff8f4f15cd2cee9f026650c7e688/Documentation/clustering.md#dns-discovery  


================================================
FILE: docs/Envoy的架构与基本术语.md
================================================
Reference:

https://jimmysong.io/kubernetes-handbook/usecases/envoy-terminology.html  Envoy architecture and basic terminology


================================================
FILE: docs/Kubernetes学习笔记.md
================================================
Reference:

https://blog.gmem.cc/kubernetes-study-note  Kubernetes study notes


================================================
FILE: docs/Kubernetes架构介绍.md
================================================
# Introduction to the Kubernetes architecture

## The Kubernetes architecture

![](https://github.com/Lancger/opsfull/blob/master/images/kubernetes%E6%9E%B6%E6%9E%84.jpg)

## k8s architecture diagram

![](https://github.com/Lancger/opsfull/blob/master/images/k8s%E6%9E%B6%E6%9E%84%E5%9B%BE.jpg)

## 1、K8S master components
### API Server
The apiserver exposes the REST API for cluster management, covering authentication and authorization, data validation, and cluster state changes.
Only the API Server operates on etcd directly;
all other components query or modify data through the API Server,
which serves as the hub for data exchange and communication between them.

### Scheduler
The scheduler assigns Pods to nodes in the cluster.
It watches kube-apiserver for Pods that have not yet been assigned a Node
and binds them to nodes according to the scheduling policy.

### Controller Manager
The controller-manager is made up of a series of controllers; it monitors the state of the whole cluster through the apiserver and keeps the cluster in the desired working state.

### ETCD
All persistent state is stored in ETCD.

## 2、K8S node components
### Kubelet
1. Manages Pods along with their containers, images, Volumes, etc., implementing node-level management for the cluster.
### Kube-proxy
2. Provides network proxying and load balancing, implementing communication with Services.
### Docker Engine
3. Responsible for managing the containers on the node.

## 3、Resource objects

### 3.1 ReplicationController (RC)

1. The RC is the earliest K8s API object for keeping Pods highly available. It monitors running Pods to keep the specified number of replicas running in the cluster.

2. The specified number can be several or just one; with fewer than specified, the RC starts new replicas, and with more, it kills the surplus.

3. Even with the count set to 1, running a Pod through an RC is wiser than running the Pod directly, because the RC still provides its high-availability guarantee that one Pod is always running.

### 3.2 ReplicaSet (RS)

1. The RS is the next-generation RC: it offers the same high availability but supports more kinds of label matching. ReplicaSet objects are rarely used on their own; they mostly serve as the desired-state parameter of a Deployment.

2. Introduced in K8S 1.2 as an upgrade of the RC, and generally used together with a Deployment.

### 3.3 Deployment
1. A Deployment represents one update operation the user performs on the K8s cluster, and is an API object with broader scope than the RS.

2. It can create a new service, update one, or roll one over. A rolling upgrade actually creates a new RS, then gradually raises the new RS's replica count to the desired state while shrinking the old RS's count to 0, as one compound operation.

3. Such a compound operation is awkward to describe with a single RS, which is why the more general Deployment object exists.

### 3.4 Service
1. RC, RS and Deployment only guarantee the number of Pods backing a service; they do not solve how to access those Pods. A Pod is just one running instance of a service: it may stop on one node at any time and reappear as a new Pod with a new IP on another node, so it cannot offer a service at a fixed IP and port.

2. Delivering a service reliably requires service discovery and load balancing. Service discovery finds the right backend instances for the service a client is asking for.

3. In a K8s cluster, the thing a client accesses is the Service object. Each Service gets a virtual IP that is valid inside the cluster, and the service is reached internally through that virtual IP.

## 4、K8S IP addresses
1. Node IP: the real IP of the node device hosting containers, e.g. a physical machine or VM.

2. Pod IP: the Pod's IP address, allocated out of the docker0 network segment.

3. Cluster IP: the Service's IP, a virtual IP that is only meaningful for Service objects. It is managed and allocated by k8s, must be combined with the service port to be useful, has no connectivity on its own, and access from outside the cluster requires extra changes.

4. Inside a K8S cluster, communication among node IPs, pod IPs and cluster IPs follows routing rules programmed by k8s, not ordinary IP routing.

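The three address types can be seen side by side with kubectl (a quick sketch):

```bash
kubectl get nodes -o wide   # Node IP: the INTERNAL-IP / EXTERNAL-IP columns
kubectl get pods -o wide    # Pod IP: the IP column
kubectl get svc             # Cluster IP: the CLUSTER-IP column
```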

================================================
FILE: docs/Kubernetes集群环境准备.md
================================================
# 1、Preparing the k8s cluster lab environment

  ![architecture diagram](https://github.com/Lancger/opsfull/blob/master/images/K8S.png)

<table border="0">
    <tr>
        <td><strong>Hostname</strong></td>
        <td><strong>IP address (NAT)</strong></td>
        <td><strong>Description</strong></td>
    </tr>
     <tr>
        <td><strong>linux-node1.example.com</strong></td>
        <td>eth0:192.168.56.11</td>
        <td>Kubernetes master node / etcd node</td>
    </tr>
    <tr>
        <td><strong>linux-node2.example.com</strong></td>
        <td>eth0:192.168.56.12</td>
        <td>Kubernetes node / etcd node</td>
    </tr>
    <tr>
        <td><strong>linux-node3.example.com</strong></td>
        <td>eth0:192.168.56.13</td>
        <td>Kubernetes node / etcd node</td>
    </tr>
</table>

# 2、Preparation

1、Set the hostnames
```
hostnamectl set-hostname linux-node1
hostnamectl set-hostname linux-node2
hostnamectl set-hostname linux-node3
```

2、Add host entries
```
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.11 linux-node1 linux-node1.example.com
192.168.56.12 linux-node2 linux-node2.example.com
192.168.56.13 linux-node3 linux-node3.example.com
EOF
```

3、Set up passwordless SSH from the deploy node (master) to all the other nodes (including itself)
```
[root@linux-node1 ~]# ssh-keygen -t rsa
[root@linux-node1 ~]# ssh-copy-id linux-node1
[root@linux-node1 ~]# ssh-copy-id linux-node2
[root@linux-node1 ~]# ssh-copy-id linux-node3
```

4、Disable the firewall and selinux
```
# Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

# Disable selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/SELINUXTYPE=targeted/SELINUXTYPE=disabled/g" /etc/sysconfig/selinux
setenforce 0
```

5、Other configuration
```
yum install -y ntpdate wget lrzsz vim net-tools

# Add to crontab
1 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1

# Set the timezone
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

# Slow SSH logins
sed -i "s/#UseDNS yes/UseDNS no/"  /etc/ssh/sshd_config
sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/"  /etc/ssh/sshd_config
systemctl restart sshd.service
```

6、Download the packages

Netdisk link for the k8s v1.12.0 release: https://pan.baidu.com/s/1jU427W1f3oSDnzB3bU2s5w

```
# All files are kept under the /opt/kubernetes directory
mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}

# Deploy using the binary release
# Official download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#downloads-for-v1121

# Add environment variables
vim /root/.bash_profile
PATH=$PATH:$HOME/bin:/opt/kubernetes/bin
source /root/.bash_profile
```
![official download link](https://github.com/Lancger/opsfull/blob/master/images/k8s-soft.jpg)

7、Unpack the packages
```
tar -zxvf kubernetes.tar.gz -C /usr/local/src/
tar -zxvf kubernetes-server-linux-amd64.tar.gz -C /usr/local/src/
tar -zxvf kubernetes-client-linux-amd64.tar.gz -C /usr/local/src/
tar -zxvf kubernetes-node-linux-amd64.tar.gz -C /usr/local/src/
```



================================================
FILE: docs/app.md
================================================
1. Create a test deployment
```
[root@linux-node1 ~]# kubectl run net-test --image=alpine --replicas=2 sleep 360000

[root@linux-node1 ~]# kubectl get deployment
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
net-test   2         2         2            2           2h
[root@linux-node1 ~]# kubectl delete deployment net-test
```

2. Check the assigned IPs
```
[root@linux-node1 ~]# kubectl get pod -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP          NODE
net-test-5767cb94df-6smfk   1/1       Running   1          1h        10.2.69.3   192.168.56.12
net-test-5767cb94df-ctkhz   1/1       Running   1          1h        10.2.17.3   192.168.56.13
```

3. Test connectivity
```
[root@linux-node1 ~]# ping -c 1 10.2.69.3
PING 10.2.69.3 (10.2.69.3) 56(84) bytes of data.
64 bytes from 10.2.69.3: icmp_seq=1 ttl=61 time=1.39 ms

--- 10.2.69.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.396/1.396/1.396/0.000 ms

[root@linux-node1 ~]# ping -c 1 10.2.17.3
PING 10.2.17.3 (10.2.17.3) 56(84) bytes of data.
64 bytes from 10.2.17.3: icmp_seq=1 ttl=61 time=1.16 ms

--- 10.2.17.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.164/1.164/1.164/0.000 ms

# If the master node cannot ping the pod IPs, check the flanneld service. Below are the interface IPs on each node (note that each node's flannel0 sits in a different subnet)
#node1
[root@linux-node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.2.41.1  netmask 255.255.255.0  broadcast 10.2.41.255
        ether 02:42:77:d9:95:e3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.11  netmask 255.255.255.0  broadcast 192.168.56.255
        ether 00:0c:29:e6:00:79  txqueuelen 1000  (Ethernet)
        RX packets 75548  bytes 10771254 (10.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 74344  bytes 12700211 (12.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.2.41.0  netmask 255.255.0.0  destination 10.2.41.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 30  bytes 2520 (2.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30  bytes 2520 (2.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 34140  bytes 8049438 (7.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 34140  bytes 8049438 (7.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

#node2
[root@linux-node2 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        inet 10.2.69.1  netmask 255.255.255.0  broadcast 10.2.69.255
        ether 02:42:de:56:b5:1e  txqueuelen 0  (Ethernet)
        RX packets 10  bytes 448 (448.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9  bytes 546 (546.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.12  netmask 255.255.255.0  broadcast 192.168.56.255
        ether 00:0c:29:ee:65:40  txqueuelen 1000  (Ethernet)
        RX packets 32893  bytes 4996885 (4.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32877  bytes 3737878 (3.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.2.69.0  netmask 255.255.0.0  destination 10.2.69.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 3  bytes 252 (252.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 252 (252.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 347  bytes 36887 (36.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 347  bytes 36887 (36.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth09ea856c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        ether c6:be:00:bd:a9:18  txqueuelen 0  (Ethernet)
        RX packets 10  bytes 588 (588.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9  bytes 546 (546.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

#node3
[root@linux-node3 ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        inet 10.2.17.1  netmask 255.255.255.0  broadcast 10.2.17.255
        ether 02:42:ac:11:ac:3c  txqueuelen 0  (Ethernet)
        RX packets 32  bytes 2408 (2.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 31  bytes 2814 (2.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.13  netmask 255.255.255.0  broadcast 192.168.56.255
        ether 00:0c:29:53:f4:b1  txqueuelen 1000  (Ethernet)
        RX packets 47504  bytes 7138550 (6.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 48402  bytes 8310935 (7.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.2.17.0  netmask 255.255.0.0  destination 10.2.17.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 27  bytes 2268 (2.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 27  bytes 2268 (2.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 129  bytes 13510 (13.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 129  bytes 13510 (13.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth8630a55b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        ether 72:e9:df:4f:f6:64  txqueuelen 0  (Ethernet)
        RX packets 32  bytes 2856 (2.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 31  bytes 2814 (2.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
```

4、Create an nginx service
```
# Create the deployment file
[root@linux-node1 ~]# vim  nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.12
        ports:
        - containerPort: 80

# Create the deployment
[root@linux-node1 ~]# kubectl create -f nginx-deployment.yaml
deployment.apps "nginx-deployment" created


# List the deployments
[root@linux-node1 ~]# kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            2           48s


# Show the deployment details
[root@linux-node1 ~]# kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 09 Oct 2018 15:11:33 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision=1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.13.12
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-6c45fc49c
```