Repository: stacksimplify/google-kubernetes-engine Branch: main Commit: 2b2129b870d1 Files: 387 Total size: 16.2 MB Directory structure: gitextract_6wj2qqr2/ ├── .gitignore ├── 01-Create-GCP-Account/ │ └── README.md ├── 02-Create-GKE-Cluster/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ └── 02-kubernetes-loadbalancer-service.yaml ├── 03-gcloud-cli-install-macos/ │ └── README.md ├── 04-gcloud-cli-install-windowsos/ │ └── README.md ├── 05-Docker-For-Beginners/ │ └── README.md ├── 06-kubectl-imperative-k8s-pods/ │ └── README.md ├── 07-kubectl-declarative-k8s-ReplicaSets/ │ ├── README.md │ └── replicaset-demo.yml ├── 08-kubectl-imperative-k8s-deployment-CREATE/ │ └── README.md ├── 09-kubectl-imperative-k8s-deployment-UPDATE/ │ └── README.md ├── 10-kubectl-imperative-k8s-deployment-ROLLBACK/ │ └── README.md ├── 11-kubectl-imperative-k8s-deployment-PAUSE-RESUME/ │ └── README.md ├── 12-kubectl-imperative-k8s-services/ │ └── README.md ├── 13-YAML-Basics/ │ ├── README.md │ ├── sample-file.yml │ └── yaml-demo.yaml ├── 14-yaml-declarative-k8s-pods/ │ ├── README.md │ ├── kube-base-definition.yml │ └── kube-manifests/ │ ├── 01-pod-definition.yml │ └── 02-pod-LoadBalancer-service.yml ├── 15-yaml-declarative-k8s-replicasets/ │ ├── README.md │ ├── kube-base-definition.yml │ └── kube-manifests/ │ ├── 01-replicaset-definition.yml │ └── 02-replicaset-LoadBalancer-servie.yml ├── 16-yaml-declarative-k8s-deployments/ │ ├── README.md │ ├── kube-base-definition.yml │ └── kube-manifests/ │ ├── 01-deployment-definition.yml │ └── 02-deployment-LoadBalancer-servie.yml ├── 17-yaml-declarative-k8s-services/ │ ├── README.md │ ├── kube-base-definition.yml │ └── kube-manifests/ │ ├── 01-backend-deployment.yml │ ├── 02-backend-clusterip-service.yml │ ├── 03-frontend-deployment.yml │ └── 04-frontend-LoadBalancer-service.yml ├── 18-GKE-NodePort-Service/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ └── 02-kubernetes-nodeport-service.yaml 
├── 19-GKE-Headless-Service/ │ ├── 01-kube-manifests/ │ │ ├── 01-kubernetes-deployment.yaml │ │ ├── 02-kubernetes-clusterip-service.yaml │ │ └── 03-kubernetes-headless-service.yaml │ ├── 02-kube-manifests-curl/ │ │ └── 01-curl-pod.yml │ └── README.md ├── 20-GKE-Private-Cluster/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ └── 02-kubernetes-loadbalancer-service.yaml ├── 21-GKE-PD-existing-SC-standard-rwo/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-persistent-volume-claim.yaml │ ├── 02-UserManagement-ConfigMap.yaml │ ├── 03-mysql-deployment.yaml │ ├── 04-mysql-clusterip-service.yaml │ ├── 05-UserMgmtWebApp-Deployment.yaml │ └── 06-UserMgmtWebApp-LoadBalancer-Service.yaml ├── 22-GKE-PD-existing-SC-premium-rwo/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-persistent-volume-claim.yaml │ ├── 02-UserManagement-ConfigMap.yaml │ ├── 03-mysql-deployment.yaml │ ├── 04-mysql-clusterip-service.yaml │ ├── 05-UserMgmtWebApp-Deployment.yaml │ └── 06-UserMgmtWebApp-LoadBalancer-Service.yaml ├── 23-GKE-PD-Custom-StorageClass/ │ ├── README.md │ └── kube-manifests/ │ ├── 00-storage-class.yaml │ ├── 01-persistent-volume-claim.yaml │ ├── 02-UserManagement-ConfigMap.yaml │ ├── 03-mysql-deployment.yaml │ ├── 04-mysql-clusterip-service.yaml │ ├── 05-UserMgmtWebApp-Deployment.yaml │ └── 06-UserMgmtWebApp-LoadBalancer-Service.yaml ├── 24-GKE-PD-preexisting-PD/ │ ├── README.md │ └── kube-manifests/ │ ├── 00-persistent-volume.yaml │ ├── 01-persistent-volume-claim.yaml │ ├── 02-UserManagement-ConfigMap.yaml │ ├── 03-mysql-deployment.yaml │ ├── 04-mysql-clusterip-service.yaml │ ├── 05-UserMgmtWebApp-Deployment.yaml │ └── 06-UserMgmtWebApp-LoadBalancer-Service.yaml ├── 25-GKE-PD-Regional-PD/ │ ├── README.md │ └── kube-manifests/ │ ├── 00-storage-class.yaml │ ├── 01-persistent-volume-claim.yaml │ ├── 02-UserManagement-ConfigMap.yaml │ ├── 03-mysql-deployment.yaml │ ├── 04-mysql-clusterip-service.yaml │ ├── 05-UserMgmtWebApp-Deployment.yaml │ └── 
06-UserMgmtWebApp-LoadBalancer-Service.yaml ├── 26-GKE-PD-Volume-Snapshots-and-Restore/ │ ├── 01-kube-manifests/ │ │ ├── 01-persistent-volume-claim.yaml │ │ ├── 02-UserManagement-ConfigMap.yaml │ │ ├── 03-mysql-deployment.yaml │ │ ├── 04-mysql-clusterip-service.yaml │ │ ├── 05-UserMgmtWebApp-Deployment.yaml │ │ └── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ ├── 02-Volume-Snapshot/ │ │ ├── 01-VolumeSnapshotClass.yaml │ │ └── 02-VolumeSnapshot.yaml │ ├── 03-Volume-Restore/ │ │ ├── 01-restore-pvc.yaml │ │ └── 02-mysql-deployment.yaml │ └── README.md ├── 27-GKE-PD-Volume-Clone/ │ ├── 01-kube-manifests/ │ │ ├── 01-persistent-volume-claim.yaml │ │ ├── 02-UserManagement-ConfigMap.yaml │ │ ├── 03-mysql-deployment.yaml │ │ ├── 04-mysql-clusterip-service.yaml │ │ ├── 05-UserMgmtWebApp-Deployment.yaml │ │ └── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ ├── 02-Use-Cloned-Volume-kube-manifests/ │ │ ├── 01-podpvc-clone.yaml │ │ ├── 02-UserManagement-ConfigMap.yaml │ │ ├── 03-mysql-deployment.yaml │ │ ├── 04-mysql-clusterip-service.yaml │ │ ├── 05-UserMgmtWebApp-Deployment.yaml │ │ └── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ ├── 03-With-NodeSelectors/ │ │ ├── 01-kube-manifests/ │ │ │ ├── 01-persistent-volume-claim.yaml │ │ │ ├── 02-UserManagement-ConfigMap.yaml │ │ │ ├── 03-mysql-deployment.yaml │ │ │ ├── 04-mysql-clusterip-service.yaml │ │ │ ├── 05-UserMgmtWebApp-Deployment.yaml │ │ │ └── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ │ └── 02-Use-Cloned-Volume-kube-manifests/ │ │ ├── 01-podpvc-clone.yaml │ │ ├── 02-UserManagement-ConfigMap.yaml │ │ ├── 03-mysql-deployment.yaml │ │ ├── 04-mysql-clusterip-service.yaml │ │ ├── 05-UserMgmtWebApp-Deployment.yaml │ │ └── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ └── README.md ├── 28-GKE-Storage-with-GCP-CloudSQL-Public/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-MySQL-externalName-Service.yaml │ ├── 02-Kubernetes-Secrets.yaml │ ├── 03-UserMgmtWebApp-Deployment.yaml │ └── 04-UserMgmtWebApp-LoadBalancer-Service.yaml 
├── 29-GKE-Storage-with-GCP-CloudSQL-Private/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-MySQL-externalName-Service.yaml │ ├── 02-Kubernetes-Secrets.yaml │ ├── 03-UserMgmtWebApp-Deployment.yaml │ └── 04-UserMgmtWebApp-LoadBalancer-Service.yaml ├── 30-GCP-CloudSQL-Private-NO-ExternalNameService/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Kubernetes-Secrets.yaml │ ├── 02-UserMgmtWebApp-Deployment.yaml │ └── 03-UserMgmtWebApp-LoadBalancer-Service.yaml ├── 31-GKE-FileStore-default-StorageClass/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-filestore-pvc.yaml │ ├── 02-write-to-filestore-pod.yaml │ ├── 03-myapp1-deployment.yaml │ └── 04-loadBalancer-service.yaml ├── 32-GKE-FileStore-custom-StorageClass/ │ ├── README.md │ └── kube-manifests/ │ ├── 00-filestore-storage-class.yaml │ ├── 01-filestore-pvc.yaml │ ├── 02-write-to-filestore-pod.yaml │ ├── 03-myapp1-deployment.yaml │ └── 04-loadBalancer-service.yaml ├── 33-GKE-FileStore-Backup-and-Restore/ │ ├── 01-myapp1-kube-manifests/ │ │ ├── 01-filestore-pvc.yaml │ │ ├── 02-write-to-filestore-pod.yaml │ │ ├── 03-myapp1-deployment.yaml │ │ └── 04-loadBalancer-service.yaml │ ├── 02-volume-backup-kube-manifests/ │ │ ├── 01-VolumeSnapshotClass.yaml │ │ └── 02-VolumeSnapshot.yaml │ ├── 03-volume-restore-myapp2-kube-manifests/ │ │ ├── 01-filestore-pvc.yaml │ │ ├── 02-myapp2-deployment.yaml │ │ └── 03-myapp2-loadBalancer-service.yaml │ └── README.md ├── 34-GKE-Ingress-Basics/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App3-Deployment-and-NodePortService.yaml │ └── 02-ingress-basic.yaml ├── 35-GKE-Ingress-Context-Path-Routing/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ └── 04-Ingress-ContextPath-Based-Routing.yaml ├── 36-GKE-Ingress-Custom-Health-Check/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 
02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ └── 04-Ingress-Custom-Healthcheck.yaml ├── 37-Google-Cloud-Domains/ │ └── README.md ├── 38-GKE-Ingress-ExternalIP/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ └── 04-Ingress-external-ip.yaml ├── 39-GKE-Ingress-Google-Managed-SSL/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ ├── 04-Ingress-SSL.yaml │ └── 05-Managed-Certificate.yaml ├── 40-GKE-Ingress-Google-Managed-SSL-Redirect/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ ├── 04-Ingress-SSL.yaml │ ├── 05-Managed-Certificate.yaml │ └── 06-frontendconfig.yaml ├── 41-GKE-Workload-Identity/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-wid-demo-pod-without-sa.yaml │ └── 02-wid-demo-pod-with-sa.yaml ├── 42-GKE-ExternalDNS-Install/ │ └── README.md ├── 43-GKE-ExternalDNS-Ingress-Demo/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App3-Deployment-and-NodePortService.yaml │ └── 02-ingress-external-dns.yaml ├── 44-GKE-ExternalDNS-Service-Demo/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ └── 02-kubernetes-loadbalancer-service.yaml ├── 45-GKE-Ingress-NameBasedVhost-Routing/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ ├── 04-Ingress-NameBasedVHost-Routing.yaml │ ├── 05-Managed-Certificate.yaml │ └── 06-frontendconfig.yaml ├── 
46-GKE-Ingress-SSL-Policy/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ ├── 04-Ingress-NameBasedVHost-Routing.yaml │ ├── 05-Managed-Certificate.yaml │ └── 06-frontendconfig.yaml ├── 47-GKE-Ingress-with-Identity-Aware-Proxy/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ ├── 04-Ingress-NameBasedVHost-Routing.yaml │ ├── 05-Managed-Certificate.yaml │ ├── 06-frontendconfig.yaml │ └── 07-backendconfig.yaml ├── 48-GKE-Ingress-SelfSigned-SSL/ │ ├── README.md │ ├── SSL-SelfSigned-Certs/ │ │ ├── app1-ingress.crt │ │ ├── app1-ingress.csr │ │ ├── app1-ingress.key │ │ ├── app2-ingress.crt │ │ ├── app2-ingress.csr │ │ ├── app2-ingress.key │ │ ├── app3-ingress.crt │ │ ├── app3-ingress.csr │ │ └── app3-ingress.key │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ ├── 04-ingress-self-signed-ssl.yaml │ └── 05-frontendconfig.yaml ├── 49-GKE-Ingress-Preshared-SSL/ │ ├── README.md │ ├── SSL-SelfSigned-Certs/ │ │ ├── app1-ingress.crt │ │ ├── app1-ingress.csr │ │ ├── app1-ingress.key │ │ ├── app2-ingress.crt │ │ ├── app2-ingress.csr │ │ ├── app2-ingress.key │ │ ├── app3-ingress.crt │ │ ├── app3-ingress.csr │ │ └── app3-ingress.key │ └── kube-manifests/ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ ├── 04-ingress-preshared-ssl.yaml │ └── 05-frontendconfig.yaml ├── 50-GKE-Ingress-Cloud-CDN/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ ├── 02-kubernetes-NodePort-service.yaml │ ├── 
03-ingress.yaml │ └── 04-backendconfig.yaml ├── 51-GKE-Ingress-ClientIP-Affinity/ │ ├── 01-kube-manifests-with-clientip-affinity/ │ │ ├── 01-kubernetes-deployment.yaml │ │ ├── 02-kubernetes-NodePort-service.yaml │ │ ├── 03-ingress.yaml │ │ └── 04-backendconfig.yaml │ ├── 02-kube-manifests-without-clientip-affinity/ │ │ ├── 01-kubernetes-deployment.yaml │ │ ├── 02-kubernetes-NodePort-service.yaml │ │ ├── 03-ingress.yaml │ │ └── 04-backendconfig.yaml │ └── README.md ├── 52-GKE-Ingress-Cookie-Affinity/ │ ├── 01-kube-manifests-with-cookie-affinity/ │ │ ├── 01-kubernetes-deployment.yaml │ │ ├── 02-kubernetes-NodePort-service.yaml │ │ ├── 03-ingress.yaml │ │ └── 04-backendconfig.yaml │ ├── 02-kube-manifests-without-cookie-affinity/ │ │ ├── 01-kubernetes-deployment.yaml │ │ ├── 02-kubernetes-NodePort-service.yaml │ │ ├── 03-ingress.yaml │ │ └── 04-backendconfig.yaml │ └── README.md ├── 53-GKE-Ingress-HealthCheck-with-backendConfig/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ ├── 02-kubernetes-NodePort-service.yaml │ ├── 03-ingress.yaml │ └── 04-backendconfig.yaml ├── 54-GKE-Ingress-InternalLB/ │ ├── 01-kube-manifests/ │ │ ├── 01-Nginx-App1-Deployment-and-NodePortService.yaml │ │ ├── 02-Nginx-App2-Deployment-and-NodePortService.yaml │ │ ├── 03-Nginx-App3-Deployment-and-NodePortService.yaml │ │ └── 04-Ingress-internal-lb.yaml │ ├── 02-kube-manifests-curl/ │ │ └── 01-curl-pod.yml │ └── README.md ├── 55-GKE-Ingress-Cloud-Armor/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ ├── 02-kubernetes-NodePort-service.yaml │ ├── 03-ingress.yaml │ └── 04-backendconfig.yaml ├── 56-GKE-Artifact-Registry/ │ ├── 01-Docker-Image/ │ │ ├── Dockerfile │ │ └── index.html │ ├── 02-kube-manifests/ │ │ ├── 01-kubernetes-deployment.yaml │ │ └── 02-kubernetes-loadBalancer-service.yaml │ └── README.md ├── 57-GKE-Continuous-Integration/ │ ├── 01-SSH-Keys/ │ │ ├── id_gcp_cloud_source │ │ └── id_gcp_cloud_source.pub │ ├── 02-Docker-Image/ │ │ 
├── Dockerfile │ │ └── index.html │ ├── 03-cloudbuild-yaml/ │ │ └── cloudbuild.yaml │ ├── 04-kube-manifests/ │ │ ├── 01-kubernetes-deployment.yaml │ │ └── 02-kubernetes-loadBalancer-service.yaml │ └── README.md ├── 58-GKE-Continuous-Delivery-with-CloudBuild/ │ ├── 01-myapp1-k8s-repo/ │ │ └── cloudbuild-delivery.yaml │ ├── 02-Source-Writer-IAM-Role/ │ │ └── myapp1-k8s-repo-policy.yaml │ ├── 03-myapp1-app-repo/ │ │ ├── Dockerfile │ │ ├── README.md │ │ ├── cloudbuild-trigger-cd.yaml │ │ ├── cloudbuild.yaml │ │ ├── index.html │ │ └── kubernetes.yaml.tpl │ └── README.md ├── 59-Kubernetes-liveness-probe/ │ ├── 01-liveness-probe-linux-command/ │ │ ├── 01-persistent-volume-claim.yaml │ │ ├── 02-UserManagement-ConfigMap.yaml │ │ ├── 03-mysql-deployment.yaml │ │ ├── 04-mysql-clusterip-service.yaml │ │ ├── 05-UserMgmtWebApp-Deployment.yaml │ │ ├── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ │ └── 07-kubernetes-secret.yaml │ ├── 02-liveness-probe-HTTP-Request/ │ │ ├── 01-persistent-volume-claim.yaml │ │ ├── 02-UserManagement-ConfigMap.yaml │ │ ├── 03-mysql-deployment.yaml │ │ ├── 04-mysql-clusterip-service.yaml │ │ ├── 05-UserMgmtWebApp-Deployment.yaml │ │ ├── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ │ └── 07-kubernetes-secret.yaml │ ├── 03-liveness-probe-TCP-Request/ │ │ ├── 01-persistent-volume-claim.yaml │ │ ├── 02-UserManagement-ConfigMap.yaml │ │ ├── 03-mysql-deployment.yaml │ │ ├── 04-mysql-clusterip-service.yaml │ │ ├── 05-UserMgmtWebApp-Deployment.yaml │ │ ├── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ │ └── 07-kubernetes-secret.yaml │ └── README.md ├── 60-Kubernetes-Startup-Probe/ │ ├── README.md │ └── kube-manifests-startup-probe/ │ ├── 01-persistent-volume-claim.yaml │ ├── 02-UserManagement-ConfigMap.yaml │ ├── 03-mysql-deployment.yaml │ ├── 04-mysql-clusterip-service.yaml │ ├── 05-UserMgmtWebApp-Deployment.yaml │ ├── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ └── 07-kubernetes-secret.yaml ├── 61-Kubernetes-Readiness-Probe/ │ ├── README.md │ └── 
kube-manifests-readiness-probe/ │ ├── 01-persistent-volume-claim.yaml │ ├── 02-UserManagement-ConfigMap.yaml │ ├── 03-mysql-deployment.yaml │ ├── 04-mysql-clusterip-service.yaml │ ├── 05-UserMgmtWebApp-Deployment.yaml │ ├── 06-UserMgmtWebApp-LoadBalancer-Service.yaml │ └── 07-kubernetes-secret.yaml ├── 62-Kubernetes-Requests-and-Limits/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ └── 02-kubernetes-loadbalancer-service.yaml ├── 63-GKE-Cluster-Autoscaling/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ └── 02-kubernetes-loadbalancer-service.yaml ├── 64-Kubernetes-Namespaces/ │ ├── 01-kube-manifests-imperative/ │ │ ├── 01-kubernetes-deployment.yaml │ │ └── 02-kubernetes-loadbalancer-service.yaml │ ├── 02-kube-manifests-declarative/ │ │ ├── 00-kubernetes-namespace.yaml │ │ ├── 01-kubernetes-deployment.yaml │ │ └── 02-kubernetes-loadbalancer-service.yaml │ └── README.md ├── 65-Kubernetes-Namespaces-ResourceQuota/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-namespace.yaml │ ├── 02-kubernetes-resourcequota.yaml │ ├── 03-kubernetes-deployment.yaml │ └── 04-kubernetes-loadbalancer-service.yaml ├── 66-Kubernetes-Namespaces-LimitRange/ │ ├── 01-kube-manifests-LimitRange-defaults/ │ │ ├── 01-kubernetes-namespace.yaml │ │ ├── 02-kubernetes-resourcequota-limitrange.yaml │ │ ├── 03-kubernetes-deployment.yaml │ │ └── 04-kubernetes-loadbalancer-service.yaml │ ├── 02-kube-manifests-LimitRange-MinMax/ │ │ ├── 01-kubernetes-namespace.yaml │ │ ├── 02-kubernetes-resourcequota-limitrange.yaml │ │ ├── 03-kubernetes-deployment.yaml │ │ └── 04-kubernetes-loadbalancer-service.yaml │ └── README.md ├── 67-GKE-Horizontal-Pod-Autoscaler/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ ├── 02-kubernetes-cip-service.yaml │ └── 03-kubernetes-hpa.yaml ├── 68-GKE-AutoPilot-Cluster/ │ ├── README.md │ └── kube-manifests/ │ ├── 01-kubernetes-deployment.yaml │ └── 02-kubernetes-loadbalancer-service.yaml 
├── 69-Access-To-Multiple-Clusters/
│   └── README.md
├── README.md
├── course-presentation/
│   └── Google-Kubernetes-Engine-GKE-GCP-v3R.pptx
└── git-deploy.sh

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Local .terraform directories
**/.terraform/*
.DS_Store

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log

# Ignore any .tfvars files that are generated automatically for each Terraform run. Most
# .tfvars files are managed as part of configuration and so should be included in
# version control.
#
# example.tfvars

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using negated pattern
#
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*

================================================
FILE: 01-Create-GCP-Account/README.md
================================================
---
title: Create GCP Cloud Account
description: Learn to create a GCP Cloud Account
---

## Step-01: Introduction
- Create a GCP Cloud Account

## Step-02: Create a Google Account
- We need a Google account (Gmail account) before creating a GCP Cloud Account.
- Create a Google Account if you do not already have one.
## Step-03: Create GCP Account
- Go to https://cloud.google.com
- Follow the presentation slides to create the GCP Account

## Step-04: Create Budget Alerts
- Go to Billing and create Budget Alerts

================================================
FILE: 02-Create-GKE-Cluster/README.md
================================================
---
title: GCP Google Kubernetes Engine - Create GKE Cluster
description: Learn to create a Google Kubernetes Engine (GKE) Cluster
---

## Step-01: Introduction
- Create a GKE Standard Cluster
- Configure Google CloudShell to access the GKE Cluster
- Deploy a simple Kubernetes Deployment and Kubernetes LoadBalancer Service, and test them
- Clean-Up

## Step-02: Create Standard GKE Cluster
- Go to Kubernetes Engine -> Clusters -> CREATE
- Select **GKE Standard -> CONFIGURE**
- **Cluster Basics**
  - **Name:** standard-public-cluster-1
  - **Location type:** Regional
  - **Region:** us-central1
  - **Specify default node locations:** us-central1-a, us-central1-b, us-central1-c
- **Release Channel**
  - **Release Channel:** Rapid Channel
  - **Version:** LATEST AVAILABLE ON THAT DAY
  - REST ALL LEAVE TO DEFAULTS
- **NODE POOLS: default-pool**
  - **Node pool details**
    - **Name:** default-pool
    - **Number of Nodes (per zone):** 1
    - **Node Pool Upgrade Strategy:** Surge Upgrade
  - **Nodes: Configure node settings**
    - **Image type:** Container-Optimized OS
    - **Machine configuration**
      - **GENERAL PURPOSE SERIES:** E2
      - **Machine Type:** e2-small
    - **Boot disk type:** Balanced persistent disk
    - **Boot disk size (GB):** 20
    - **Boot disk encryption:** Google-managed encryption key (default)
    - **Enable nodes on Spot VMs:** CHECKED
  - **Node Networking:** LEAVE TO DEFAULTS
  - **Node Security:**
    - **Access scopes:** Allow default access (LEAVE TO DEFAULT)
    - REST ALL REVIEW AND LEAVE TO DEFAULTS
  - **Node Metadata:** REVIEW AND LEAVE TO DEFAULTS
- **CLUSTER**
  - **Automation:** REVIEW AND LEAVE TO DEFAULTS
  - **Networking:** REVIEW AND LEAVE TO DEFAULTS
    - **CHECK THIS BOX: Enable Dataplane V2** (IN FUTURE VERSIONS IT WILL BE ENABLED BY DEFAULT)
  - **Security:** REVIEW AND LEAVE TO DEFAULTS
    - **CHECK THIS BOX: Enable Workload Identity** (IN FUTURE VERSIONS IT WILL BE ENABLED BY DEFAULT)
  - **Metadata:** REVIEW AND LEAVE TO DEFAULTS
  - **Features:** REVIEW AND LEAVE TO DEFAULTS
- CLICK ON **CREATE**

## Step-03: Verify Cluster Details
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Review
  - Details Tab
  - Nodes Tab - Review the same nodes in **Compute Engine**
  - Storage Tab - Review Storage Classes
  - Logs Tab - Review Cluster Logs
    - Review Cluster Logs **Filter By Severity**

## Step-04: Verify Additional Features in GKE on a High-Level
### Step-04-01: Verify Workloads Tab
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Workloads -> **SHOW SYSTEM WORKLOADS**

### Step-04-02: Verify Services & Ingress
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Services & Ingress -> **SHOW SYSTEM OBJECTS**

### Step-04-03: Verify Applications, Secrets & ConfigMaps
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Applications
- Secrets & ConfigMaps

### Step-04-04: Verify Storage
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Storage Classes
  - premium-rwo
  - standard
  - standard-rwo

### Step-04-05: Verify the below
1. Object Browser
2. Migrate to Containers
3. Backup for GKE
4. Config Management
5. Protect

## Step-05: Google CloudShell: Connect to GKE Cluster using kubectl
- [kubectl Authentication in GKE](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke)
```t
# Verify gke-gcloud-auth-plugin Installation (if not installed, install it)
gke-gcloud-auth-plugin --version

# Install Kubectl authentication plugin for GKE
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin

# Verify gke-gcloud-auth-plugin Installation
gke-gcloud-auth-plugin --version

# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT-ID>
gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123

# Run kubectl with the new plugin prior to the release of v1.25
vi ~/.bashrc
export USE_GKE_GCLOUD_AUTH_PLUGIN=True

# Reload the environment value
source ~/.bashrc

# Check if Environment variable loaded in Terminal
echo $USE_GKE_GCLOUD_AUTH_PLUGIN

# Verify kubectl version
kubectl version --short

# Install kubectl (if not installed)
gcloud components install kubectl

# Configure kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --zone <ZONE> --project <PROJECT-ID>
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-c --project kdaida123

# Verify Kubernetes Worker Nodes
kubectl get nodes

# Verify System Pods in kube-system Namespace
kubectl -n kube-system get pods

# Verify kubeconfig file
cat $HOME/.kube/config
kubectl config view
```

## Step-06: Review Sample Application: 01-kubernetes-deployment.yaml
- **Folder:** kube-manifests
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
```

## Step-07: Review Sample Application: 02-kubernetes-loadbalancer-service.yaml
- **Folder:** kube-manifests
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port
```

## Step-08: Upload Sample App to Google CloudShell
```t
# Upload Sample App to Google CloudShell
Go to Google CloudShell -> 3 Dots -> Upload -> Folder -> google-kubernetes-engine

# Change Directory
cd google-kubernetes-engine/02-Create-GKE-Cluster

# Verify folder uploaded
ls kube-manifests/

# Verify Files
cat kube-manifests/01-kubernetes-deployment.yaml
cat kube-manifests/02-kubernetes-loadbalancer-service.yaml
```

## Step-09: Deploy Sample Application and Verify
```t
# Change Directory
cd google-kubernetes-engine/02-Create-GKE-Cluster

# Deploy Sample App using kubectl
kubectl apply -f kube-manifests/

# List Deployments
kubectl get deploy

# List Pods
kubectl get pod

# List Services
kubectl get svc

# Access Sample Application
http://<EXTERNAL-IP-FROM-GET-SVC-OUTPUT>
```

## Step-10: Verify Workloads in GKE Dashboard
- Go to GCP Console -> Kubernetes Engine -> Workloads
- Click on **myapp1-deployment**
- Review all tabs

## Step-11: Verify Services in GKE Dashboard
- Go to GCP Console -> Kubernetes Engine -> Services & Ingress
- Click on **myapp1-lb-service**
- Review all tabs

## Step-12: Verify Load Balancer
- Go to GCP Console -> Networking Services -> Load Balancing
- Review all tabs

## Step-13: Clean-Up
- Go to Google Cloud Shell
```t
# Change Directory
cd google-kubernetes-engine/02-Create-GKE-Cluster

# Delete Kubernetes Deployment and Service
kubectl delete -f kube-manifests/

# List Deployments
kubectl get deploy

# List Pods
kubectl get pod

# List Services
kubectl get svc
```

================================================
FILE: 02-Create-GKE-Cluster/kube-manifests/01-kubernetes-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
          ports:
            - containerPort: 8080

================================================
FILE: 02-Create-GKE-Cluster/kube-manifests/02-kubernetes-loadbalancer-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 8080 # Container Port

================================================
FILE: 03-gcloud-cli-install-macos/README.md
================================================
---
title: gcloud cli install on macOS
description: Learn to install gcloud cli on macOS
---

## Step-01: Introduction
- Install the gcloud CLI on macOS
- Configure kubeconfig for kubectl on your local terminal
- Verify that you are able to reach the GKE Cluster using kubectl from your local terminal

## Step-02: Install gcloud cli on macOS
- [Install gcloud cli](https://cloud.google.com/sdk/docs/install-sdk#mac)
```t
# Verify Python Version (supported versions are Python 3.5 to 3.8; 3.7 recommended)
python3 -V

# Determine your machine hardware
uname -m

# Create Folder
mkdir gcloud-cli-software

# Download gcloud cli based on machine hardware
## Important Note: Download the latest version available on that respective day
Download Link: https://cloud.google.com/sdk/docs/install-sdk#mac
## As of today the below is the latest version (x86_64)
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-418.0.0-darwin-x86_64.tar.gz

# Unzip binary
ls -lrta
tar -zxf google-cloud-cli-418.0.0-darwin-x86_64.tar.gz

# Run the install script with screen reader mode on:
./google-cloud-sdk/install.sh --screen-reader=true
```

## Step-03: Verify gcloud cli version
```t
# Open new terminal
AS PATH is updated, open a new terminal

# gcloud cli version
gcloud version

## Sample Output
Kalyans-Mac-mini:gcloud-cli-software kalyanreddy$ gcloud version
Google Cloud SDK 418.0.0
bq 2.0.85
core 2023.02.13
gcloud-crc32c 1.0.0
gsutil 5.20
Kalyans-Mac-mini:gcloud-cli-software kalyanreddy$
```

## Step-04: Initialize gcloud CLI in local Terminal
```t
# Initialize gcloud CLI
./google-cloud-sdk/bin/gcloud init

# gcloud config Configurations Commands (For Reference)
gcloud config list
gcloud config configurations list
gcloud config configurations activate
gcloud config configurations create
gcloud config configurations delete
gcloud config configurations describe
gcloud config configurations rename
```

## Step-05: Verify gke-gcloud-auth-plugin
```t
# Change Directory
cd gcloud-cli-software

## Important Note about gke-gcloud-auth-plugin:
1. Kubernetes clients require an authentication plugin, gke-gcloud-auth-plugin, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters

# Verify if gke-gcloud-auth-plugin is installed
gke-gcloud-auth-plugin --version

# Install gke-gcloud-auth-plugin
gcloud components install gke-gcloud-auth-plugin

# Verify if gke-gcloud-auth-plugin is installed
gke-gcloud-auth-plugin --version
```

## Step-06: Remove any existing kubectl clients
```t
# Verify kubectl version
kubectl version --short
which kubectl
Observation:
1. We are not using kubectl from the gcloud CLI and we need to fix that.
```
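Step-06 above uses `which kubectl` to see which client is on the PATH. Since the gcloud CLI installs its kubectl under `.../google-cloud-sdk/bin/`, the result can be classified with a simple path match. Below is a minimal, self-contained sketch of that check as a shell function; the function name and the two sample paths are illustrative, not values read from a real machine.

```shell
#!/usr/bin/env bash
# Sketch: classify a kubectl path as gcloud-managed or not.
# The gcloud CLI ships its kubectl inside the SDK's bin directory,
# so a glob match on the path suffix is enough for this check.

is_gcloud_kubectl() {
  case "$1" in
    */google-cloud-sdk/bin/kubectl) echo "gcloud-managed" ;;
    *)                              echo "other" ;;
  esac
}

# Illustrative paths (not read from a real machine):
is_gcloud_kubectl "/usr/local/bin/kubectl"            # other
is_gcloud_kubectl "/opt/google-cloud-sdk/bin/kubectl" # gcloud-managed
```

In practice you would feed it the real path, e.g. `is_gcloud_kubectl "$(which kubectl)"`; a result of `other` means a stray client is shadowing the one you install with `gcloud components install kubectl`.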
# Removing existing kubectl which kubectl rm /usr/local/bin/kubectl ``` ## Step-07: Install kubectl client from gcloud CLI ```t # List gcloud components gcloud components list ## SAMPLE OUTPUT Status: Not Installed Name: kubectl ID: kubectl Size: < 1 MiB # Install kubectl client gcloud components install kubectl # Verify kubectl version OPEN NEW TERMINAL AS PATH IS UPDATED kubectl version --short which kubectl ``` ## Step-08: Fix kubectl client version equal to GKE Cluster version - **Important Note:** You must use a kubectl version that is within one minor version difference of your Kubernetescluster control plane. - For example, a 1.24 kubectl client works with Kubernetes Cluster 1.23, 1.24 and 1.25 clusters. - As our GKE cluster version is 1.26, we will also upgrade our kubectl to 1.26 ```t # Verify kubectl version OPEN NEW TERMINAL AS PATH IS UPDATED kubectl version --short which kubectl # Change Directroy cd /Users/kalyanreddy/Documents/course-repos/gcloud-cli-software/google-cloud-sdk/bin/ # List files ls -lrta # Backup existing kubectl cp kubectl kubectl_bkup_1.24 # Copy latest kubectl cp kubectl.1.26 kubectl # Verify kubectl version kubectl version --short which kubectl ``` ## Step-09: Configure kubeconfig for kubectl in local desktop terminal ```t # Clean-Up kubeconfig file (if any older configs exists) rm $HOME/.kube/config # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123 # Verify Kubernetes Worker Nodes kubectl get nodes # Verify System Pod in kube-system Namespace kubectl -n kube-system get pods # Verify kubeconfig file cat $HOME/.kube/config kubectl config view ``` ## References - [gcloud CLI](https://cloud.google.com/sdk/gcloud) - [Install the Google Cloud CLI](https://cloud.google.com/sdk/docs/install-sdk#mac) ================================================ FILE: 
04-gcloud-cli-install-windowsos/README.md
================================================
---
title: gcloud cli install on WindowsOS
description: Learn to install gcloud cli on WindowsOS
---

## Step-01: Introduction
- Install gcloud CLI on WindowsOS
- Configure kubeconfig for kubectl on your local terminal
- Verify if you are able to reach GKE Cluster using kubectl from your local terminal
- Fix kubectl version to match with GKE Cluster Server Version.

## Step-02: Install gcloud cli on WindowsOS
- [Install gcloud cli on WindowsOS](https://cloud.google.com/sdk/docs/install-sdk#windows)
```t
## Important Note: Download the latest version available on that respective day
Download Link: https://cloud.google.com/sdk/docs/install-sdk#windows

## Run the Installer
GoogleCloudSDKInstaller.exe
```

## Step-03: Verify gcloud cli version
```t
# gcloud cli version
gcloud version
```

## Step-04: Initialize gcloud CLI in local Terminal
```t
# Initialize gcloud CLI
gcloud init

# List accounts whose credentials are stored on the local system:
gcloud auth list

# List the properties in your active gcloud CLI configuration
gcloud config list

# View information about your gcloud CLI installation and the active configuration
gcloud info

# gcloud config Configurations Commands (For Reference)
gcloud config list
gcloud config configurations list
gcloud config configurations activate
gcloud config configurations create
gcloud config configurations delete
gcloud config configurations describe
gcloud config configurations rename
```

## Step-05: Verify gke-gcloud-auth-plugin
```t
## Important Note about gke-gcloud-auth-plugin:
1.
Kubernetes clients require an authentication plugin, gke-gcloud-auth-plugin, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters

# Verify if gke-gcloud-auth-plugin is installed
gke-gcloud-auth-plugin --version

# Install gke-gcloud-auth-plugin
gcloud components install gke-gcloud-auth-plugin

# Verify if gke-gcloud-auth-plugin is installed
gke-gcloud-auth-plugin --version
```

## Step-06: Remove any existing kubectl clients
```t
# Verify kubectl version
kubectl version --output=yaml
Observation:
1. If any kubectl exists before installing it from gcloud, then uninstall it.
2. Usually if docker is installed on our desktop, its equivalent kubectl package mostly will be installed and set on PATH. If it exists, please remove it.
```

## Step-07: Install kubectl client from gcloud CLI
```t
# List gcloud components
gcloud components list

## SAMPLE OUTPUT
Status: Not Installed
Name: kubectl
ID: kubectl
Size: < 1 MiB

# Install kubectl client
gcloud components install kubectl

# Verify kubectl version
kubectl version --output=yaml
```

## Step-08: Configure kubeconfig for kubectl in local desktop terminal
```t
# Verify kubeconfig file
kubectl config view

# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>
gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123

# Verify kubeconfig file
kubectl config view

# Verify Kubernetes Worker Nodes
kubectl get nodes
Observation:
1. It should throw a warning at the end about the huge difference between the kubectl client version and the GKE Cluster Server Version
2. Let's fix that in the next step.
```

## Step-09: Fix kubectl client version equal to GKE Cluster version
- **Important Note:** You must use a kubectl version that is within one minor version difference of your Kubernetes cluster control plane.
- For example, a 1.24 kubectl client works with Kubernetes 1.23, 1.24 and 1.25 clusters.
- As our GKE cluster version is 1.26, we will also upgrade our kubectl to 1.26
```t
# Verify kubectl version
kubectl version --output=yaml

# Change Directory
Go to Google Cloud SDK "bin" directory

# Backup existing kubectl
Backup "kubectl" to "kubectl_bkup_1.24"

# Copy latest kubectl
COPY "kubectl.1.26" as "kubectl"

# Verify kubectl version
kubectl version --output=yaml
```

## References
- [gcloud CLI](https://cloud.google.com/sdk/gcloud)
- [Install the Google Cloud CLI](https://cloud.google.com/sdk/docs/install-sdk#mac)

================================================
FILE: 05-Docker-For-Beginners/README.md
================================================
---
title: Docker Fundamentals
description: Learn Docker Fundamentals
---

## Docker Fundamentals
- For the Docker Fundamentals github repository, please click on the below link
- https://github.com/stacksimplify/docker-fundamentals

================================================
FILE: 06-kubectl-imperative-k8s-pods/README.md
================================================
---
title: Kubernetes PODs
description: Learn about Kubernetes Pods
---

## Step-01: PODs Introduction
- What is a POD?
- What is a Multi-Container POD?

## Step-02: PODs Demo
### Step-02-01: Get Worker Nodes Status
- Verify if kubernetes worker nodes are ready.
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>
gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123

# Get Worker Node Status
kubectl get nodes

# Get Worker Node Status with wide option
kubectl get nodes -o wide
```

### Step-02-02: Create a Pod
- Create a Pod
```t
# Template
kubectl run <desired-pod-name> --image <Container-Image>

# Replace Pod Name, Container Image
kubectl run my-first-pod --image stacksimplify/kubenginx:1.0.0
```

### Step-02-03: List Pods
- Get the list of pods
```t
# List Pods
kubectl get pods

# Alias name for pods is po
kubectl get po
```

### Step-02-04: List Pods with wide option
- List pods with the wide option, which also provides the Node information on which a Pod is running
```t
# List Pods with Wide Option
kubectl get pods -o wide
```

### Step-02-05: What happened in the background when the above command was run?
1. Kubernetes created a pod
2. Pulled the docker image from docker hub
3. Created the container in the pod
4. Started the container present in the pod

### Step-02-06: Describe Pod
- Describe the POD, primarily required during troubleshooting.
- Events shown will be of great help during troubleshooting.
```t
# To get list of pod names
kubectl get pods

# Describe the Pod
kubectl describe pod <Pod-Name>
kubectl describe pod my-first-pod
Observation:
1. Review Events - that's the key for troubleshooting, understanding what happened
```

### Step-02-07: Access Application
- Currently we can access this application only inside worker nodes.
- To access it externally, we need to create a **NodePort or Load Balancer Service**.
- **Services** is a very important concept in Kubernetes.

### Step-02-08: Delete Pod
```t
# To get list of pod names
kubectl get pods

# Delete Pod
kubectl delete pod <Pod-Name>
kubectl delete pod my-first-pod
```

## Step-03: Load Balancer Service Introduction
- What are Services in k8s?
- What is a Load Balancer Service?
- How does it work?
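Before the imperative demo, it can help to see what the same Load Balancer Service looks like declaratively. The following is a sketch only (the Service and Pod names match this demo; the selector label `run: my-first-pod` is the label `kubectl run` applies by default, and the port numbers assume the nginx container listens on 80):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-first-service
spec:
  type: LoadBalancer           # Provisions a Google Cloud Load Balancer on GKE
  selector:
    run: my-first-pod          # kubectl run labels the pod run=<pod-name> by default
  ports:
    - port: 80                 # Port the Service (and external LB) listens on
      targetPort: 80           # Container port traffic is forwarded to
```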
## Step-04: Demo - Expose Pod with a Service
- Expose pod with a service (Load Balancer Service) to access the application externally (from the internet)
- **Ports**
  - **port:** Port on which the service listens internally in the Kubernetes cluster
  - **targetPort:** We define the container port here, on which our application is running.
- Verify the following before LB Service creation
  - Load Balancers in the GCP Console (Network Services -> Load balancing)
  - Frontend configuration
  - Backend configuration
  - External IP addresses
```t
# Create a Pod
kubectl run <desired-pod-name> --image <Container-Image>
kubectl run my-first-pod --image stacksimplify/kubenginx:1.0.0

# Expose Pod as a Service
kubectl expose pod <Pod-Name> --type=LoadBalancer --port=80 --name=<Service-Name>
kubectl expose pod my-first-pod --type=LoadBalancer --port=80 --name=my-first-service

# Get Service Info
kubectl get service
kubectl get svc
Observation:
1. Initially External-IP will show as pending and slowly it will get the external-ip assigned and displayed.
2. It will take 2 to 3 minutes to get the external-ip listed

# Describe Service
kubectl describe service my-first-service

# Access Application
http://<External-IP-from-get-service-output>
curl http://<External-IP-from-get-service-output>
```
- Verify the following after LB Service creation
  - Google Load Balancer created, verify it.
  - Verify Backends
  - Verify Frontends
- Verify **Workloads and Services** on the Google GKE Dashboard in the GCP Console

## Step-05: Interact with a Pod
### Step-05-01: Verify Pod Logs
```t
# Get Pod Name
kubectl get po

# Dump Pod logs
kubectl logs <pod-name>
kubectl logs my-first-pod

# Stream pod logs with -f option and access application to see logs
kubectl logs -f <pod-name>
kubectl logs -f my-first-pod
```
- **Important Notes**
  - Refer to the below link and search for **Interacting with running Pods** for additional log options
  - Troubleshooting skills are very important. So please go through all logging options available and master them.
- **Reference:** https://kubernetes.io/docs/reference/kubectl/cheatsheet/ ### Step-05-02: Connect to a Container in POD and execute command ```t # Connect to Nginx Container in a POD kubectl exec -it -- /bin/bash kubectl exec -it my-first-pod -- /bin/bash # Execute some commands in Nginx container ls cd /usr/share/nginx/html cat index.html exit ``` ### Step-05-03: Running individual commands in a Container ```t # Template kubectl exec -it -- # Sample Commands kubectl exec -it my-first-pod -- env kubectl exec -it my-first-pod -- ls kubectl exec -it my-first-pod -- cat /usr/share/nginx/html/index.html ``` ## Step-06: Get YAML Output of Pod & Service ### Get YAML Output ```t # Get pod definition YAML output kubectl get pod my-first-pod -o yaml # Get service definition YAML output kubectl get service my-first-service -o yaml ``` ## Step-07: Clean-Up ```t # Get all Objects in default namespace kubectl get all # Delete Services kubectl delete svc my-first-service # Delete Pod kubectl delete pod my-first-pod # Get all Objects in default namespace kubectl get all ``` ## LOGS - More Options ```t # Return snapshot logs from pod nginx with only one container kubectl logs nginx # Return snapshot of previous terminated ruby container logs from pod web-1 kubectl logs -p -c ruby web-1 # Begin streaming the logs of the ruby container in pod web-1 kubectl logs -f -c ruby web-1 # Display only the most recent 20 lines of output in pod nginx kubectl logs --tail=20 nginx # Show all logs from pod nginx written in the last hour kubectl logs --since=1h nginx ``` ================================================ FILE: 07-kubectl-declarative-k8s-ReplicaSets/README.md ================================================ --- title: Kubernetes ReplicaSets description: Learn about Kubernetes ReplicaSets --- ## Step-01: Introduction to ReplicaSets - What are ReplicaSets? - What is the advantage of using ReplicaSets? 
## Step-02: Create ReplicaSet
### Step-02-01: Create ReplicaSet
- Create ReplicaSet
```t
# Kubernetes ReplicaSet
kubectl create -f replicaset-demo.yml
```
- **replicaset-demo.yml**
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-helloworld-rs
  labels:
    app: my-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-helloworld
  template:
    metadata:
      labels:
        app: my-helloworld
    spec:
      containers:
      - name: my-helloworld-app
        image: stacksimplify/kube-helloworld:1.0.0
```

### Step-02-02: List ReplicaSets
- Get list of ReplicaSets
```t
# List ReplicaSets
kubectl get replicaset
kubectl get rs
```

### Step-02-03: Describe ReplicaSet
- Describe the newly created ReplicaSet
```t
# Describe ReplicaSet
kubectl describe rs/<replicaset-name>
kubectl describe rs/my-helloworld-rs
[or]
kubectl describe rs my-helloworld-rs
```

### Step-02-04: List of Pods
- Get list of Pods
```t
# Get list of Pods
kubectl get pods
kubectl describe pod <pod-name>

# Get list of Pods with Pod IP and Node in which it is running
kubectl get pods -o wide
```

### Step-02-05: Verify the Owner of the Pod
- Verify the owner reference of the pod.
- Verify the **"name"** tag under **"ownerReferences"**. We will find the ReplicaSet name to which this pod belongs.
```t
# List Pod with Output as YAML
kubectl get pods -o yaml
kubectl get pods my-helloworld-rs-c8rrj -o yaml
```

## Step-03: Expose ReplicaSet as a Service
- Expose ReplicaSet with a service (Load Balancer Service) to access the application externally (from the internet)
```t
# Expose ReplicaSet as a Service
kubectl expose rs <ReplicaSet-Name> --type=LoadBalancer --port=80 --target-port=8080 --name=<Service-Name>
kubectl expose rs my-helloworld-rs --type=LoadBalancer --port=80 --target-port=8080 --name=my-helloworld-rs-service

# List Services
kubectl get service
kubectl get svc
```
- **Access the Application using External or Public IP**
```t
# Access Application
http://<External-IP-from-get-service-output>/hello
curl http://<External-IP-from-get-service-output>/hello

# Observation
1.
Each time we access the application, the request will be sent to a different pod and the pod id will be displayed for us.
```

## Step-04: Test ReplicaSet Reliability or High Availability
- Test how the high availability or reliability concept is achieved automatically in Kubernetes
- Whenever a POD is accidentally terminated due to some application issue, the ReplicaSet should auto-create that Pod to maintain the desired number of Replicas configured, to achieve High Availability.
```t
# To get Pod Name
kubectl get pods

# Delete the Pod
kubectl delete pod <Pod-Name>

# Verify the new pod got created automatically
kubectl get pods   (Verify Age and name of new pod)
```

## Step-05: Test ReplicaSet Scalability feature
- Test how seamless & quick the scaling is
- Update the **replicas** field in **replicaset-demo.yml** from 3 to 6.
```yaml
# Before change
spec:
  replicas: 3

# After change
spec:
  replicas: 6
```
- Update the ReplicaSet
```t
# Apply latest changes to ReplicaSet
kubectl replace -f replicaset-demo.yml

# Verify if new pods got created
kubectl get pods -o wide
```

## Step-06: Delete ReplicaSet & Service
### Step-06-01: Delete ReplicaSet
```t
# Delete ReplicaSet
kubectl delete rs <ReplicaSet-Name>

# Sample Commands
kubectl delete rs/my-helloworld-rs
[or]
kubectl delete rs my-helloworld-rs

# Verify if ReplicaSet got deleted
kubectl get rs
```

### Step-06-02: Delete Service created for ReplicaSet
```t
# Delete Service
kubectl delete svc <service-name>

# Sample Commands
kubectl delete svc my-helloworld-rs-service
[or]
kubectl delete svc/my-helloworld-rs-service

# Verify if Service got deleted
kubectl get svc
```

================================================
FILE: 07-kubectl-declarative-k8s-ReplicaSets/replicaset-demo.yml
================================================
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-helloworld-rs
  labels:
    app: my-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-helloworld
  template:
    metadata:
      labels:
        app: my-helloworld
    spec:
      containers:
      - name: my-helloworld-app
        image: stacksimplify/kube-helloworld:1.0.0

================================================
FILE: 08-kubectl-imperative-k8s-deployment-CREATE/README.md
================================================
---
title: Kubernetes - Deployment
description: Learn and Implement Kubernetes Deployment
---

## Kubernetes Deployment - Topics
1. Create Deployment
2. Scale the Deployment
3. Expose Deployment as a Service
4. Update Deployment
5. Rollback Deployment
6. Rolling Restarts
7. Pause & Resume Deployments
8. Canary Deployments (Will be covered in the Declarative section of Deployments)

## Step-01: Introduction to Deployments
- What is a Deployment?
- What all can we do using a Deployment?
- Create a Deployment
- Scale the Deployment
- Expose the Deployment as a Service

## Step-02: Create Deployment
- Create Deployment to rollout a ReplicaSet
- Verify Deployment, ReplicaSet & Pods
- **Docker Image Location:** https://hub.docker.com/repository/docker/stacksimplify/kubenginx
```t
# Create Deployment
kubectl create deployment <Deployment-Name> --image=<Container-Image>
kubectl create deployment my-first-deployment --image=stacksimplify/kubenginx:1.0.0

# Verify Deployment
kubectl get deployments
kubectl get deploy

# Describe Deployment
kubectl describe deployment <deployment-name>
kubectl describe deployment my-first-deployment

# Verify ReplicaSet
kubectl get rs

# Verify Pod
kubectl get po
```

### Update Change-Cause for the Kubernetes Deployment - Rollout History
- **Observation:** We have the rollout history, so we can switch back to older revisions using the revision history available to us
```t
# Verify Rollout History
kubectl rollout history deployment/my-first-deployment

# Update REVISION CHANGE-CAUSE for Kubernetes Deployment
kubectl annotate deployment/my-first-deployment kubernetes.io/change-cause="Deployment CREATE - App Version 1.0.0"

# Verify Rollout History
kubectl rollout history deployment/my-first-deployment
```

## Step-03: Scaling a Deployment
- Scale the deployment to increase the number of replicas (pods)
```t
# Scale Up
the Deployment
kubectl scale --replicas=10 deployment/<Deployment-Name>
kubectl scale --replicas=10 deployment/my-first-deployment

# Verify Deployment
kubectl get deploy

# Verify ReplicaSet
kubectl get rs

# Verify Pods
kubectl get po

# Scale Down the Deployment
kubectl scale --replicas=2 deployment/my-first-deployment
kubectl get deploy
```

## Step-04: Expose Deployment as a Service
- Expose **Deployment** with a service (LoadBalancer Service) to access the application externally (from the internet)
```t
# Expose Deployment as a Service
kubectl expose deployment <Deployment-Name> --type=LoadBalancer --port=80 --target-port=80 --name=<Service-Name>
kubectl expose deployment my-first-deployment --type=LoadBalancer --port=80 --target-port=80 --name=my-first-deployment-service

# Get Service Info
kubectl get svc
```
- **Access the Application using Public IP**
```t
# Access Application
http://<External-IP-from-get-service-output>
curl http://<External-IP-from-get-service-output>
```

================================================
FILE: 09-kubectl-imperative-k8s-deployment-UPDATE/README.md
================================================
---
title: Kubernetes - Update Deployment
description: Learn and Implement Kubernetes Update Deployment
---

## Step-00: Introduction
- We can update deployments using two options
  - Set Image
  - Edit Deployment

## Step-01: Updating Application version V1 to V2 using "Set Image" Option
### Update Deployment
- **Observation:** Please check the container name in the `spec.containers.name` yaml output, make a note of it, and use it in the `kubectl set image` command
```t
# Get Container Name from current deployment
kubectl get deployment my-first-deployment -o yaml

# Update Deployment
kubectl set image deployment/<Deployment-Name> <Container-Name>=<Container-Image>
kubectl set image deployment/my-first-deployment kubenginx=stacksimplify/kubenginx:2.0.0
```

### Verify Rollout Status (Deployment Status)
- **Observation:** By default, rollout happens in a rolling update model, so no downtime.
```t
# Verify Rollout Status
kubectl rollout status deployment/my-first-deployment

# Verify Deployment
kubectl get deploy
```

### Describe Deployment
- **Observation:**
  - Verify the Events and understand that Kubernetes by default does a "Rolling Update" for new application releases.
  - With that said, we will not have downtime for our application.
```t
# Describe Deployment
kubectl describe deployment my-first-deployment
```

### Verify ReplicaSet
- **Observation:** A new ReplicaSet will be created for the new version
```t
# Verify ReplicaSet
kubectl get rs
```

### Verify Pods
- **Observation:** The pod-template-hash label of the new ReplicaSet should be present on the PODs, letting us know these pods belong to the new ReplicaSet.
```t
# List Pods
kubectl get po
```

### Access the Application using Public IP
- We should see `Application Version:V2` whenever we access the application in the browser
```t
# Get Load Balancer IP
kubectl get svc

# Application URL
http://<External-IP-from-get-service-output>
```

### Update Change-Cause for the Kubernetes Deployment - Rollout History
- **Observation:** We have the rollout history, so we can switch back to older revisions using the revision history available to us.
```t
# Verify Rollout History
kubectl rollout history deployment/my-first-deployment

# Update REVISION CHANGE-CAUSE
kubectl annotate deployment/my-first-deployment kubernetes.io/change-cause="Deployment UPDATE - App Version 2.0.0 - SET IMAGE OPTION"

# Verify Rollout History
kubectl rollout history deployment/my-first-deployment
```

## Step-02: Update the Application from V2 to V3 using "Edit Deployment" Option
### Edit Deployment
```t
# Edit Deployment
kubectl edit deployment/<Deployment-Name>
kubectl edit deployment/my-first-deployment
```
```yaml
# Change From 2.0.0
    spec:
      containers:
      - image: stacksimplify/kubenginx:2.0.0

# Change To 3.0.0
    spec:
      containers:
      - image: stacksimplify/kubenginx:3.0.0
```

### Verify Rollout Status
- **Observation:** Rollout happens in a rolling update model, so no downtime.
```t # Verify Rollout Status kubectl rollout status deployment/my-first-deployment # Describe Deployment kubectl describe deployment/my-first-deployment ``` ### Verify Replicasets - **Observation:** We should see 3 ReplicaSets now, as we have updated our application to 3rd version 3.0.0 ```t # Verify ReplicaSet and Pods kubectl get rs kubectl get po ``` ### Access the Application using Public IP - We should see `Application Version:V3` whenever we access the application in browser ```t # Get Load Balancer IP kubectl get svc # Application URL http:// ``` ### Update Change-Cause for the Kubernetes Deployment - Rollout History - **Observation:** We have the rollout history, so we can switch back to older revisions using revision history available to us. ```t # Verify Rollout History kubectl rollout history deployment/my-first-deployment # Update REVISION CHANGE-CAUSE kubectl annotate deployment/my-first-deployment kubernetes.io/change-cause="Deployment UPDATE - App Version 3.0.0 - EDIT DEPLOYMENT OPTION" # Verify Rollout History kubectl rollout history deployment/my-first-deployment ``` ================================================ FILE: 10-kubectl-imperative-k8s-deployment-ROLLBACK/README.md ================================================ --- title: Kubernetes - Rollback Deployment description: Learn and Implement Kubernetes Rollback Deployment --- ## Step-00: Introduction - We can rollback a deployment in two ways. - Previous Version - Specific Version ## Step-01: Rollback a Deployment to previous version ### Check the Rollout History of a Deployment ```t # List Deployment Rollout History kubectl rollout history deployment/ kubectl rollout history deployment/my-first-deployment ``` ### Verify changes in each revision - **Observation:** Review the "Annotations" and "Image" tags for clear understanding about changes. 
```t # List Deployment History with revision information kubectl rollout history deployment/my-first-deployment --revision=1 kubectl rollout history deployment/my-first-deployment --revision=2 kubectl rollout history deployment/my-first-deployment --revision=3 ``` ### Rollback to previous version - **Observation:** If we rollback, it will go back to revision-2 and its number increases to revision-4 ```t # Undo Deployment kubectl rollout undo deployment/my-first-deployment # List Deployment Rollout History kubectl rollout history deployment/my-first-deployment ``` ### Verify Deployment, Pods, ReplicaSets ```t # Verify Deployment, Pods, ReplicaSets kubectl get deploy kubectl get rs kubectl get po kubectl describe deploy my-first-deployment ``` ### Access the Application using Public IP - We should see `Application Version:V2` whenever we access the application in browser ```t # Get Load Balancer IP kubectl get svc # Application URL http:// ``` ## Step-02: Rollback to specific revision ### Check the Rollout History of a Deployment ```t # List Deployment Rollout History kubectl rollout history deployment/ kubectl rollout history deployment/my-first-deployment ``` ### Rollback to specific revision ```t # Rollback Deployment to Specific Revision kubectl rollout undo deployment/my-first-deployment --to-revision=3 ``` ### List Deployment History - **Observation:** If we rollback to revision 3, it will go back to revision-3 and its number increases to revision-5 in rollout history ```t # List Deployment Rollout History kubectl rollout history deployment/my-first-deployment ``` ### Access the Application using Public IP - We should see `Application Version:V3` whenever we access the application in browser ```t # Get Load Balancer IP kubectl get svc # Application URL http:// ``` ## Step-03: Rolling Restarts of Application - Rolling restarts will kill the existing pods and recreate new pods in a rolling fashion. 
```t
# Rolling Restarts
kubectl rollout restart deployment/<Deployment-Name>
kubectl rollout restart deployment/my-first-deployment

# Get list of Pods
kubectl get po
```

================================================
FILE: 11-kubectl-imperative-k8s-deployment-PAUSE-RESUME/README.md
================================================
---
title: Kubernetes - Pause & Resume Deployments
description: Implement Kubernetes - Pause & Resume Deployments
---

## Step-00: Introduction
- Why do we need Pausing & Resuming Deployments?
  - If we want to make multiple changes to our Deployment, we can pause the deployment, make all the changes and then resume it.
- We are going to update our Application Version from **V3 to V4** as part of learning "Pause and Resume Deployments"

## Step-01: Pausing & Resuming Deployments
### Check current State of Deployment & Application
```t
# Check the Rollout History of a Deployment
kubectl rollout history deployment/my-first-deployment
Observation: Make a note of the last version number

# Get list of ReplicaSets
kubectl get rs
Observation: Make a note of the number of ReplicaSets present.

# Access the Application
http://<External-IP-from-get-service-output>
Observation: Make a note of the application version
```

### Pause Deployment and Make Two Changes
```t
# Pause the Deployment
kubectl rollout pause deployment/<Deployment-Name>
kubectl rollout pause deployment/my-first-deployment

# Update Deployment - Application Version from V3 to V4
kubectl set image deployment/my-first-deployment kubenginx=stacksimplify/kubenginx:4.0.0

# Check the Rollout History of a Deployment
kubectl rollout history deployment/my-first-deployment
Observation: No new rollout should start; we should see the same number of revisions as earlier, with the last revision number matching the one we noted earlier.

# Get list of ReplicaSets
kubectl get rs
Observation: No new ReplicaSet created. We should have the same number of ReplicaSets as earlier when we took note.
# Make one more change: set limits to our container kubectl set resources deployment/my-first-deployment -c=kubenginx --limits=cpu=20m,memory=30Mi ``` ### Resume Deployment ```t # Resume the Deployment kubectl rollout resume deployment/my-first-deployment # Check the Rollout History of a Deployment kubectl rollout history deployment/my-first-deployment Observation: You should see a new version got created # Update REVISION CHANGE-CAUSE kubectl annotate deployment/my-first-deployment kubernetes.io/change-cause="Deployment PAUSE RESUME Demo - App Version 4.0.0 " # Check the Rollout History of a Deployment kubectl rollout history deployment/my-first-deployment # Get list of ReplicaSets kubectl get rs Observation: You should see new ReplicaSet. # Get Load Balancer IP kubectl get svc ``` ### Access Application ```t # Access the Application http:// Observation: You should see Application V4 version ``` ## Step-02: Clean-Up ```t # Delete Deployment kubectl delete deployment my-first-deployment # Delete Service kubectl delete svc my-first-deployment-service # Get all Objects from Kubernetes default namespace kubectl get all ``` ================================================ FILE: 12-kubectl-imperative-k8s-services/README.md ================================================ --- title: Kubernetes Services description: Learn about Kubernetes ClusterIP and Load Balancer Services --- ## Step-01: Introduction to Services - **Service Types** 1. ClusterIp 2. NodePort 3. LoadBalancer 4. ExternalName 5. Ingress - We are going to look in to ClusterIP and LoadBalancer Service in this section with a detailed example. - LoadBalancer Type is primarily for cloud providers and it will differ cloud to cloud, so we will do it accordingly (per cloud basis) - ExternalName doesn't have Imperative commands and we need to write YAML definition for the same, so we will look in to it as and when it is required in our course. 
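The imperative commands in the next steps map to plain Service manifests. As a reference sketch only (names match this section's demo; the `app` selector label is the label `kubectl create deployment` applies by default, and the ports assume the Spring Boot backend listens on 8080):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-backend-service
spec:
  # spec.type defaults to ClusterIP when omitted,
  # which is why "kubectl expose" without --type creates a ClusterIP Service
  selector:
    app: my-backend-rest-app   # kubectl create deployment labels pods app=<name>
  ports:
    - port: 8080               # Port the Service listens on inside the cluster
      targetPort: 8080         # Container port of the backend application
```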
## Step-02: ClusterIP Service - Backend Application Setup
- Create a deployment for the Backend Application (Spring Boot REST Application)
- Create a ClusterIP service for load balancing the backend application.
```t
# Create Deployment for Backend Rest App
kubectl create deployment my-backend-rest-app --image=stacksimplify/kube-helloworld:1.0.0
kubectl get deploy

# Create ClusterIP Service for Backend Rest App
kubectl expose deployment my-backend-rest-app --port=8080 --target-port=8080 --name=my-backend-service
kubectl get svc
Observation: We don't need to specify "--type=ClusterIP" because the default is to create a ClusterIP Service.
```
- **Important Note:** If the backend application port (Container Port: 8080) and Service Port (8080) are the same, we don't need to use **--target-port=8080**, but to avoid confusion I have added it. The same applies to the frontend application and service.
- **Backend HelloWorld Application Source** [kube-helloworld](https://github.com/stacksimplify/kubernetes-fundamentals/tree/master/00-Docker-Images/02-kube-backend-helloworld-springboot/kube-helloworld)

## Step-03: LoadBalancer Service - Frontend Application Setup
- We have implemented the **LoadBalancer Service** multiple times so far (in pods, replicasets and deployments); even then, we are going to implement it one more time to get a full architectural view in relation with the ClusterIP service.
- Create a deployment for the Frontend Application (Nginx acting as Reverse Proxy)
- Create a LoadBalancer service for load balancing the frontend application.
- **Important Note:** In the Nginx reverse proxy, ensure the backend service name `my-backend-service` is updated when you are building the frontend container.
We already built it and made it ready for this demo (stacksimplify/kube-frontend-nginx:1.0.0)
- **Nginx Conf File**
```conf
server {
    listen       80;
    server_name  localhost;
    location / {
        # Update your backend application Kubernetes Cluster-IP Service name and port below
        # proxy_pass http://<Backend-Service-Name>:<Port>;
        proxy_pass http://my-backend-service:8080;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
```
- **Docker Image Location:** https://hub.docker.com/repository/docker/stacksimplify/kube-frontend-nginx
- **Frontend Nginx Reverse Proxy Application Source** [kube-frontend-nginx](https://github.com/stacksimplify/kubernetes-fundamentals/tree/master/00-Docker-Images/03-kube-frontend-nginx)
```t
# Create Deployment for Frontend Nginx Proxy
kubectl create deployment my-frontend-nginx-app --image=stacksimplify/kube-frontend-nginx:1.0.0
kubectl get deploy

# Create LoadBalancer Service for Frontend Nginx Proxy
kubectl expose deployment my-frontend-nginx-app --type=LoadBalancer --port=80 --target-port=80 --name=my-frontend-service
kubectl get svc

# Get Load Balancer IP
kubectl get svc
http://<External-IP>/hello
curl http://<External-IP>/hello

# Scale backend with 10 replicas
kubectl scale --replicas=10 deployment/my-backend-rest-app

# Test again to view the backend service Load Balancing
http://<External-IP>/hello
curl http://<External-IP>/hello
```

## Step-04: Clean-Up Kubernetes Deployment and Services
```t
# List Services
kubectl get svc

# Delete Services
kubectl delete service my-backend-service
kubectl delete service my-frontend-service

# List Deployments
kubectl get deploy

# Delete Deployments
kubectl delete deployment my-backend-rest-app
kubectl delete deployment my-frontend-nginx-app
```

================================================
FILE: 13-YAML-Basics/README.md
================================================
---
title: YAML Basics for Kubernetes
description: Learn YAML Basics
---

## Step-01: Comments & Key Value Pairs
- Space after colon is mandatory to differentiate key and value
```yml
# Defining simple key value pairs
name: kalyan
age: 23
city: Hyderabad
```

## Step-02: Dictionary / Map
- Set of properties grouped together after an item
- Equal amount of blank space required for all the items under a dictionary
```yml
person:
  name: kalyan
  age: 23
  city: Hyderabad
```

## Step-03: Array / Lists
- Dash indicates an element of an array
```yml
person: # Dictionary
  name: kalyan
  age: 23
  city: Hyderabad
  hobbies: # List
    - cycling
    - cooking
  hobbies: [cycling, cooking] # List with a different notation
```

## Step-04: Multiple Lists
- Dash indicates an element of an array
```yml
person: # Dictionary
  name: kalyan
  age: 23
  city: Hyderabad
  hobbies: # List
    - cycling
    - cooking
  hobbies: [cycling, cooking] # List with a different notation
  friends: # Multiple Lists
    - name: friend1
      age: 22
    - name: friend2
      age: 25
```

## Step-05: Sample Pod Template for Reference
```yml
apiVersion: v1 # String
kind: Pod # String
metadata: # Dictionary
  name: myapp-pod
  labels: # Dictionary
    app: myapp
spec:
  containers: # List
    - name: myapp
      image: stacksimplify/kubenginx:1.0.0
      ports: # Multiple Lists
        - containerPort: 80
          protocol: "TCP"
        - containerPort: 81
          protocol: "TCP"
```

================================================
FILE: 13-YAML-Basics/sample-file.yml
================================================
# Simple Key value Pairs
person: # Dictionary
  name: kalyan
  age: 23
  city: Hyderabad
  hobbies: # List
    - cooking
    - cycling
  friends: # Multiple lists
    - name: friend1
      age: 23
    - name: friend2
      age: 22
--- # YAML Document Separator
apiVersion: v1 # String
kind: Pod # String
metadata: # Dictionary
  name: myapp-pod
  labels: # Dictionary
    app: myapp
    tier: frontend
spec:
  containers: # List
    - name: myapp
      image: stacksimplify/kubenginx:1.0.0
      ports: # Multiple Lists
        - containerPort: 80
          protocol: "TCP"
        - containerPort: 81
          protocol: "TCP"

================================================
FILE: 13-YAML-Basics/yaml-demo.yaml
================================================
# Simple Key Value Pairs
person: #
Dictionary name: kalyan age: 23 city: Hyderabad hobbies: # List - cooking - cycling hobbies: [cooking, cycling] # Another Notation for Lists friends: # Multiple Lists - name: friend1 age: 23 - name: friend2 age: 22 --- # YAML Document Separator apiVersion: v1 # String kind: Pod # String metadata: # Dictionary name: myapp-pod labels: # Dictionary app: myapp spec: containers: # List - name: myapp image: stacksimplify/kubenginx:1.0.0 ports: # Multiple Lists - containerPort: 80 protocol: "TCP" - containerPort: 81 protocol: "TCP" ================================================ FILE: 14-yaml-declarative-k8s-pods/README.md ================================================ --- title: Kubernetes Pods with YAML description: Learn to write and test Kubernetes Pods with YAML --- ## Step-01: Kubernetes YAML Top level Objects - Discuss about the k8s YAML top level objects - **kube-base-definition.yml** ```yml apiVersion: kind: metadata: spec: ``` - [Kubernetes Reference](https://kubernetes.io/docs/reference/) - [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/) - [Pod API Objects Reference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core) ## Step-02: Create Simple Pod Definition using YAML - We are going to create a very basic pod definition - **01-pod-definition.yml** ```yaml apiVersion: v1 # String kind: Pod # String metadata: # Dictionary name: myapp-pod labels: # Dictionary app: myapp spec: containers: # List - name: myapp image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 ``` - **Create Pod** ```t # Change Directory cd kube-manifests # Create Pod kubectl create -f 01-pod-definition.yml [or] kubectl apply -f 01-pod-definition.yml # List Pods kubectl get pods ``` ## Step-03: Create a LoadBalancer Service - **02-pod-LoadBalancer-service.yml** ```yaml apiVersion: v1 kind: Service metadata: name: myapp-pod-loadbalancer-service # Name of the Service spec: type: LoadBalancer selector: # Loadbalance traffic 
across Pods matching this label selector app: myapp # Accept traffic sent to port 80 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ``` - **Create LoadBalancer Service for Pod** ```t # Create Service kubectl apply -f 02-pod-LoadBalancer-service.yml # List Service kubectl get svc # Access Application http://<External-IP> curl http://<External-IP> ``` ## Step-04: Clean-Up Kubernetes Pod and Service ```t # Change Directory cd kube-manifests # Delete Pod kubectl delete -f 01-pod-definition.yml # Delete Service kubectl delete -f 02-pod-LoadBalancer-service.yml ``` ## API Object References - [Kubernetes API Spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/) - [Pod Spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core) - [Service Spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#service-v1-core) - [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/) ================================================ FILE: 14-yaml-declarative-k8s-pods/kube-base-definition.yml ================================================ apiVersion: kind: metadata: spec: # Types of Kubernetes Objects # Pod, ReplicaSet, Deployment, Service and many more # apiVersion: version of the k8s object # kind: type of k8s object # metadata: defines name and labels for the k8s object # spec: specification or real definition of the k8s object ================================================ FILE: 14-yaml-declarative-k8s-pods/kube-manifests/01-pod-definition.yml ================================================ apiVersion: v1 # String kind: Pod # String metadata: # Dictionary name: myapp-pod labels: # Dictionary app: myapp # Key Value Pairs spec: containers: # List - name: myapp image: stacksimplify/kubenginx:1.0.0 ports: # List - containerPort: 80 ================================================ FILE: 14-yaml-declarative-k8s-pods/kube-manifests/02-pod-LoadBalancer-service.yml 
================================================ apiVersion: v1 kind: Service metadata: name: myapp-pod-loadbalancer-service spec: type: LoadBalancer # Loadbalance traffic across Pods matching this label selector selector: app: myapp ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 15-yaml-declarative-k8s-replicasets/README.md ================================================ --- title: Kubernetes ReplicaSets with YAML description: Learn to write and test Kubernetes ReplicaSets with YAML --- ## Step-01: Create ReplicaSet Definition - **01-replicaset-definition.yml** ```yaml apiVersion: apps/v1 kind: ReplicaSet metadata: name: myapp2-rs spec: replicas: 3 # 3 Pods should exist at all times. selector: # Pod labels should be defined in the ReplicaSet label selector matchLabels: app: myapp2 template: metadata: name: myapp2-pod labels: app: myapp2 # At least 1 Pod label should match the ReplicaSet Label Selector spec: containers: - name: myapp2 image: stacksimplify/kubenginx:2.0.0 ports: - containerPort: 80 ``` ## Step-02: Create ReplicaSet - Create ReplicaSet with 3 Replicas ```t # Create ReplicaSet kubectl apply -f 01-replicaset-definition.yml # List Replicasets kubectl get rs ``` - Delete a pod; the ReplicaSet immediately creates a replacement pod. 
```t # List Pods kubectl get pods # Delete Pod kubectl delete pod <pod-name> ``` ## Step-03: Create LoadBalancer Service for ReplicaSet ```yaml apiVersion: v1 kind: Service metadata: name: replicaset-loadbalancer-service spec: type: LoadBalancer selector: app: myapp2 ports: - name: http port: 80 targetPort: 80 ``` - **Create LoadBalancer Service for ReplicaSet & Test** ```t # Create LoadBalancer Service kubectl apply -f 02-replicaset-LoadBalancer-servie.yml # List LoadBalancer Service kubectl get svc # Access Application http://<Load-Balancer-IP> ``` ## Step-04: Clean-Up Kubernetes ReplicaSet and Service ```t # Change Directory cd kube-manifests # Delete ReplicaSet kubectl delete -f 01-replicaset-definition.yml # Delete Service kubectl delete -f 02-replicaset-LoadBalancer-servie.yml ``` ## API References - [ReplicaSet](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#replicaset-v1-apps) ================================================ FILE: 15-yaml-declarative-k8s-replicasets/kube-base-definition.yml ================================================ apiVersion: kind: metadata: spec: ================================================ FILE: 15-yaml-declarative-k8s-replicasets/kube-manifests/01-replicaset-definition.yml ================================================ apiVersion: apps/v1 kind: ReplicaSet metadata: # Dictionary name: myapp2-rs spec: # Dictionary replicas: 3 selector: matchLabels: app: myapp2 template: metadata: # Dictionary name: myapp2-pod labels: app: myapp2 # Key Value Pairs spec: containers: # List - name: myapp2-container image: stacksimplify/kubenginx:2.0.0 ports: - containerPort: 80 ================================================ FILE: 15-yaml-declarative-k8s-replicasets/kube-manifests/02-replicaset-LoadBalancer-servie.yml ================================================ apiVersion: v1 kind: Service metadata: name: replicaset-loadbalancer-service spec: type: LoadBalancer # Loadbalance traffic across Pods matching this label selector selector: app: myapp2 ports: - 
name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 16-yaml-declarative-k8s-deployments/README.md ================================================ --- title: Kubernetes Deployments with YAML description: Learn to write and test Kubernetes Deployments with YAML --- ## Step-01: Copy templates from ReplicaSet - Copy the templates from ReplicaSet and change to `kind: Deployment` - Update the Container Image version to `3.0.0` - Change all names to Deployment - Change all labels and selectors to `myapp3` ```t # Change Directory cd kube-manifests # Create Deployment kubectl apply -f 01-deployment-definition.yml kubectl get deploy kubectl get rs kubectl get po # Create LoadBalancer Service kubectl apply -f 02-deployment-LoadBalancer-servie.yml # List Service and get the Load Balancer External IP kubectl get svc # Access Application http://<Load-Balancer-IP> ``` ## Step-02: Clean-Up Kubernetes Deployment and Service ```t # Change Directory cd kube-manifests # Delete Deployment kubectl delete -f 01-deployment-definition.yml # Delete LoadBalancer Service kubectl delete -f 02-deployment-LoadBalancer-servie.yml ``` ## API References - [Deployment](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#deployment-v1-apps) ================================================ FILE: 16-yaml-declarative-k8s-deployments/kube-base-definition.yml ================================================ apiVersion: kind: metadata: spec: ================================================ FILE: 16-yaml-declarative-k8s-deployments/kube-manifests/01-deployment-definition.yml ================================================ apiVersion: apps/v1 kind: Deployment metadata: # Dictionary name: myapp3-deployment spec: # Dictionary replicas: 3 selector: matchLabels: app: myapp3 template: metadata: # Dictionary name: myapp3-pod labels: app: myapp3 # Key Value Pairs spec: containers: # List - name: myapp3-container image: 
stacksimplify/kubenginx:3.0.0 ports: - containerPort: 80 ================================================ FILE: 16-yaml-declarative-k8s-deployments/kube-manifests/02-deployment-LoadBalancer-servie.yml ================================================ apiVersion: v1 kind: Service metadata: name: deployment-loadbalancer-service spec: type: LoadBalancer # ClusterIP, # NodePort selector: app: myapp3 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 17-yaml-declarative-k8s-services/README.md ================================================ --- title: Kubernetes Services with YAML description: Learn to write and test Kubernetes Services with YAML --- ## Step-01: Introduction to Services - We are going to look into the below two services in detail with a frontend and backend example - LoadBalancer Service - ClusterIP Service ## Step-02: Create Backend Deployment & ClusterIP Service - Write the Deployment template for the backend REST application. - Write the ClusterIP service template for the backend REST application. - **Important Notes:** - The name of the ClusterIP service should be `name: my-backend-service` because the same name is configured in the frontend nginx reverse proxy `default.conf`. 
- Test with a different name and understand the issue we face - We also discussed this in [Section-12](https://github.com/stacksimplify/google-kubernetes-engine/tree/main/12-kubectl-imperative-k8s-services) ```t # Change Directory cd kube-manifests # Deploy Backend Kubernetes Deployment and ClusterIP Service kubectl get all kubectl apply -f 01-backend-deployment.yml -f 02-backend-clusterip-service.yml kubectl get all ``` ## Step-03: Create Frontend Deployment & LoadBalancer Service - Write the Deployment template for the frontend Nginx Application - Write the LoadBalancer service template for the frontend Nginx Application ```t # Change Directory cd kube-manifests # Deploy Frontend Kubernetes Deployment and LoadBalancer Service kubectl get all kubectl apply -f 03-frontend-deployment.yml -f 04-frontend-LoadBalancer-service.yml kubectl get all ``` - **Access REST Application** ```t # Get Service IP kubectl get svc # Access REST Application http://<Load-Balancer-IP>/hello curl http://<Load-Balancer-IP>/hello ``` ## Step-04: Delete & Recreate Objects using kubectl apply ### Delete Objects (file by file) ```t # Change Directory cd kube-manifests/ # Delete Objects File by file kubectl delete -f 01-backend-deployment.yml -f 02-backend-clusterip-service.yml -f 03-frontend-deployment.yml -f 04-frontend-LoadBalancer-service.yml kubectl get all ``` ### Recreate Objects using YAML files in a folder ```t # Change Directory cd 17-yaml-declarative-k8s-services/ # Recreate Objects by referencing a folder kubectl apply -f kube-manifests/ kubectl get all ``` ### Delete Objects using YAML files in folder ```t # Change Directory cd 17-yaml-declarative-k8s-services/ # Delete Objects by just referencing a folder kubectl delete -f kube-manifests/ kubectl get all ``` ## Additional References - Use Label Selectors for get and delete - [Labels](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively) - 
[Labels-Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors) ================================================ FILE: 17-yaml-declarative-k8s-services/kube-base-definition.yml ================================================ apiVersion: kind: metadata: spec: ================================================ FILE: 17-yaml-declarative-k8s-services/kube-manifests/01-backend-deployment.yml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: backend-restapp labels: app: backend-restapp tier: backend spec: replicas: 3 selector: matchLabels: app: backend-restapp template: metadata: labels: app: backend-restapp tier: backend spec: containers: - name: backend-restapp image: stacksimplify/kube-helloworld:1.0.0 ports: - containerPort: 8080 ================================================ FILE: 17-yaml-declarative-k8s-services/kube-manifests/02-backend-clusterip-service.yml ================================================ apiVersion: v1 kind: Service metadata: name: my-backend-service ## VERY VERY IMPORTANT - NGINX PROXYPASS needs this name labels: app: backend-restapp tier: backend spec: #type: ClusterIP is a default service in k8s selector: app: backend-restapp ports: - name: http port: 8080 # ClusterIP Service Port targetPort: 8080 # Container Port ================================================ FILE: 17-yaml-declarative-k8s-services/kube-manifests/03-frontend-deployment.yml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: frontend-nginxapp labels: app: frontend-nginxapp tier: frontend spec: replicas: 3 selector: matchLabels: app: frontend-nginxapp template: metadata: labels: app: frontend-nginxapp tier: frontend spec: containers: - name: frontend-nginxapp image: stacksimplify/kube-frontend-nginx:1.0.0 ports: - containerPort: 80 ================================================ FILE: 
17-yaml-declarative-k8s-services/kube-manifests/04-frontend-LoadBalancer-service.yml ================================================ apiVersion: v1 kind: Service metadata: name: frontend-nginxapp-loadbalancer-service labels: app: frontend-nginxapp tier: frontend spec: type: LoadBalancer # ClusterIP, # NodePort selector: app: frontend-nginxapp ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 18-GKE-NodePort-Service/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE NodePort Service description: Implement GCP Google Kubernetes Engine GKE NodePort Service --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123 # List GKE Kubernetes Worker Nodes kubectl get nodes # List GKE Kubernetes Worker Nodes with -o wide option kubectl get nodes -o wide Observation: 1. You should see an External-IP Address (Public IP accessible via the internet) 2. 
That is the key thing for testing the Kubernetes NodePort Service on the GKE Cluster ``` ## Step-01: Introduction - Implement Kubernetes NodePort Service ## Step-02: 01-kubernetes-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 ``` ## Step-03: 02-kubernetes-nodeport-service.yaml - If you don't specify `nodePort: 30080`, one port from the range `30000-32767` is assigned dynamically ```yaml apiVersion: v1 kind: Service metadata: name: myapp1-nodeport-service spec: type: NodePort # ClusterIP, # NodePort, # LoadBalancer, # ExternalName selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port nodePort: 30080 # NodePort (Optional)(Node Port Range: 30000-32767) ``` ## Step-04: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get po # List Services kubectl get svc ``` ## Step-05: Access Application ```t # List Kubernetes Worker Nodes with -o wide kubectl get nodes -o wide Observation: 1. Make a note of any one Node External-IP (Public IP Address) # Access Application http://<Node-External-IP>:<NodePort> http://104.154.52.12:30080 Observation: 1. This should fail ``` ## Step-06: Create Firewall Rule ```t # Create Firewall Rule gcloud compute firewall-rules create fw-rule-gke-node-port \ --allow tcp:NODE_PORT # Replace NODE_PORT gcloud compute firewall-rules create fw-rule-gke-node-port \ --allow tcp:30080 # List Firewall Rules gcloud compute firewall-rules list ``` ## Step-07: Access Application ```t # List Kubernetes Worker Nodes with -o wide kubectl get nodes -o wide Observation: 1. 
Make a note of any one Node External-IP (Public IP Address) # Access Application http://<Node-External-IP>:<NodePort> http://104.154.52.12:30080 Observation: 1. This should pass ``` ## Step-08: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests # Delete NodePort Service Firewall Rule gcloud compute firewall-rules delete fw-rule-gke-node-port # List Firewall Rules gcloud compute firewall-rules list ``` ================================================ FILE: 18-GKE-NodePort-Service/kube-manifests/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 ================================================ FILE: 18-GKE-NodePort-Service/kube-manifests/02-kubernetes-nodeport-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-nodeport-service spec: type: NodePort # ClusterIP, # NodePort, # LoadBalancer, # ExternalName selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port nodePort: 30080 # NodePort (Optional)(Node Port Range: 30000-32767) ================================================ FILE: 19-GKE-Headless-Service/01-kube-manifests/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 4 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container #image: stacksimplify/kubenginx:1.0.0 image: 
us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0 ports: - containerPort: 8080 ================================================ FILE: 19-GKE-Headless-Service/01-kube-manifests/02-kubernetes-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-cip-service spec: type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 19-GKE-Headless-Service/01-kube-manifests/03-kubernetes-headless-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-headless-service spec: #type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName clusterIP: None selector: app: myapp1 ports: - name: http port: 8080 # Service Port targetPort: 8080 # Container Port ## VERY IMPORTANT NOTE # 1. When using a Headless Service, the Service Port and the Target Port should be the same. # 2. A Headless Service sends traffic directly to the Pods using the Pod IP and Container Port. # 3. DNS resolution happens directly from the headless service name to the Pod IPs. ================================================ FILE: 19-GKE-Headless-Service/02-kube-manifests-curl/01-curl-pod.yml ================================================ apiVersion: v1 kind: Pod metadata: name: curl-pod spec: containers: - name: curl image: curlimages/curl command: [ "sleep", "600" ] ================================================ FILE: 19-GKE-Headless-Service/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Headless Service description: Implement GCP Google Kubernetes Engine GKE Headless Service --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. 
Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123 # List GKE Kubernetes Worker Nodes kubectl get nodes ``` ## Step-01: Introduction - Implement Kubernetes ClusterIP and Headless Service - Understand Headless Service in detail ## Step-02: 01-kubernetes-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 4 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container #image: stacksimplify/kubenginx:1.0.0 image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0 ports: - containerPort: 8080 ``` ## Step-03: 02-kubernetes-clusterip-service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: myapp1-cip-service spec: type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 8080 # Container Port ``` ## Step-04: 03-kubernetes-headless-service.yaml - Add `spec.clusterIP: None` ### VERY IMPORTANT NOTE 1. When using a Headless Service, the Service Port and the Target Port should be the same. 2. A Headless Service sends traffic directly to the Pods using the Pod IP and Container Port. 3. DNS resolution happens directly from the headless service name to the Pod IPs. 
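The note above can be summarized with the following sketch (the IPs are the sample values from the outputs shown in this demo; they will differ in your cluster):

```t
# ClusterIP Service: DNS resolves the name to ONE virtual Service IP,
# and kube-proxy load-balances connections across the matching Pods
#   myapp1-cip-service.default.svc.cluster.local      -> 10.24.2.34 (Service IP)

# Headless Service (clusterIP: None): DNS returns EVERY Pod IP,
# so the client connects straight to a Pod on the container port (8080)
#   myapp1-headless-service.default.svc.cluster.local -> 10.20.0.25, 10.20.0.26,
#                                                        10.20.1.28, 10.20.1.29
```

This is why the Service Port must equal the Container Port for a headless Service: there is no Service IP in the middle to translate ports.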
```yaml apiVersion: v1 kind: Service metadata: name: myapp1-headless-service spec: #type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName clusterIP: None selector: app: myapp1 ports: - name: http port: 8080 # Service Port targetPort: 8080 # Container Port ``` ## Step-05: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f 01-kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get pods kubectl get pods -o wide Observation: make a note of the Pod IPs # List Services kubectl get svc Observation: 1. "CLUSTER-IP" will be "NONE" for Headless Service ## Sample Kalyans-Mac-mini:19-GKE-Headless-Service kalyanreddy$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.24.0.1 443/TCP 135m myapp1-cip-service ClusterIP 10.24.2.34 80/TCP 4m9s myapp1-headless-service ClusterIP None 80/TCP 4m9s Kalyans-Mac-mini:19-GKE-Headless-Service kalyanreddy$ ``` ## Step-06: Review Curl Kubernetes Manifests - **Project Folder:** 02-kube-manifests-curl ```yaml apiVersion: v1 kind: Pod metadata: name: curl-pod spec: containers: - name: curl image: curlimages/curl command: [ "sleep", "600" ] ``` ## Step-07: Deploy curl-pod and Verify ClusterIP and Headless Services ```t # Deploy curl-pod kubectl apply -f 02-kube-manifests-curl # List Services kubectl get svc # GKE Cluster Kubernetes Service Full DNS Name format <service-name>.<namespace>.svc.cluster.local # Will open up a terminal session into the container kubectl exec -it curl-pod -- sh # ClusterIP Service: nslookup and curl Test nslookup myapp1-cip-service.default.svc.cluster.local curl myapp1-cip-service.default.svc.cluster.local ### ClusterIP Service nslookup Output $ nslookup myapp1-cip-service.default.svc.cluster.local Server: 10.24.0.10 Address: 10.24.0.10:53 Name: myapp1-cip-service.default.svc.cluster.local Address: 10.24.2.34 # Headless Service: nslookup and curl Test nslookup myapp1-headless-service.default.svc.cluster.local curl 
myapp1-headless-service.default.svc.cluster.local:8080 Observation: 1. There is no specific IP for a Headless Service 2. It is directly DNS-resolved to the Pod IPs 3. Therefore, we should use the same port as the Container Port for a Headless Service (VERY VERY IMPORTANT) ### Headless Service nslookup Output $ nslookup myapp1-headless-service.default.svc.cluster.local Server: 10.24.0.10 Address: 10.24.0.10:53 Name: myapp1-headless-service.default.svc.cluster.local Address: 10.20.0.25 Name: myapp1-headless-service.default.svc.cluster.local Address: 10.20.0.26 Name: myapp1-headless-service.default.svc.cluster.local Address: 10.20.1.28 Name: myapp1-headless-service.default.svc.cluster.local Address: 10.20.1.29 ``` ## Step-08: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f 01-kube-manifests # Delete Kubernetes Resources - Curl Pod kubectl delete -f 02-kube-manifests-curl ``` ================================================ FILE: 20-GKE-Private-Cluster/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Private Cluster description: Implement GCP Google Kubernetes Engine GKE Private Cluster --- ## Step-01: Introduction - Create GKE Private Cluster - Create Cloud NAT - Deploy Sample App and Test - Perform Authorized Network Tests ## Step-02: Create Standard GKE Cluster - Go to Kubernetes Engine -> Clusters -> CREATE - Select **GKE Standard -> CONFIGURE** - **Cluster Basics** - **Name:** standard-cluster-private-1 - **Location type:** Regional - **Zone:** us-central1-a, us-central1-b, us-central1-c - **Release Channel** - **Release Channel:** Rapid Channel - **Version:** LATEST AVAILABLE ON THAT DAY - REST ALL LEAVE TO DEFAULTS - **NODE POOLS: default-pool** - **Node pool details** - **Name:** default-pool - **Number of Nodes (per Zone):** 1 - **Nodes: Configure node settings** - **Image type:** Container-Optimized OS - **Machine configuration** - **GENERAL PURPOSE SERIES:** e2 - **Machine Type:** e2-small - 
**Boot disk type:** standard persistent disk - **Boot disk size(GB):** 20 - **Enable Nodes on Spot VMs:** CHECKED - **Node Networking:** REVIEW AND LEAVE TO DEFAULTS - **Node Security:** - **Access scopes:** Allow full access to all Cloud APIs - REST ALL REVIEW AND LEAVE TO DEFAULTS - **Node Metadata:** REVIEW AND LEAVE TO DEFAULTS - **CLUSTER** - **Automation:** REVIEW AND LEAVE TO DEFAULTS - **Networking:** - **Network Access:** Private Cluster - **Access control plane using its external IP address:** BY DEFAULT CHECKED - **Important Note:** Disabling this option locks down external access to the cluster control plane. There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone. This setting is permanent - **Enable Control Plane Global Access:** CHECKED - **Control Plane IP Range:** 172.16.0.0/28 - **CHECK THIS BOX: Enable Dataplane V2** CHECK IT - IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED - **Security:** REVIEW AND LEAVE TO DEFAULTS - **CHECK THIS BOX: Enable Workload Identity** IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED - **Metadata:** REVIEW AND LEAVE TO DEFAULTS - **Features:** REVIEW AND LEAVE TO DEFAULTS - **Enable Compute Engine Persistent Disk CSI Driver:** SHOULD BE CHECKED BY DEFAULT - VERIFY - **Enable File Store CSI Driver:** CHECKED - CLICK ON **CREATE** ## Step-03: Review kube-manifests: 01-kubernetes-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 imagePullPolicy: Always ``` ## Step-04: Review kube-manifest: 02-kubernetes-loadbalancer-service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: 
LoadBalancer # ClusterIP, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ``` ## Step-05: Deploy Kubernetes Manifests ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # Change Directory cd 20-GKE-Private-Cluster # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # Verify Pods kubectl get pods Observation: SHOULD FAIL - UNABLE TO DOWNLOAD DOCKER IMAGE FROM DOCKER HUB # Describe Pod kubectl describe pod <pod-name> # Clean-Up kubectl delete -f kube-manifests/ ``` ## Step-06: Create Cloud NAT - Go to Network Services -> CREATE CLOUD NAT GATEWAY - **Gateway Name:** gke-us-central1-default-cloudnat-gw - **Select Cloud Router:** - **Network:** default - **Region:** us-central1 - **Cloud Router:** CREATE NEW ROUTER - **Name:** gke-us-central1-cloud-router - **Description:** GKE Cloud Router Region us-central1 - **Network:** default (POPULATED by default) - **Region:** us-central1 (POPULATED by default) - **BGP Peer keepalive interval:** 20 seconds (LEAVE TO DEFAULT) - Click on **CREATE** - **Cloud NAT Mapping:** LEAVE TO DEFAULTS - **Destination (external):** LEAVE TO DEFAULTS - **Stackdriver logging:** LEAVE TO DEFAULTS - **Port allocation:** - CHECK **Enable Dynamic Port Allocation** - **Timeouts for protocol connections:** LEAVE TO DEFAULTS - CLICK on **CREATE** ## Step-07: Deploy Kubernetes Manifests ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # Verify Pods kubectl get pods Observation: SHOULD BE ABLE TO DOWNLOAD THE DOCKER IMAGE # List Services kubectl get svc # Access Application http://<Load-Balancer-IP> # Clean-Up kubectl delete -f 
kube-manifests ``` ## Step-08: Authorized Network Test1: My Network - Goto -> standard-cluster-private-1 -> DETAILS -> NETWORKING - Control plane authorized networks -> EDIT - **Enable control plane authorized networks:** CHECKED - CLICK ON **ADD AUTHORIZED NETWORK** - **NAME:** MY-NETWORK-1 - **NETWORK:** 10.10.10.0/24 - Click on **DONE** - Click on **SAVE CHANGES** ```t # List Kubernetes Nodes kubectl get nodes Observation: 1. Access to the GKE API server from our local desktop kubectl CLI is lost 2. Access to the GKE API server is now allowed only from the "10.10.10.0/24" network 3. In short, even though our GKE API server has an internet-enabled endpoint, its access is restricted to a specific network of IPs ## Sample Output Kalyan-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes Unable to connect to the server: dial tcp 34.70.169.161:443: i/o timeout Kalyan-Mac-mini:google-kubernetes-engine kalyan$ ``` ## Step-09: Authorized Network Test2: My Desktop - Go to link [whatismyip](https://www.whatismyip.com/) and get your desktop public IP - Goto -> standard-cluster-private-1 -> DETAILS -> NETWORKING - Control plane authorized networks -> EDIT - **Enable control plane authorized networks:** CHECKED - CLICK ON **ADD AUTHORIZED NETWORK** - **NAME:** MY-DESKTOP-1 - **NETWORK:** <YOUR-DESKTOP-PUBLIC-IP>/32 - Click on **DONE** - Click on **SAVE CHANGES** ```t # List Kubernetes Nodes kubectl get nodes Observation: 1. 
Access to GKE API Service from our local desktop kubectl cli should be success ## Sample Output Kalyans-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes NAME STATUS ROLES AGE VERSION gke-standard-cluster-pri-default-pool-90b1f67b-4z71 Ready 55m v1.24.3-gke.900 gke-standard-cluster-pri-default-pool-90b1f67b-6xn6 Ready 55m v1.24.3-gke.900 gke-standard-cluster-pri-default-pool-90b1f67b-dggg Ready 55m v1.24.3-gke.900 Kalyans-Mac-mini:google-kubernetes-engine kalyan$ ``` ## Step-10: Authorized Network Test2: Delete both network rules (Roll back to old state) - Goto -> standard-cluster-private-1 -> DETAILS -> NETWORKING - Control plane authorized networks -> EDIT - **Enable control plane authorized networks:** UN-CHECKED - AUTHORIZED NETWORKS -> DELETE -> MY-NETWORK-1, MY-DESKTOP-1 - Click on **SAVE CHANGES** ```t # List Kubernetes Nodes kubectl get nodes Observation: 1. Access to GKE API Service from our local desktop kubectl cli should be success ## Sample Output Kalyans-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes NAME STATUS ROLES AGE VERSION gke-standard-cluster-pri-default-pool-90b1f67b-4z71 Ready 55m v1.24.3-gke.900 gke-standard-cluster-pri-default-pool-90b1f67b-6xn6 Ready 55m v1.24.3-gke.900 gke-standard-cluster-pri-default-pool-90b1f67b-dggg Ready 55m v1.24.3-gke.900 Kalyans-Mac-mini:google-kubernetes-engine kalyan$ ``` ## Additional Reference - [GKE Private Cluster with Terraform](https://github.com/GoogleCloudPlatform/gke-private-cluster-demo) ================================================ FILE: 20-GKE-Private-Cluster/kube-manifests/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: 
stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 imagePullPolicy: Always ================================================ FILE: 20-GKE-Private-Cluster/kube-manifests/02-kubernetes-loadbalancer-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 21-GKE-PD-existing-SC-standard-rwo/README.md ================================================ --- title: GKE Persistent Disks Existing StorageClass standard-rwo description: Use existing storageclass standard-rwo in Kubernetes Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. Feature: Compute Engine persistent disk CSI Driver - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. ## Step-01: Introduction - Understand Kubernetes Objects 01. Kubernetes PersistentVolumeClaim 02. Kubernetes ConfigMap 03. Kubernetes Deployment 04. Kubernetes Volumes 05. Kubernetes Volume Mounts 06. Kubernetes Environment Variables 07. Kubernetes ClusterIP Service 08. Kubernetes Init Containers 09. Kubernetes Service of Type LoadBalancer 10. 
Kubernetes StorageClass - Use predefined Storage Class `standard-rwo` - `standard-rwo` uses balanced persistent disk ## Step-02: List Kubernetes Storage Classes in GKE Cluster ```t # List Storage Classes kubectl get sc ``` ## Step-03: 01-persistent-volume-claim.yaml ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: standard-rwo resources: requests: storage: 4Gi # NEED FOR PVC # 1. Dynamic volume provisioning allows storage volumes to be created # on-demand. # 2. Without dynamic provisioning, cluster administrators have to manually # make calls to their cloud or storage provider to create new storage # volumes, and then create PersistentVolume objects to represent them in k8s # 3. The dynamic provisioning feature eliminates the need for cluster # administrators to pre-provision storage. Instead, it automatically # provisions storage when it is requested by users. # 4. PVC: Users request dynamically provisioned storage by including # a storage class in their PersistentVolumeClaim ``` ## Step-04: 02-UserManagement-ConfigMap.yaml ```yaml apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; ``` ## Step-05: 03-mysql-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: replicas: 1 selector: matchLabels: app: mysql strategy: type: Recreate # terminates all the pods and replaces them with the new version. 
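  # NOTE: "Recreate" fits here because this MySQL Pod mounts a ReadWriteOnce
  # persistent disk, so old and new Pods cannot both mount the same volume
  # during an update. For a stateless app, a RollingUpdate strategy
  # (hypothetical alternative, not used in this demo) would look like:
  # strategy:
  #   type: RollingUpdate
  #   rollingUpdate:
  #     maxSurge: 1
  #     maxUnavailable: 0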
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - Refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script

# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES:
## 1. On-disk files in a container are ephemeral.
## 2. One problem is the loss of files when a container crashes.
## 3. Kubernetes Volumes solve the above two problems, as volumes are configured on the Pod and not the container.
##    Only then can they be mounted in a Container.
## 4. Using the Compute Engine Persistent Disk CSI Driver is a very generalized approach
##    for having Persistent Volumes for workloads in Kubernetes.

## ENVIRONMENT VARIABLES
# 1. When you create a Pod, you can set environment variables for the
#    containers that run in the Pod.
# 2. To set environment variables, include the env or envFrom field in
#    the configuration file.

## DEPLOYMENT STRATEGIES
# 1. Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.
# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.
# 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas.
# 4. Best-effort controlled rollout: This strategy specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.
# 5. Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful.
```

## Step-06: 04-mysql-clusterip-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None # This means we are going to use Pod IP
```

## Step-07: 05-UserMgmtWebApp-Deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-webapp
  labels:
    app: usermgmt-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-webapp
  template:
    metadata:
      labels:
        app: usermgmt-webapp
    spec:
      initContainers:
        - name: init-db
          image: busybox:1.31
          command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e "  >> MySQL DB Server has started";']
      containers:
        - name: usermgmt-webapp
          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOSTNAME
              value: "mysql"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "webappdb"
            - name: DB_USERNAME
              value: "root"
            - name: DB_PASSWORD
              value: "dbpassword11"
```

## Step-08: 06-UserMgmtWebApp-LoadBalancer-Service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: usermgmt-webapp-lb-service
  labels:
    app: usermgmt-webapp
spec:
  type: LoadBalancer
  selector:
    app: usermgmt-webapp
  ports:
    - port: 80 # Service Port
      targetPort: 8080 # Container Port
```

## Step-09: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List Storage Classes
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <pod-name>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5

# Sample Message for Successful Start of JVM
2022-06-20 09:34:32.519 INFO 1 --- [ost-startStop-1] .r.SpringbootSecurityInternalApplication : Started SpringbootSecurityInternalApplication in 14.891 seconds (JVM running for 23.283)
20-Jun-2022 09:34:32.593 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive /usr/local/tomcat/webapps/ROOT.war has finished in 21,016 ms
20-Jun-2022 09:34:32.623 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-apr-8080"]
20-Jun-2022 09:34:32.688 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-apr-8009"]
20-Jun-2022 09:34:32.713 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 21275 ms
```

## Step-10: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for `4GB` Persistent Disk

## Step-11: Verify Kubernetes Workloads, Services, ConfigMaps on Kubernetes Engine Dashboard
```t
# Verify Workloads
Go to Kubernetes Engine -> Workloads
Observation:
1. You should see "mysql" and "usermgmt-webapp" deployments

# Verify Services
Go to Kubernetes Engine -> Services & Ingress
Observation:
1. You should see the "mysql" ClusterIP Service and "usermgmt-webapp-lb-service"

# Verify ConfigMaps
Go to Kubernetes Engine -> Secrets & ConfigMaps
Observation:
1. You should find the ConfigMap "usermanagement-dbcreation-script"

# Verify Persistent Volume Claim
Go to Kubernetes Engine -> Storage -> PERSISTENT VOLUME CLAIMS TAB
Observation:
1. You should see PVC "mysql-pv-claim"

# Verify StorageClass
Go to Kubernetes Engine -> Storage -> STORAGE CLASSES TAB
Observation:
1. You should see 3 Storage Classes, out of which "standard-rwo" and "premium-rwo" are part of Compute Engine Persistent Disks (latest and greatest - Recommended for use)
2. It is not recommended to use the Storage Class named "standard" (Older version)
```

## Step-12: Connect to MySQL Database
```t
# Template: Connect to MySQL Database using kubectl
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <mysql-clusterip-service> -u <username> -p<password>

# MySQL Client 8.0: Replace ClusterIP Service, Username and Password
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u root -pdbpassword11
mysql> show schemas;
mysql> use webappdb;
mysql> show tables;
mysql> select * from user;
mysql> exit
```

## Step-13: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<External-IP-from-get-service-output>
Username: admin101
Password: password101

# Create New User
Username: admin102
Password: password102
First Name: fname102
Last Name: lname102
Email Address: admin102@stacksimplify.com
Social Security Address: ssn102

# Verify this user in MySQL DB
# Template: Connect to MySQL Database using kubectl
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <mysql-clusterip-service> -u <username> -p<password>

# MySQL Client 8.0: Replace ClusterIP Service, Username and Password
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u root -pdbpassword11
mysql> show schemas;
mysql> use webappdb;
mysql> show tables;
mysql> select * from user;
Observation:
1. You should find the user newly created in the browser successfully stored in the MySQL DB.
2. In simple terms, we have done the following:
   a. Created a MySQL k8s Deployment in the GKE Cluster
   b. Created a Java WebApplication k8s Deployment in the GKE Cluster
   c. Accessed the Application in a browser using the GKE Load Balancer IP
   d. Created a new user in this application, and that user was successfully stored in the MySQL DB
   e. We have seen the end-to-end flow from Browser to DB using the GKE Cluster
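
# Optional: the same verification as a single non-interactive query
# (a sketch; assumes the "mysql" ClusterIP Service name and the root password used above)
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u root -pdbpassword11 -e "SELECT * FROM webappdb.user;"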
```

## Step-14: Verify GCE PD CSI Driver Logging
- https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver
```t
# Cloud Logging Query
resource.type="k8s_container"
resource.labels.project_id="PROJECT_ID"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.namespace_name="kube-system"
resource.labels.container_name="gce-pd-driver"

# Cloud Logging Query (Replace Values)
resource.type="k8s_container"
resource.labels.project_id="kdaida123"
resource.labels.cluster_name="standard-cluster-private-1"
resource.labels.namespace_name="kube-system"
resource.labels.container_name="gce-pd-driver"
```

## Step-15: Clean-Up
```t
# Delete kube-manifests
kubectl delete -f kube-manifests/
```

## Reference
- [Using the Compute Engine persistent disk CSI Driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver)

## Additional-Data-01
1. It enables the automatic deployment and management of the persistent disk driver without having to manually set it up.
2. You can use customer-managed encryption keys (CMEKs). These keys are used to encrypt the data encryption keys that encrypt your data.
3. You can use volume snapshots with the Compute Engine persistent disk CSI Driver. Volume snapshots let you create a copy of your volume at a specific point in time. You can use this copy to bring a volume back to a prior state or to provision a new volume.
4. Bug fixes and feature updates are rolled out independently from minor Kubernetes releases. This release schedule typically results in a faster release cadence.

## Additional-Data-02
- For Standard Clusters: The Compute Engine persistent disk CSI Driver is enabled by default on newly created clusters
  - Linux clusters: GKE version 1.18.10-gke.2100 or later, or 1.19.3-gke.2100 or later.
  - Windows clusters: GKE version 1.22.6-gke.300 or later, or 1.23.2-gke.300 or later.
- For Autopilot clusters: The Compute Engine persistent disk CSI Driver is enabled by default and cannot be disabled or edited.

## Additional-Data-03
- GKE automatically installs the following StorageClasses:
  - standard-rwo: using balanced persistent disk
  - premium-rwo: using SSD persistent disk
- For Autopilot clusters: The default StorageClass is standard-rwo, which uses the Compute Engine persistent disk CSI Driver.
- For Standard clusters: The default StorageClass uses the Kubernetes in-tree gcePersistentDisk volume plugin.
```t
# You can find the name of your installed StorageClasses by running the following command:
kubectl get sc
or
kubectl get storageclass
```

================================================
FILE: 21-GKE-PD-existing-SC-standard-rwo/kube-manifests/01-persistent-volume-claim.yaml
================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 4Gi

# NEED FOR PVC
# 1. Dynamic volume provisioning allows storage volumes to be created
#    on-demand.
# 2. Without dynamic provisioning, cluster administrators have to manually
#    make calls to their cloud or storage provider to create new storage
#    volumes, and then create PersistentVolume objects to represent them in k8s
# 3. The dynamic provisioning feature eliminates the need for cluster
#    administrators to pre-provision storage. Instead, it automatically
#    provisions storage when it is requested by users.
# 4. PVC: Users request dynamically provisioned storage by including
#    a storage class in their PersistentVolumeClaim

================================================
FILE: 21-GKE-PD-existing-SC-standard-rwo/kube-manifests/02-UserManagement-ConfigMap.yaml
================================================
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS webappdb;
    CREATE DATABASE webappdb;

# CONFIG MAP
# 1. A ConfigMap is an API object used to store non-confidential data in
#    key-value pairs.
# 2. Pods can consume ConfigMaps as
## 2.1: environment variables,
## 2.2: command-line arguments,
## 2.3: or as configuration files in a volume.
## We are going to use this in our MySQL k8s Deployment
# 3. YAML Notation
## YAML Notation: |-: "strip": remove the line feed, remove the trailing blank lines.
## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines

================================================
FILE: 21-GKE-PD-existing-SC-standard-rwo/kube-manifests/03-mysql-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate # terminates all the pods and replaces them with the new version.
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - Refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script

# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES:
## 1. On-disk files in a container are ephemeral.
## 2. One problem is the loss of files when a container crashes.
## 3. Kubernetes Volumes solve the above two problems, as volumes are configured on the Pod and not the container.
##    Only then can they be mounted in a Container.
## 4. Using the Compute Engine Persistent Disk CSI Driver is a very generalized approach
##    for having Persistent Volumes for workloads in Kubernetes.

## ENVIRONMENT VARIABLES
# 1. When you create a Pod, you can set environment variables for the
#    containers that run in the Pod.
# 2. To set environment variables, include the env or envFrom field in
#    the configuration file.

## DEPLOYMENT STRATEGIES
# 1. Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.
# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.
# 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas.
# 4. Best-effort controlled rollout: This strategy specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.
# 5.
Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful. ================================================ FILE: 21-GKE-PD-existing-SC-standard-rwo/kube-manifests/04-mysql-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql spec: selector: app: mysql ports: - port: 3306 clusterIP: None # This means we are going to use Pod IP ================================================ FILE: 21-GKE-PD-existing-SC-standard-rwo/kube-manifests/05-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! 
nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD value: "dbpassword11" ================================================ FILE: 21-GKE-PD-existing-SC-standard-rwo/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 22-GKE-PD-existing-SC-premium-rwo/README.md ================================================ --- title: GKE Persistent Disks Existing StorageClass premium-rwo description: Use existing storageclass premium-rwo in Kubernetes Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. Feature: Compute Engine persistent disk CSI Driver - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. ## Step-01: Introduction - Understand Kubernetes Objects 01. Kubernetes PersistentVolumeClaim 02. Kubernetes ConfigMap 03. Kubernetes Deployment 04. Kubernetes Volumes 05. Kubernetes Volume Mounts 06. Kubernetes Environment Variables 07. 
Kubernetes ClusterIP Service 08. Kubernetes Init Containers 09. Kubernetes Service of Type LoadBalancer 10. Kubernetes StorageClass - Use the predefined Storage class `premium-rwo` - By default, dynamically provisioned PersistentVolumes use the default StorageClass and are backed by `standard hard disks`. - If you need faster SSDs, you can use the `premium-rwo` storage class from the Compute Engine persistent disk CSI Driver to provision your volumes. - This can be done by setting the storageClassName field to `premium-rwo` in your PersistentVolumeClaim - `premium-rwo Storage Class` will provision `SSD Persistent Disk` ## Step-02: List Kubernetes Storage Classes in GKE Cluster ```t # List Storage Classes kubectl get sc ``` ## Step-03: 01-persistent-volume-claim.yaml ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: premium-rwo resources: requests: storage: 4Gi ``` ## Step-04: Other Kubernetes YAML Manifests - No changes to other Kubernetes YAML Manifests - They are same as previous section 1. 01-persistent-volume-claim.yaml 2. 02-UserManagement-ConfigMap.yaml 3. 03-mysql-deployment.yaml 4. 04-mysql-clusterip-service.yaml 5. 05-UserMgmtWebApp-Deployment.yaml 6. 
06-UserMgmtWebApp-LoadBalancer-Service.yaml ## Step-05: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # List Storage Classes kubectl get sc # List PVC kubectl get pvc # List PV kubectl get pv # List ConfigMaps kubectl get configmap # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f <pod-name> kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 ``` ## Step-06: Verify Persistent Disks - Go to Compute Engine -> Storage -> Disks - Search for `4GB` Persistent Disk - **Observation:** You should see the disk type as **SSD persistent disk** ## Step-07: Access Application ```t # List Services kubectl get svc # Access Application http://<External-IP-from-get-service-output> Username: admin101 Password: password101 ``` ## Step-08: Clean-Up ```t # Delete kube-manifests kubectl delete -f kube-manifests/ ``` ## Reference - [Using the Compute Engine persistent disk CSI Driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver) ================================================ FILE: 22-GKE-PD-existing-SC-premium-rwo/kube-manifests/01-persistent-volume-claim.yaml ================================================ apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: premium-rwo resources: requests: storage: 4Gi # NEED FOR PVC # 1. Dynamic volume provisioning allows storage volumes to be created # on-demand. # 2. Without dynamic provisioning, cluster administrators have to manually # make calls to their cloud or storage provider to create new storage # volumes, and then create PersistentVolume objects to represent them in k8s # 3. The dynamic provisioning feature eliminates the need for cluster # administrators to pre-provision storage. Instead, it automatically # provisions storage when it is requested by users. # 4.
PVC: Users request dynamically provisioned storage by including # a storage class in their PersistentVolumeClaim ================================================ FILE: 22-GKE-PD-existing-SC-premium-rwo/kube-manifests/02-UserManagement-ConfigMap.yaml ================================================ apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; # CONFIG MAP # 1. A ConfigMap is an API object used to store non-confidential data in # key-value pairs. # 2. Pods can consume ConfigMaps as ## 2.1: environment variables, ## 2.2: command-line arguments, ## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment ================================================ FILE: 22-GKE-PD-existing-SC-premium-rwo/kube-manifests/03-mysql-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: replicas: 1 selector: matchLabels: app: mysql strategy: type: Recreate template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql:8.0 env: - name: MYSQL_ROOT_PASSWORD value: dbpassword11 ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script ================================================ FILE: 22-GKE-PD-existing-SC-premium-rwo/kube-manifests/04-mysql-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql spec: selector: app: mysql ports: - port: 3306 clusterIP: None # This means we are going to use
Pod IP ================================================ FILE: 22-GKE-PD-existing-SC-premium-rwo/kube-manifests/05-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD value: "dbpassword11" ================================================ FILE: 22-GKE-PD-existing-SC-premium-rwo/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 23-GKE-PD-Custom-StorageClass/README.md ================================================ --- title: GKE Persistent Disks Custom StorageClass description: Use Custom storageclass to provision Google Disks for Kubernetes Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. 
Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, ZONE, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Feature: Compute Engine persistent disk CSI Driver
- Verify the feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster.
- This is required for mounting Google Compute Engine Persistent Disks to Kubernetes Workloads in the GKE Cluster.

## Step-01: Introduction
- **Feature-1:** Create a custom Kubernetes StorageClass named `gke-pd-standard-rwo-sc` instead of using a predefined one in the GKE Cluster.
- **Feature-2:** Test `allowVolumeExpansion: true` in the Storage Class
- **Feature-3:** Use `reclaimPolicy: Retain` in the Storage Class and test it

## Step-02: List Kubernetes Storage Classes in GKE Cluster
```t
# List Storage Classes
kubectl get sc
```

## Step-03: 00-storage-class.yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gke-pd-standard-rwo-sc
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-balanced

# STORAGE CLASS
# 1. A StorageClass provides a way for administrators
#    to describe the "classes" of storage they offer.
# 2. Here we are offering GCP PD Storage for the GKE Cluster
```

## Step-04: 01-persistent-volume-claim.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gke-pd-standard-rwo-sc
  resources:
    requests:
      storage: 4Gi
```

## Step-05: Other Kubernetes YAML Manifests
- No changes to the other Kubernetes YAML Manifests
- They are the same as in the previous section
  - 02-UserManagement-ConfigMap.yaml
  - 03-mysql-deployment.yaml
  - 04-mysql-clusterip-service.yaml
  - 05-UserMgmtWebApp-Deployment.yaml
  - 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-06: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List Storage Classes
kubectl get sc
Observation:
1. You should find the new custom storage class object created with the name "gke-pd-standard-rwo-sc"

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <pod-name>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-07: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for `4GB` Persistent Disk
- **Observation:** You should see the disk type as **Balanced persistent disk**

## Step-08: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<External-IP-from-get-service-output>
Username: admin101
Password: password101

# Create New User (Used for testing `allowVolumeExpansion: true` Option)
Username: admin102
Password: password102
First Name: fname102
Last Name: lname102
Email Address: admin102@stacksimplify.com
Social Security Address: ssn102
```

## Step-09: Update 01-persistent-volume-claim.yaml from 4Gi to 8Gi
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gke-pd-standard-rwo-sc
  resources:
    requests:
      #storage: 4Gi # Comment at Step-09
      storage: 8Gi # Uncomment at Step-09
```

## Step-10: Deploy updated kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List PVC
kubectl get pvc
Observation:
1. Wait for 2 to 3 mins and the CAPACITY value automatically changes from 4Gi to 8Gi

# List PV
kubectl get pv
Observation:
1. Wait for 2 to 3 mins and the CAPACITY value automatically changes from 4Gi to 8Gi

# Access Application
http://<External-IP-from-get-service-output>
Username: admin101
Password: password101
Observation:
1. No impact to the underlying MySQL Database data.
2. VolumeExpansion is seamless, without impacting the real data.
3. We should find the two users that were present before VolumeExpansion as-is.
```

## Step-11: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for `8GB` Persistent Disk, as the 4GB disk has expanded to 8GB now.
- **Observation:** You should see the disk type as **Balanced persistent disk**

## Step-12: Verify reclaimPolicy: Retain
```t
# Delete kube-manifests
kubectl delete -f kube-manifests/

# List Storage Class
kubectl get sc
Observation:
1. Custom storage class deleted

# List PVC
kubectl get pvc
Observation:
1. PVC deleted

# List PV
kubectl get pv
Observation:
1. PV still present
2. PV STATUS will be "Released", not used by anyone.
```

## Step-13: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for `8GB` Persistent Disk.
- **Observation:** You should see the disk is still present even after all kube-manifests (storageclass, pvc) were deleted.
- This is because we used **reclaimPolicy: Retain** in the Custom Storage Class

## Step-14: Clone Persistent Disk
- **Question:** Why are we cloning the disk?
- **Answer:** In the next demo, we are going to use a **pre-existing persistent disk**. For that purpose we are cloning it.
- Go to Compute Engine -> Storage -> Disks
- Search for `8GB` Persistent Disk.
- Click on **Clone Disk**
  - **Name:** preexisting-pd
  - **Description:** preexisting-pd Demo with GKE
  - **Location:** Single
  - **Snapshot Schedule:** UNCHECK
- Click on **CREATE**

## Step-15: Delete Retained Persistent Disk from this Demo
- Go to Compute Engine -> Storage -> Disks
- Search for the `8GB` Persistent Disk.
- **Disk Name:** pvc-3f2c1daa-122d-4bdb-a7b6-b9943631cc14
- Click on **DELETE DISK**
```t
# List PV
kubectl get pv

# Delete PV
kubectl delete pv pvc-3f2c1daa-122d-4bdb-a7b6-b9943631cc14

# List PV
kubectl get pv
```

## Step-16: Change PVC 8Gi to 4Gi: 01-persistent-volume-claim.yaml
- Change the PVC from 8Gi back to 4Gi so that `kube-manifests` is demo-ready for students.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gke-pd-standard-rwo-sc
  resources:
    requests:
      storage: 4Gi  # Comment at Step-09
      #storage: 8Gi # Uncomment at Step-09
```

## Reference
- [Using the Compute Engine persistent disk CSI Driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver)

================================================
FILE: 23-GKE-PD-Custom-StorageClass/kube-manifests/00-storage-class.yaml
================================================
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gke-pd-standard-rwo-sc
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-balanced # Other supported options are pd-ssd, pd-standard

# STORAGE CLASS
# 1. A StorageClass provides a way for administrators
#    to describe the "classes" of storage they offer.
# 2. Here we are offering GCP PD storage for the GKE Cluster

================================================
FILE: 23-GKE-PD-Custom-StorageClass/kube-manifests/01-persistent-volume-claim.yaml
================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gke-pd-standard-rwo-sc
  resources:
    requests:
      storage: 4Gi  # Comment at Step-09
      #storage: 8Gi # Uncomment at Step-09

# NEED FOR PVC
# 1. Dynamic volume provisioning allows storage volumes to be created
#    on-demand.
# 2. Without dynamic provisioning, cluster administrators have to manually
#    make calls to their cloud or storage provider to create new storage
#    volumes, and then create PersistentVolume objects to represent them in k8s
# 3. The dynamic provisioning feature eliminates the need for cluster
#    administrators to pre-provision storage. Instead, it automatically
#    provisions storage when it is requested by users.
# 4. PVC: Users request dynamically provisioned storage by including
#    a storage class in their PersistentVolumeClaim

================================================
FILE: 23-GKE-PD-Custom-StorageClass/kube-manifests/02-UserManagement-ConfigMap.yaml
================================================
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS webappdb;
    CREATE DATABASE webappdb;

# CONFIG MAP
# 1. A ConfigMap is an API object used to store non-confidential data in
#    key-value pairs.
# 2. Pods can consume ConfigMaps as
## 2.1: environment variables,
## 2.2: command-line arguments,
## 2.3: or as configuration files in a volume.
# We are going to use this in our MySQL Deployment

================================================
FILE: 23-GKE-PD-Custom-StorageClass/kube-manifests/03-mysql-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script

================================================
FILE: 23-GKE-PD-Custom-StorageClass/kube-manifests/04-mysql-clusterip-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None # This means we are going to use the Pod IP

================================================
FILE: 23-GKE-PD-Custom-StorageClass/kube-manifests/05-UserMgmtWebApp-Deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-webapp
  labels:
    app: usermgmt-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-webapp
  template:
    metadata:
      labels:
        app: usermgmt-webapp
    spec:
      initContainers:
        - name: init-db
          image: busybox:1.31
          command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e "  >> MySQL DB Server has started";']
      containers:
        - name: usermgmt-webapp
          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOSTNAME
              value: "mysql"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "webappdb"
            - name: DB_USERNAME
              value: "root"
            - name: DB_PASSWORD
              value: "dbpassword11"

================================================
FILE: 23-GKE-PD-Custom-StorageClass/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: usermgmt-webapp-lb-service
  labels:
    app: usermgmt-webapp
spec:
  type: LoadBalancer
  selector:
    app: usermgmt-webapp
  ports:
    - port: 80         # Service Port
      targetPort: 8080 # Container Port

================================================
FILE: 24-GKE-PD-preexisting-PD/README.md
================================================
---
title: GKE Persistent Disks Preexisting PD
description: Use Google Disks Preexisting PD for Kubernetes Workloads
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Feature: Compute Engine persistent disk CSI Driver
- Verify that the feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster.
- This is required for mounting Google Compute Engine Persistent Disks to Kubernetes workloads in the GKE Cluster.

## Step-01: Introduction
- Use the **pre-existing Persistent Disk** created in the previous demo.
- As part of this demo, we are going to provision the **Persistent Volume (PV)** manually. This is called **static provisioning**.
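The PV manifest used in this section declares the in-tree `gcePersistentDisk` volume type. With the Compute Engine PD CSI driver enabled on the cluster, the same static provisioning can alternatively be expressed through a `csi` block, which is the direction the PD CSI driver documentation points to. A minimal sketch only — `PROJECT_ID` and `ZONE` are placeholders you would replace with your own values, and the disk name must match an existing Compute Engine disk (here, the cloned `preexisting-pd`):

```yaml
# Sketch: CSI-based static provisioning of a pre-existing PD.
# PROJECT_ID and ZONE are placeholders, not values from this repo.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preexisting-pd-csi
spec:
  storageClassName: standard-rwo
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  claimRef:                 # pre-bind this PV to the demo's PVC
    namespace: default
    name: mysql-pv-claim
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: projects/PROJECT_ID/zones/ZONE/disks/preexisting-pd
    fsType: ext4
```

Either form binds to the same PVC via `claimRef`; the demo below sticks with the `gcePersistentDisk` form from the repo's manifests.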
## Step-02: List Kubernetes Storage Classes in GKE Cluster
```t
# List Storage Classes
kubectl get sc
```

## Step-03: 00-persistent-volume.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preexisting-pd
spec:
  storageClassName: standard-rwo
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: mysql-pv-claim
  gcePersistentDisk:
    pdName: preexisting-pd
    fsType: ext4
```

## Step-04: 01-persistent-volume-claim.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 8Gi
```

## Step-05: Other Kubernetes YAML Manifests
- No changes to the other Kubernetes YAML manifests; they are the same as in the previous section
  - 02-UserManagement-ConfigMap.yaml
  - 03-mysql-deployment.yaml
  - 04-mysql-clusterip-service.yaml
  - 05-UserMgmtWebApp-Deployment.yaml
  - 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-06: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <POD-NAME>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-07: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for the `8GB` Persistent Disk
- **Observation:** You should see the disk's **In use by** field updated and bound to **gke-standard-cluster-1-default-pool-db7b638f-j5lk**

## Step-08: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<EXTERNAL-IP>
Username: admin101
Password: password101
Observation:
1. You should see admin102 already present.
2. This is because in the previous demo we already created admin102, and we have mounted that data disk here using the "static provisioning PV" concept.
```

## Step-09: Clean-Up
```t
# Delete Kubernetes Objects
kubectl delete -f kube-manifests/

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# Delete Persistent Disk: preexisting-pd
1. "preexisting-pd" will not get deleted automatically
2. We should manually delete it
3. We should observe that its "In use by" field is empty (not associated with anything)
4. Go to Compute Engine -> Disks -> preexisting-pd -> DELETE
```

================================================
FILE: 24-GKE-PD-preexisting-PD/kube-manifests/00-persistent-volume.yaml
================================================
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preexisting-pd
spec:
  storageClassName: standard-rwo
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: mysql-pv-claim
  gcePersistentDisk:
    pdName: preexisting-pd
    fsType: ext4

================================================
FILE: 24-GKE-PD-preexisting-PD/kube-manifests/01-persistent-volume-claim.yaml
================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 8Gi

# NEED FOR PVC
# 1. Dynamic volume provisioning allows storage volumes to be created
#    on-demand.
# 2. Without dynamic provisioning, cluster administrators have to manually
#    make calls to their cloud or storage provider to create new storage
#    volumes, and then create PersistentVolume objects to represent them in k8s
# 3. The dynamic provisioning feature eliminates the need for cluster
#    administrators to pre-provision storage. Instead, it automatically
#    provisions storage when it is requested by users.
# 4. PVC: Users request dynamically provisioned storage by including
#    a storage class in their PersistentVolumeClaim

================================================
FILE: 24-GKE-PD-preexisting-PD/kube-manifests/02-UserManagement-ConfigMap.yaml
================================================
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS webappdb;
    CREATE DATABASE webappdb;

# CONFIG MAP
# 1. A ConfigMap is an API object used to store non-confidential data in
#    key-value pairs.
# 2. Pods can consume ConfigMaps as
## 2.1: environment variables,
## 2.2: command-line arguments,
## 2.3: or as configuration files in a volume.
# We are going to use this in our MySQL Deployment

================================================
FILE: 24-GKE-PD-preexisting-PD/kube-manifests/03-mysql-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script

================================================
FILE: 24-GKE-PD-preexisting-PD/kube-manifests/04-mysql-clusterip-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None # This means we are going to use the Pod IP
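A note on the `clusterIP: None` line in the Service above: it makes `mysql` a headless Service, so cluster DNS resolves the name `mysql` directly to the MySQL Pod's IP instead of to a Service virtual IP. For comparison, a sketch of the regular ClusterIP form (not used in this demo; names taken from the manifests in this section):

```yaml
# Alternative (NOT used in this demo): a regular ClusterIP Service.
# Omitting `clusterIP: None` gives the Service its own virtual IP,
# and kube-proxy load-balances connections to matching Pods.
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306       # Service port
      targetPort: 3306 # Container port
# With the headless form used in this demo, a DNS lookup of
# "mysql.default.svc.cluster.local" returns the Pod IP directly;
# with this form it returns the Service's ClusterIP.
```

Either way the webapp's `DB_HOSTNAME: "mysql"` setting resolves; headless is sufficient here because there is a single MySQL replica with `strategy: Recreate`.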
================================================
FILE: 24-GKE-PD-preexisting-PD/kube-manifests/05-UserMgmtWebApp-Deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-webapp
  labels:
    app: usermgmt-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-webapp
  template:
    metadata:
      labels:
        app: usermgmt-webapp
    spec:
      initContainers:
        - name: init-db
          image: busybox:1.31
          command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e "  >> MySQL DB Server has started";']
      containers:
        - name: usermgmt-webapp
          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOSTNAME
              value: "mysql"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "webappdb"
            - name: DB_USERNAME
              value: "root"
            - name: DB_PASSWORD
              value: "dbpassword11"

================================================
FILE: 24-GKE-PD-preexisting-PD/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: usermgmt-webapp-lb-service
  labels:
    app: usermgmt-webapp
spec:
  type: LoadBalancer
  selector:
    app: usermgmt-webapp
  ports:
    - port: 80         # Service Port
      targetPort: 8080 # Container Port

================================================
FILE: 25-GKE-PD-Regional-PD/README.md
================================================
---
title: GKE Persistent Disks - Use Regional PD
description: Use Google Disks Regional PD for Kubernetes Workloads
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Feature: Compute Engine persistent disk CSI Driver
- Verify that the feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster.
- This is required for mounting Google Compute Engine Persistent Disks to Kubernetes workloads in the GKE Cluster.

## Step-01: Introduction
- Use Regional Persistent Disks

## Step-02: List Kubernetes Storage Classes in GKE Cluster
```t
# List Storage Classes
kubectl get sc
```

## Step-03: 00-storage-class.yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  #type: pd-standard # Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.
  type: pd-ssd
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-central1-c
    - us-central1-b

## Important Note - Regional PD
# If using a regional cluster, you can leave allowedTopologies unspecified.
# If you do this, when you create a Pod that consumes a PersistentVolumeClaim
# which uses this StorageClass, a regional persistent disk is provisioned with
# two zones. One zone is the same as the zone that the Pod is scheduled in.
# The other zone is randomly picked from the zones available to the cluster.
# When using a zonal cluster, allowedTopologies must be set.

# STORAGE CLASS
# 1. A StorageClass provides a way for administrators
#    to describe the "classes" of storage they offer.
# 2. Here we are offering GCP PD storage for the GKE Cluster
```

## Step-04: 01-persistent-volume-claim.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: regionalpd-storageclass
  resources:
    requests:
      storage: 4Gi
```

## Step-05: Other Kubernetes YAML Manifests
- No changes to the other Kubernetes YAML manifests; they are the same as in the previous section
  - 02-UserManagement-ConfigMap.yaml
  - 03-mysql-deployment.yaml
  - 04-mysql-clusterip-service.yaml
  - 05-UserMgmtWebApp-Deployment.yaml
  - 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-06: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <POD-NAME>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-07: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for the `4GB` Persistent Disk
- **Observation:** Review the items below
  - **Zones:** us-central1-b, us-central1-c
  - **Type:** Regional SSD persistent disk
  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk

## Step-08: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<EXTERNAL-IP>
Username: admin101
Password: password101
```

## Step-09: Clean-Up
```t
# Delete Kubernetes Objects
kubectl delete -f kube-manifests/

# Verify if PD is deleted
Go to Compute Engine -> Disks -> Search for the 4GB Regional SSD persistent disk.
It should be deleted.
```

## References
- [Regional PD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd)

================================================
FILE: 25-GKE-PD-Regional-PD/kube-manifests/00-storage-class.yaml
================================================
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  #type: pd-standard # Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.
  type: pd-ssd
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
#allowedTopologies: ##--> COMMENTED BECAUSE WE ARE USING A REGIONAL GKE CLUSTER
#- matchLabelExpressions:
#  - key: topology.gke.io/zone
#    values:
#    - us-central1-c
#    - us-central1-b

## Important Note - Regional PD
# 1. If using a regional GKE cluster, you can leave allowedTopologies unspecified.
# 2. If you do this, when you create a Pod that consumes a PersistentVolumeClaim
#    which uses this StorageClass, a regional persistent disk is provisioned with
#    two zones. One zone is the same as the zone that the Pod is scheduled in.
#    The other zone is randomly picked from the zones available to the cluster.
# 3. When using a zonal cluster, allowedTopologies must be set.

# STORAGE CLASS
# 1. A StorageClass provides a way for administrators
#    to describe the "classes" of storage they offer.
# 2. Here we are offering GCP PD storage for the GKE Cluster

================================================
FILE: 25-GKE-PD-Regional-PD/kube-manifests/01-persistent-volume-claim.yaml
================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: regionalpd-storageclass
  resources:
    requests:
      storage: 4Gi

# NEED FOR PVC
# 1. Dynamic volume provisioning allows storage volumes to be created
#    on-demand.
# 2. Without dynamic provisioning, cluster administrators have to manually
#    make calls to their cloud or storage provider to create new storage
#    volumes, and then create PersistentVolume objects to represent them in k8s
# 3. The dynamic provisioning feature eliminates the need for cluster
#    administrators to pre-provision storage. Instead, it automatically
#    provisions storage when it is requested by users.
# 4. PVC: Users request dynamically provisioned storage by including
#    a storage class in their PersistentVolumeClaim

================================================
FILE: 25-GKE-PD-Regional-PD/kube-manifests/02-UserManagement-ConfigMap.yaml
================================================
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS webappdb;
    CREATE DATABASE webappdb;

# CONFIG MAP
# 1. A ConfigMap is an API object used to store non-confidential data in
#    key-value pairs.
# 2. Pods can consume ConfigMaps as
## 2.1: environment variables,
## 2.2: command-line arguments,
## 2.3: or as configuration files in a volume.
# We are going to use this in our MySQL Deployment

================================================
FILE: 25-GKE-PD-Regional-PD/kube-manifests/03-mysql-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script

================================================
FILE: 25-GKE-PD-Regional-PD/kube-manifests/04-mysql-clusterip-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None # This means we are going to use the Pod IP

================================================
FILE: 25-GKE-PD-Regional-PD/kube-manifests/05-UserMgmtWebApp-Deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-webapp
  labels:
    app: usermgmt-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-webapp
  template:
    metadata:
      labels:
        app: usermgmt-webapp
    spec:
      initContainers:
        - name: init-db
          image: busybox:1.31
          command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e "  >> MySQL DB Server has started";']
      containers:
        - name: usermgmt-webapp
          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOSTNAME
              value: "mysql"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "webappdb"
            - name: DB_USERNAME
              value: "root"
            - name: DB_PASSWORD
              value: "dbpassword11"

================================================
FILE: 25-GKE-PD-Regional-PD/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: usermgmt-webapp-lb-service
  labels:
    app: usermgmt-webapp
spec:
  type: LoadBalancer
  selector:
    app: usermgmt-webapp
  ports:
    - port: 80         # Service Port
      targetPort: 8080 # Container Port

================================================
FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/01-persistent-volume-claim.yaml
================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 4Gi

# NEED FOR PVC
# 1. Dynamic volume provisioning allows storage volumes to be created
#    on-demand.
# 2. Without dynamic provisioning, cluster administrators have to manually
#    make calls to their cloud or storage provider to create new storage
#    volumes, and then create PersistentVolume objects to represent them in k8s
# 3. The dynamic provisioning feature eliminates the need for cluster
#    administrators to pre-provision storage. Instead, it automatically
#    provisions storage when it is requested by users.
# 4.
PVC: Users request dynamically provisioned storage by including # a storage class in their PersistentVolumeClaim ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/02-UserManagement-ConfigMap.yaml ================================================ apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; # CONFIG MAP # 1. A ConfigMap is an API object used to store non-confidential data in # key-value pairs. # 2. Pods can consume ConfigMaps as ## 2.1: environment variables, ## 2.2: command-line arguments, ## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment) ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/03-mysql-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: replicas: 1 selector: matchLabels: app: mysql strategy: type: Recreate template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql:8.0 env: - name: MYSQL_ROOT_PASSWORD value: dbpassword11 ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/04-mysql-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql spec: selector: app: mysql ports: - port: 3306 clusterIP: None # This 
means we are going to use Pod IP ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/05-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD value: "dbpassword11" ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/02-Volume-Snapshot/01-VolumeSnapshotClass.yaml ================================================ apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: my-snapshotclass driver: pd.csi.storage.gke.io deletionPolicy: Delete #parameters: # storage-locations: us-east2 # Optional Note: # To use a custom storage location, add a storage-locations parameter to the snapshot class. 
# To use this parameter, your clusters must use version 1.21 or later. ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/02-Volume-Snapshot/02-VolumeSnapshot.yaml ================================================ apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: my-snapshot1 spec: volumeSnapshotClassName: my-snapshotclass source: persistentVolumeClaimName: mysql-pv-claim ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/03-Volume-Restore/01-restore-pvc.yaml ================================================ apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-restore spec: dataSource: name: my-snapshot1 kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io storageClassName: standard-rwo accessModes: - ReadWriteOnce resources: requests: storage: 4Gi ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/03-Volume-Restore/02-mysql-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: replicas: 1 selector: matchLabels: app: mysql strategy: type: Recreate template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql:8.0 env: - name: MYSQL_ROOT_PASSWORD value: dbpassword11 ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: #claimName: mysql-pv-claim claimName: pvc-restore - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script ================================================ FILE: 26-GKE-PD-Volume-Snapshots-and-Restore/README.md ================================================ --- title: GKE Persistent Disks - 
Volume Snapshots and Restore
description: Use Google Disks Volume Snapshots and Restore concepts for Kubernetes Workloads
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Feature: Compute Engine persistent disk CSI Driver
- Verify that the feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster.
- This is required for mounting Google Compute Engine Persistent Disks to Kubernetes workloads in the GKE Cluster.

## Step-01: Introduction
1. Deploy UMS WebApp with `01-kube-manifests`
2. Create new users (admin102, admin103)
3. Create the Volume Snapshot Kubernetes objects and deploy them
4. Delete the users (admin102, admin103)
5. Deploy the PVC restore manifests in `03-Volume-Restore`
6. Verify that, after the restore, the 2 users we deleted are back in our UMS App
7.
Clean Up (kubectl delete -R -f ) ## Step-02: Kubernetes YAML Manifests - **Project Folder:** 01-kube-manifests - No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo` - 01-persistent-volume-claim.yaml - 02-UserManagement-ConfigMap.yaml - 03-mysql-deployment.yaml - 04-mysql-clusterip-service.yaml - 05-UserMgmtWebApp-Deployment.yaml - 06-UserMgmtWebApp-LoadBalancer-Service.yaml ## Step-03: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f 01-kube-manifests/ # List Storage Class kubectl get sc # List PVC kubectl get pvc # List PV kubectl get pv # List ConfigMaps kubectl get configmap # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 ``` ## Step-04: Verify Persistent Disks - Go to Compute Engine -> Storage -> Disks - Search for `4GB` Persistent Disk - **Observation:** Review the below items - **Zones:** us-central1-c - **Type:** Balanced persistent disk - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk ## Step-05: Access Application ```t # List Services kubectl get svc # Access Application http:// Username: admin101 Password: password101 # Create New User admin102 Username: admin102 Password: password102 First Name: fname102 Last Name: lname102 Email Address: admin102@stacksimplify.com Social Security Address: ssn102 # Create New User admin103 Username: admin103 Password: password103 First Name: fname103 Last Name: lname103 Email Address: admin103@stacksimplify.com Social Security Address: ssn103 ``` ## Step-06: 02-Volume-Snapshot: Create Volume Snapshots - **Project Folder:** 02-Volume-Snapshot ### Step-06-01: 01-VolumeSnapshotClass.yaml ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: my-snapshotclass driver: pd.csi.storage.gke.io deletionPolicy: Delete #parameters: # storage-locations: us-east2 # 
Optional Note:
# To use a custom storage location, add a storage-locations parameter to the snapshot class.
# To use this parameter, your clusters must use version 1.21 or later.
```

### Step-06-02: 02-VolumeSnapshot.yaml
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot1
spec:
  volumeSnapshotClassName: my-snapshotclass
  source:
    persistentVolumeClaimName: mysql-pv-claim
```

### Step-06-03: Deploy Volume Snapshot Kubernetes Manifests
```t
# Deploy Volume Snapshot Kubernetes Manifests
kubectl apply -f 02-Volume-Snapshot/

# List VolumeSnapshotClass
kubectl get volumesnapshotclass

# Describe VolumeSnapshotClass
kubectl describe volumesnapshotclass my-snapshotclass

# List VolumeSnapshot
kubectl get volumesnapshot

# Describe VolumeSnapshot
kubectl describe volumesnapshot my-snapshot1

# Verify the Snapshots
Go to Compute Engine -> Storage -> Snapshots
Observation:
1. You should find the new snapshot created
2. Review the "Creation Time"
3. Review the "Disk Size: 4GB"
```

## Step-07: Delete users admin102, admin103
```t
# List Services
kubectl get svc

# Access Application
http://<EXTERNAL-IP>
Username: admin101
Password: password101

# Delete Users admin102, admin103
```

## Step-08: 03-Volume-Restore: Create Volume Restore
- **Project Folder:** 03-Volume-Restore

### Step-08-01: 01-restore-pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore
spec:
  dataSource:
    name: my-snapshot1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: standard-rwo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```

### Step-08-02: 02-mysql-deployment.yaml
- Update Claim Name from `claimName: mysql-pv-claim` to `claimName: pvc-restore`
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            #claimName: mysql-pv-claim
            claimName: pvc-restore
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script
```

### Step-08-03: Deploy Volume Restore Kubernetes Manifests
```t
# Deploy Volume Restore Kubernetes Manifests
kubectl apply -f 03-Volume-Restore/

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List Pods
kubectl get pods

# Restart Deployments (Optional - If ERRORS)
kubectl rollout restart deployment mysql
kubectl rollout restart deployment usermgmt-webapp

# Review Persistent Disk
1. Go to Compute Engine -> Storage -> Disks
2. You should find a new "Balanced persistent disk" created for the new PVC "pvc-restore"
3. To get the exact disk name for the "pvc-restore" PVC, run "kubectl get pvc"

# Access Application
http://<EXTERNAL-IP>
Username: admin101
Password: password101
Observation:
1. You should find admin102, admin103 present
2. This proves we restored the MySQL data using VolumeSnapshots and a restore PVC
```

## Step-09: Clean-Up
```t
# Delete All (Disks, Snapshots)
kubectl delete -f 01-kube-manifests -f 02-Volume-Snapshot -f 03-Volume-Restore

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List VolumeSnapshotClass
kubectl get volumesnapshotclass

# List VolumeSnapshot
kubectl get volumesnapshot

# Verify Persistent Disks
1. Go to Compute Engine -> Storage -> Disks -> REFRESH
2. The two disks created as part of this demo should be deleted

# Verify Disk Snapshots
1. Go to Compute Engine -> Storage -> Snapshots -> REFRESH
2. There should not be any snapshot which we created as part of this demo.
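# Optional: the same checks can be done from the CLI instead of the console.
# (Assumption, not from the original guide: gcloud is installed and authenticated
# against the same project as the GKE cluster.)
gcloud compute disks list
gcloud compute snapshots list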
``` ================================================ FILE: 27-GKE-PD-Volume-Clone/01-kube-manifests/01-persistent-volume-claim.yaml ================================================ apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: standard-rwo resources: requests: storage: 4Gi # NEED FOR PVC # 1. Dynamic volume provisioning allows storage volumes to be created # on-demand. # 2. Without dynamic provisioning, cluster administrators have to manually # make calls to their cloud or storage provider to create new storage # volumes, and then create PersistentVolume objects to represent them in k8s # 3. The dynamic provisioning feature eliminates the need for cluster # administrators to pre-provision storage. Instead, it automatically # provisions storage when it is requested by users. # 4. PVC: Users request dynamically provisioned storage by including # a storage class in their PersistentVolumeClaim ================================================ FILE: 27-GKE-PD-Volume-Clone/01-kube-manifests/02-UserManagement-ConfigMap.yaml ================================================ apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; # CONFIG MAP # 1. A ConfigMap is an API object used to store non-confidential data in # key-value pairs. # 2. Pods can consume ConfigMaps as ## 2.1: environment variables, ## 2.2: command-line arguments, ## 2.3: or as configuration files in a volume. 
We are going to use this in our MySQL Deployment) ================================================ FILE: 27-GKE-PD-Volume-Clone/01-kube-manifests/03-mysql-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: replicas: 1 selector: matchLabels: app: mysql strategy: type: Recreate template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql:8.0 env: - name: MYSQL_ROOT_PASSWORD value: dbpassword11 ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script ================================================ FILE: 27-GKE-PD-Volume-Clone/01-kube-manifests/04-mysql-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql spec: selector: app: mysql ports: - port: 3306 clusterIP: None # This means we are going to use Pod IP ================================================ FILE: 27-GKE-PD-Volume-Clone/01-kube-manifests/05-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! 
nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD value: "dbpassword11" ================================================ FILE: 27-GKE-PD-Volume-Clone/01-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/01-podpvc-clone.yaml ================================================ apiVersion: v1 kind: PersistentVolumeClaim metadata: name: podpvc-clone spec: dataSource: name: mysql-pv-claim # the name of the source PersistentVolumeClaim that you created as part of UMS Web App kind: PersistentVolumeClaim accessModes: - ReadWriteOnce storageClassName: standard-rwo # same as the StorageClass of the source PersistentVolumeClaim. resources: requests: storage: 4Gi # the amount of storage to request, which must be at least the size of the source PersistentVolumeClaim ================================================ FILE: 27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/02-UserManagement-ConfigMap.yaml ================================================ apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script2 data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; # CONFIG MAP # 1. A ConfigMap is an API object used to store non-confidential data in # key-value pairs. # 2. 
Pods can consume ConfigMaps as ## 2.1: environment variables, ## 2.2: command-line arguments, ## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment) ================================================ FILE: 27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/03-mysql-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: mysql2 spec: replicas: 1 selector: matchLabels: app: mysql2 strategy: type: Recreate template: metadata: labels: app: mysql2 spec: containers: - name: mysql2 image: mysql:8.0 env: - name: MYSQL_ROOT_PASSWORD value: dbpassword11 ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: #claimName: mysql-pv-claim claimName: podpvc-clone - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script2 ================================================ FILE: 27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/04-mysql-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql2 spec: selector: app: mysql2 ports: - port: 3306 clusterIP: None # This means we are going to use Pod IP ================================================ FILE: 27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/05-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp2 labels: app: usermgmt-webapp2 spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp2 template: metadata: labels: app: usermgmt-webapp2 spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the 
availability of MySQL Server deployment"; while ! nc -z mysql2 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp2 image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql2" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD value: "dbpassword11" ================================================ FILE: 27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp2-lb-service labels: app: usermgmt-webapp2 spec: type: LoadBalancer selector: app: usermgmt-webapp2 ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/01-persistent-volume-claim.yaml ================================================ apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: standard-rwo resources: requests: storage: 4Gi # NEED FOR PVC # 1. Dynamic volume provisioning allows storage volumes to be created # on-demand. # 2. Without dynamic provisioning, cluster administrators have to manually # make calls to their cloud or storage provider to create new storage # volumes, and then create PersistentVolume objects to represent them in k8s # 3. The dynamic provisioning feature eliminates the need for cluster # administrators to pre-provision storage. Instead, it automatically # provisions storage when it is requested by users. # 4. 
PVC: Users request dynamically provisioned storage by including # a storage class in their PersistentVolumeClaim ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/02-UserManagement-ConfigMap.yaml ================================================ apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; # CONFIG MAP # 1. A ConfigMap is an API object used to store non-confidential data in # key-value pairs. # 2. Pods can consume ConfigMaps as ## 2.1: environment variables, ## 2.2: command-line arguments, ## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment) ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/03-mysql-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: replicas: 1 selector: matchLabels: app: mysql strategy: type: Recreate template: metadata: labels: app: mysql spec: nodeSelector: nodetype: db containers: - name: mysql image: mysql:8.0 env: - name: MYSQL_ROOT_PASSWORD value: dbpassword11 ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/04-mysql-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql spec: selector: app: mysql 
ports: - port: 3306 clusterIP: None # This means we are going to use Pod IP ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/05-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD value: "dbpassword11" ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/01-podpvc-clone.yaml ================================================ apiVersion: v1 kind: PersistentVolumeClaim metadata: name: podpvc-clone spec: dataSource: name: mysql-pv-claim # the name of the source PersistentVolumeClaim that you created as part of UMS Web App kind: PersistentVolumeClaim accessModes: - ReadWriteOnce storageClassName: standard-rwo # same as 
the StorageClass of the source PersistentVolumeClaim. resources: requests: storage: 4Gi # the amount of storage to request, which must be at least the size of the source PersistentVolumeClaim ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/02-UserManagement-ConfigMap.yaml ================================================ apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script2 data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; # CONFIG MAP # 1. A ConfigMap is an API object used to store non-confidential data in # key-value pairs. # 2. Pods can consume ConfigMaps as ## 2.1: environment variables, ## 2.2: command-line arguments, ## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment) ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/03-mysql-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: mysql2 spec: replicas: 1 selector: matchLabels: app: mysql2 strategy: type: Recreate template: metadata: labels: app: mysql2 spec: nodeSelector: nodetype: db containers: - name: mysql2 image: mysql:8.0 env: - name: MYSQL_ROOT_PASSWORD value: dbpassword11 ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: #claimName: mysql-pv-claim claimName: podpvc-clone - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script2 ================================================ FILE: 
27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/04-mysql-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql2 spec: selector: app: mysql2 ports: - port: 3306 clusterIP: None # This means we are going to use Pod IP ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/05-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp2 labels: app: usermgmt-webapp2 spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp2 template: metadata: labels: app: usermgmt-webapp2 spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql2 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp2 image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql2" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD value: "dbpassword11" ================================================ FILE: 27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp2-lb-service labels: app: usermgmt-webapp2 spec: type: LoadBalancer selector: app: usermgmt-webapp2 ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 27-GKE-PD-Volume-Clone/README.md ================================================ --- title: GKE Persistent Disks - Volume Clone description: Use Google Disks 
Volume Clone for GKE Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. Feature: Compute Engine persistent disk CSI Driver - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. ## Step-01: Introduction - Understand how to implement cloned Disks in GKE ## Step-02: Kubernetes YAML Manifests - **Project Folder:** 01-kube-manifests - No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo` - 01-persistent-volume-claim.yaml - 02-UserManagement-ConfigMap.yaml - 03-mysql-deployment.yaml - 04-mysql-clusterip-service.yaml - 05-UserMgmtWebApp-Deployment.yaml - 06-UserMgmtWebApp-LoadBalancer-Service.yaml ## Step-03: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f 01-kube-manifests/ # List Storage Class kubectl get sc # List PVC kubectl get pvc # List PV kubectl get pv # List ConfigMaps kubectl get configmap # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 ``` ## Step-04: Verify Persistent Disks - Go to Compute Engine -> Storage -> Disks - Search for `4GB` Persistent Disk - **Observation:** Review the below items - **Zones:** us-central1-c - **Type:** Balanced persistent disk - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk ## Step-05: Access Application ```t # List Services kubectl get svc # Access Application http:// Username: admin101 
Password: password101 # Create New User admin102 Username: admin102 Password: password102 First Name: fname102 Last Name: lname102 Email Address: admin102@stacksimplify.com Social Security Address: ssn102 # Create New User admin103 Username: admin103 Password: password103 First Name: fname103 Last Name: lname103 Email Address: admin103@stacksimplify.com Social Security Address: ssn103 ``` ## Step-06: Volume Clone: 01-podpvc-clone.yaml - **Project Folder:** 02-Use-Cloned-Volume-kube-manifests ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: podpvc-clone spec: dataSource: name: mysql-pv-claim # the name of the source PersistentVolumeClaim that you created as part of UMS Web App kind: PersistentVolumeClaim accessModes: - ReadWriteOnce storageClassName: standard-rwo # same as the StorageClass of the source PersistentVolumeClaim. resources: requests: storage: 4Gi # the amount of storage to request, which must be at least the size of the source PersistentVolumeClaim ``` ## Step-07: 03-mysql-deployment.yaml - **Change-1:** Change the `claimName: mysql-pv-claim` to `claimName: podpvc-clone` - ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: mysql2 spec: replicas: 1 selector: matchLabels: app: mysql2 strategy: type: Recreate template: metadata: labels: app: mysql2 spec: containers: - name: mysql2 image: mysql:8.0 env: - name: MYSQL_ROOT_PASSWORD value: dbpassword11 ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: #claimName: mysql-pv-claim claimName: podpvc-clone - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script2 ``` ## Step-08: Kubernetes YAML Manifests - **Project Folder:** 02-Use-Cloned-Volume-kube-manifests - No changes to 
Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo` - For all the resource names and labels append with 2 (Example: mysql to mysql2, usermgmt-webapp to usermgmt-webapp2) - 02-UserManagement-ConfigMap.yaml - 03-mysql-deployment.yaml - 04-mysql-clusterip-service.yaml - 05-UserMgmtWebApp-Deployment.yaml - 06-UserMgmtWebApp-LoadBalancer-Service.yaml ## Step-09: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f 02-Use-Cloned-Volume-kube-manifests/ # List Storage Class kubectl get sc # List PVC kubectl get pvc # List PV kubectl get pv # List ConfigMaps kubectl get configmap # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f kubectl logs -f usermgmt-webapp2-6ff7d7d849-7lrg5 ``` ## Step-10: Verify Persistent Disks - Go to Compute Engine -> Storage -> Disks - Search for `4GB` Persistent Disk - **Observation:** Review the below items - **Type:** Balanced persistent disk - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk ## Step-11: Access Application ```t # List Services kubectl get svc # Access Application http:// Username: admin101 Password: password101 Observation: 1. You should see both "admin102" and "admin103" users already present. 2. 
This is because we have used the cloned disk from "01-kube-manifests" ``` ## Step-12: Clean-Up ```t # Delete Kubernetes Objects kubectl delete -f 01-kube-manifests -f 02-Use-Cloned-Volume-kube-manifests ``` ```t # Reference https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/ # Get Nodes kubectl get nodes # Show Node Labels kubectl get nodes --show-labels # Label Node kubectl label nodes nodetype=db kubectl label nodes gke-standard-cluster-pri-default-pool-4f7ab141-p0gz nodetype=db # Show Node Labels kubectl get nodes --show-labels ``` ================================================ FILE: 28-GKE-Storage-with-GCP-CloudSQL-Public/README.md ================================================ --- title: GKE Storage with GCP Cloud SQL - MySQL Public Instance description: Use GCP Cloud SQL MySQL DB for GKE Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --zone --project # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-c --project kdaida123 ``` ## Step-01: Introduction - GKE Private Cluster - GCP Cloud SQL with Public IP and Authorized Network for DB as entire internet (0.0.0.0/0) ## Step-02: Create Google Cloud SQL MySQL Instance - Go to SQL -> Choose MySQL - **Instance ID:** ums-db-public-instance - **Password:** KalyanReddy13 - **Database Version:** MYSQL 8.0 - **Choose a configuration to start with:** Development - **Choose region and zonal availability** - **Region:** US-central1(IOWA) - **Zonal availability:** Single Zone - **Primary Zone:** us-central1-a - **Customize your instance** - **Machine Type** - **Machine Type:** LightWeight (1 vCPU, 3.75GB) - **STORAGE** - **Storage Type:** HDD - **Storage Capacity:** 10GB - **Enable automatic storage increases:** CHECKED - **CONNECTIONS** - 
**Instance IP Assignment:**
  - **Private IP:** UNCHECKED
  - **Public IP:** CHECKED
- **Authorized networks**
  - **Name:** All-Internet
  - **Network:** 0.0.0.0/0
  - Click on **DONE**
- **DATA PROTECTION**
  - **Automatic Backups:** UNCHECKED
  - **Enable Deletion protection:** UNCHECKED
- **Maintenance:** Leave to defaults
- **Flags:** Leave to defaults
- **Labels:** Leave to defaults
- Click on **CREATE INSTANCE**

## Step-03: Perform Telnet Test from local desktop
```t
# Telnet Test
telnet <PUBLIC-IP> 3306

# Replace Public IP
telnet 35.184.228.151 3306

## SAMPLE OUTPUT
Kalyans-Mac-mini:25-GKE-Storage-with-GCP-Cloud-SQL kalyanreddy$ telnet 35.184.228.151 3306
Trying 35.184.228.151...
Connected to 151.228.184.35.bc.googleusercontent.com.
Escape character is '^]'.
Q 8.0.26-google?h'Sxcr+?nd'h
```

## Step-04: Create webappdb Database
- Go to SQL -> ums-db-public-instance -> Databases -> **CREATE DATABASE**
- **Database Name:** webappdb
- **Character set:** utf8
- **Collation:** Default collation
- Click on **CREATE**

## Step-05: 01-MySQL-externalName-Service.yaml
- Update Cloud SQL MySQL DB `Public IP` in ExternalName Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-externalname-service
spec:
  type: ExternalName
  externalName: 35.184.228.151
```

## Step-06: 02-Kubernetes-Secrets.yaml
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-db-password
type: Opaque
data:
  db-password: S2FseWFuUmVkZHkxMw== # Base64 of KalyanReddy13
# https://www.base64encode.org/
# Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw==
```

## Step-07: 03-UserMgmtWebApp-Deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-webapp
  labels:
    app: usermgmt-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-webapp
  template:
    metadata:
      labels:
        app: usermgmt-webapp
    spec:
      initContainers:
        - name: init-db
          image: busybox:1.31
          command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while !
nc -z mysql-externalname-service 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql-externalname-service" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ``` ## Step-08: 04-UserMgmtWebApp-LoadBalancer-Service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ``` ## Step-09: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f <POD-NAME> kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 ``` ## Step-10: Access Application ```t # List Services kubectl get svc # Access Application http://<EXTERNAL-IP> Username: admin101 Password: password101 ``` ## Step-11: Connect to MySQL DB (Cloud SQL) from GKE Cluster using kubectl ```t ## Verify that we are able to connect to MySQL DB from the Kubernetes Cluster # Template kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <EXTERNAL-NAME-SERVICE> -u <USERNAME> -p<PASSWORD> # MySQL Client 8.0: Replace External Name Service, Username and Password kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13 mysql> show schemas; mysql> use webappdb; mysql> show tables; mysql> select * from user; mysql> exit ``` ## Step-12: Create New user admin102 and verify in Cloud SQL MySQL webappdb ```t # Access Application http://<EXTERNAL-IP> Username: admin101 Password: password101 # Create New User Username: admin102 Password: password102 First Name: fname102 Last Name: lname102 Email Address: admin102@stacksimplify.com Social Security Address: ssn102 # MySQL Client 8.0: Replace External Name Service, Username and Password kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13 mysql> show schemas; mysql> use webappdb; mysql> show tables; mysql> select * from user; mysql> exit ``` ## Step-13: Clean-Up ```t # Delete Kubernetes Objects kubectl delete -f kube-manifests/ # Delete Cloud SQL MySQL Instance 1. Go to SQL -> ums-db-public-instance -> DELETE 2. Instance ID: ums-db-public-instance 3. Click on DELETE ``` ================================================ FILE: 28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/01-MySQL-externalName-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql-externalname-service spec: type: ExternalName externalName: 35.226.81.153 ================================================ FILE: 28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/02-Kubernetes-Secrets.yaml ================================================ apiVersion: v1 kind: Secret metadata: name: mysql-db-password type: Opaque data: db-password: S2FseWFuUmVkZHkxMw== # Base64 of KalyanReddy13 # https://www.base64encode.org/ # Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw== ================================================ FILE: 28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/03-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL
Server deployment"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql-externalname-service" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ================================================ FILE: 28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/04-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 29-GKE-Storage-with-GCP-CloudSQL-Private/README.md ================================================ --- title: GKE Storage with GCP Cloud SQL - MySQL Private Instance description: Use GCP Cloud SQL MySQL DB for GKE Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. 
Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction - GKE Private Cluster - GCP Cloud SQL with Private IP ## Step-02: Create Private Service Connection to Google Managed Services from our VPC Network ## Step-02-01: Create ALLOCATED IP RANGES FOR SERVICES - Go to VPC Networks -> default -> PRIVATE SERVICE CONNECTION -> ALLOCATED IP RANGES FOR SERVICES - Click on **ALLOCATE IP RANGE** - **Name:** google-managed-services-default (naming convention: google-managed-services-<VPC-NAME>) - **Description:** google-managed-services-default - **IP Range:** Automatic - **Prefix Length:** 16 - Click on **ALLOCATE** ## Step-02-02: Create PRIVATE CONNECTIONS TO SERVICES - Delete the existing connection `servicenetworking-googleapis-com` if one is present - Click on **CREATE CONNECTION** - **Connected Service Provider:** Google Cloud Platform - **Connection Name:** servicenetworking-googleapis-com (DEFAULT POPULATED, CANNOT CHANGE) - **Assigned IP Allocation:** google-managed-services-default - Click on **CONNECT** ## Step-03: Create Google Cloud SQL MySQL Instance - Go to SQL -> Choose MySQL - **Instance ID:** ums-db-private-instance - **Password:** KalyanReddy13 - **Database Version:** MYSQL 8.0 - **Choose a configuration to start with:** Development - **Choose region and zonal availability** - **Region:** us-central1 (Iowa) - **Zonal availability:** Single Zone - **Primary Zone:** us-central1-c - **Customize your instance** - **Machine Type:** Lightweight (1 vCPU, 3.75 GB) - **Storage Type:** HDD - **Storage Capacity:** 10GB - **Enable automatic storage increases:** CHECKED - **Instance IP Assignment:** - **Private IP:** CHECKED - **Associated networking:** default - **MESSAGE:** Private services access connection for
network default has been successfully created. You will now be able to use the same network across all your project's managed services. If you would like to change this connection, please visit the Networking page. - **Allocated IP range (optional):** google-managed-services-default - **Public IP:** UNCHECKED - **Authorized networks:** NOT ADDED ANYTHING - **Data Protection** - **Automatic Backups:** UNCHECKED - **Instance deletion protection:** UNCHECKED - **Maintenance:** Leave to defaults - **Flags:** Leave to defaults - **Labels:** Leave to defaults - Click on **CREATE INSTANCE** ## Step-04: Create DB Schema webappdb - Go to SQL -> ums-db-private-instance -> Databases -> **CREATE DATABASE** - **Database Name:** webappdb - **Character set:** utf8 - **Collation:** Default collation - Click on **CREATE** ## Step-05: 01-MySQL-externalName-Service.yaml - Update Cloud SQL MySQL DB `Private IP` in ExternalName Service ```yaml apiVersion: v1 kind: Service metadata: name: mysql-externalname-service spec: type: ExternalName externalName: 10.64.18.3 ``` ## Step-06: 02-Kubernetes-Secrets.yaml ```yaml apiVersion: v1 kind: Secret metadata: name: mysql-db-password type: Opaque data: db-password: S2FseWFuUmVkZHkxMw== # Base64 of KalyanReddy13 # https://www.base64encode.org/ # Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw== ``` ## Step-07: 03-UserMgmtWebApp-Deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while !
nc -z mysql-externalname-service 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql-externalname-service" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ``` ## Step-08: 04-UserMgmtWebApp-LoadBalancer-Service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ``` ## Step-09: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f <POD-NAME> kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 ``` ## Step-10: Access Application ```t # List Services kubectl get svc # Access Application http://<EXTERNAL-IP> Username: admin101 Password: password101 ``` ## Step-11: Connect to MySQL DB (Cloud SQL) from GKE Cluster using kubectl ```t ## Verify that we are able to connect to MySQL DB from the Kubernetes Cluster # Template kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <EXTERNAL-NAME-SERVICE> -u <USERNAME> -p<PASSWORD> # MySQL Client 8.0: Replace External Name Service, Username and Password kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13 mysql> show schemas; mysql> use webappdb; mysql> show tables; mysql> select * from user; mysql> exit ``` ## Step-12: Create New user admin102 and verify in Cloud SQL MySQL webappdb ```t # Access Application http://<EXTERNAL-IP> Username: admin101 Password: password101 # Create New User Username: admin102 Password: password102 First Name: fname102 Last Name: lname102 Email Address: admin102@stacksimplify.com Social Security Address: ssn102 # MySQL Client 8.0: Replace External Name Service, Username and Password kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13 mysql> show schemas; mysql> use webappdb; mysql> show tables; mysql> select * from user; mysql> exit ``` ## Step-13: Clean-Up ```t # Delete Kubernetes Objects kubectl delete -f kube-manifests/ # Important Note: DON'T DELETE the GCP Cloud SQL Instance. We will use it in the next demo and clean it up there ``` ## References - [Private Service Access with MySQL](https://cloud.google.com/sql/docs/mysql/configure-private-services-access#console) - [Private Service Access](https://cloud.google.com/vpc/docs/private-services-access) - [VPC Network Peering Limits](https://cloud.google.com/vpc/docs/quota#vpc-peering) - [Configuring Private Service Access](https://cloud.google.com/vpc/docs/configure-private-services-access) - [Additional Reference Only - Enabling private services access](https://cloud.google.com/service-infrastructure/docs/enabling-private-services-access) ================================================ FILE: 29-GKE-Storage-with-GCP-CloudSQL-Private/kube-manifests/01-MySQL-externalName-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql-externalname-service spec: type: ExternalName externalName: 10.80.0.3 ================================================ FILE: 29-GKE-Storage-with-GCP-CloudSQL-Private/kube-manifests/02-Kubernetes-Secrets.yaml ================================================ apiVersion: v1 kind: Secret metadata: name: mysql-db-password type: Opaque data: db-password: S2FseWFuUmVkZHkxMw== # Base64 of KalyanReddy13 # https://www.base64encode.org/ # Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw==
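The Secret manifests above hard-code the base64 string; it can be generated and round-trip checked locally with standard shell tools. A minimal sketch (note `echo -n`: without `-n`, a trailing newline would be encoded into the value and the DB password stored in the Secret would be wrong):

```shell
# Encode the MySQL root password the way the Secret's data field expects it.
# "-n" is essential: it prevents a trailing newline from being encoded.
ENCODED=$(echo -n 'KalyanReddy13' | base64)
echo "$ENCODED"   # S2FseWFuUmVkZHkxMw==

# Round-trip check: decoding must return the original password.
echo "$ENCODED" | base64 --decode
```

Alternatively, `kubectl create secret generic` performs the base64 encoding for you from a plain-text literal.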
================================================ FILE: 29-GKE-Storage-with-GCP-CloudSQL-Private/kube-manifests/03-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql-externalname-service" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ================================================ FILE: 29-GKE-Storage-with-GCP-CloudSQL-Private/kube-manifests/04-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 30-GCP-CloudSQL-Private-NO-ExternalNameService/README.md ================================================ --- title: GKE Storage with GCP Cloud SQL - Without ExternalName Service description: Use GCP Cloud SQL MySQL DB for GKE Workloads without ExternalName Service --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. 
Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction - GKE Private Cluster - GCP Cloud SQL with Private IP - [GKE is a Fully Integrated Network Model](https://cloud.google.com/architecture/gke-compare-network-models) - GKE uses a fully integrated network model, which means Pods can reach the Private or Public IP of Cloud SQL directly from the GKE Cluster, without an ExternalName Service. - We are going to update the UMS Kubernetes Deployment `DB_HOSTNAME` with the `Cloud SQL Private IP`, and it should work without any issues. ## Step-02: 03-UserMgmtWebApp-Deployment.yaml - **Change-1:** Update Cloud SQL IP Address in `command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z 10.64.18.3 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']` - **Change-2:** Update Cloud SQL IP Address for `DB_HOSTNAME` value `value: 10.64.18.3` ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 #command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while !
nc -z 10.64.18.3 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME #value: "mysql-externalname-service" value: 10.64.18.3 - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ``` ## Step-03: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f <POD-NAME> kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 ``` ## Step-04: Access Application ```t # List Services kubectl get svc # Access Application http://<EXTERNAL-IP> Username: admin101 Password: password101 ``` ## Step-05: Clean-Up ```t # Delete Kubernetes Objects kubectl delete -f kube-manifests/ # Delete Cloud SQL MySQL Instance 1. Go to SQL -> ums-db-private-instance -> DELETE 2. Instance ID: ums-db-private-instance 3.
Click on DELETE ``` ## References - [Private Service Access with MySQL](https://cloud.google.com/sql/docs/mysql/configure-private-services-access#console) - [Private Service Access](https://cloud.google.com/vpc/docs/private-services-access) - [VPC Network Peering Limits](https://cloud.google.com/vpc/docs/quota#vpc-peering) - [Configuring Private Service Access](https://cloud.google.com/vpc/docs/configure-private-services-access) - [Additional Reference Only - Enabling private services access](https://cloud.google.com/service-infrastructure/docs/enabling-private-services-access) ================================================ FILE: 30-GCP-CloudSQL-Private-NO-ExternalNameService/kube-manifests/01-Kubernetes-Secrets.yaml ================================================ apiVersion: v1 kind: Secret metadata: name: mysql-db-password type: Opaque data: db-password: S2FseWFuUmVkZHkxMw== # Base64 of KalyanReddy13 # https://www.base64encode.org/ # Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw== ================================================ FILE: 30-GCP-CloudSQL-Private-NO-ExternalNameService/kube-manifests/02-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 #command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! 
nc -z 10.80.0.3 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME #value: "mysql-externalname-service" value: 10.80.0.3 - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ================================================ FILE: 30-GCP-CloudSQL-Private-NO-ExternalNameService/kube-manifests/03-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 31-GKE-FileStore-default-StorageClass/README.md ================================================ --- title: GKE Storage with GCP File Store - Default StorageClass description: Use GCP File Store for GKE Workloads with Default StorageClass --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. 
Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction - GKE Storage with GCP File Store - Default StorageClass ## Step-02: Enable Filestore CSI driver (If not enabled) - Go to Kubernetes Engine -> standard-cluster-private-1 -> Details -> Features -> Filestore CSI driver - Click on Checkbox **Enable Filestore CSI Driver** - Click on **SAVE CHANGES** ## Step-03: Verify if Filestore CSI Driver enabled ```t # Verify Filestore CSI Daemonset in kube-system namespace kubectl -n kube-system get ds | grep file Observation: 1. You should find the Daemonset with name "filestore-node" # Verify Filestore CSI Daemonset pods in kube-system namespace kubectl -n kube-system get pods | grep file Observation: 1.
You should find the pods with name "filestore-node-*" ``` ## Step-04: Existing Storage Class - After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances: - **standard-rwx:** using the Basic HDD Filestore service tier - **premium-rwx:** using the Basic SSD Filestore service tier - **enterprise-rwx** - **enterprise-multishare-rwx** ```t # Default Storage Classes created as part of FileStore CSI Enablement kubectl get sc Observation: The following four storage classes are created by default standard-rwx premium-rwx enterprise-rwx enterprise-multishare-rwx ``` ## Step-05: 01-filestore-pvc.yaml ```yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: gke-filestore-pvc spec: accessModes: - ReadWriteMany storageClassName: standard-rwx resources: requests: storage: 1Ti ``` ## Step-06: 02-write-to-filestore-pod.yaml ```yaml apiVersion: v1 kind: Pod metadata: name: filestore-writer-app spec: containers: - name: app image: centos command: ["/bin/sh"] args: ["-c", "while true; do echo GCP File Store used as PV in GKE $(date -u) >> /data/myapp1.txt; sleep 5; done"] volumeMounts: - name: persistent-storage mountPath: /data volumes: - name: persistent-storage persistentVolumeClaim: claimName: gke-filestore-pvc ``` ## Step-07: 03-myapp1-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 volumeMounts: - name: persistent-storage mountPath: /usr/share/nginx/html/filestore volumes: - name: persistent-storage persistentVolumeClaim: claimName: gke-filestore-pvc ``` ## Step-08: 04-loadBalancer-service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: myapp1-lb-service
spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ``` ## Step-09: Enable Cloud FileStore API (if not enabled) - Go to Search -> FileStore -> ENABLE ## Step-10: Deploy kube-manifests ```t # Deploy kube-manifests kubectl apply -f kube-manifests/ # List Storage Class kubectl get sc # List PVC kubectl get pvc # List PV kubectl get pv # List Pods kubectl get pods ``` ## Step-11: Verify GCP Cloud FileStore Instance - Go to FileStore -> Instances - Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f** - **Note:** The Instance ID is dynamically generated; it will differ in your case but will start with pvc-* ## Step-12: Connect to filestore write app Kubernetes pods and Verify ```t # FileStore write app - Connect to Kubernetes Pod kubectl exec --stdin --tty <POD-NAME> -- /bin/sh kubectl exec --stdin --tty filestore-writer-app -- /bin/sh cd /data ls tail -f myapp1.txt exit ``` ## Step-13: Connect to myapp1 Kubernetes pods and Verify ```t # List Pods kubectl get pods # myapp1 POD1 - Connect to Kubernetes Pod kubectl exec --stdin --tty <POD-NAME> -- /bin/sh kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh cd /usr/share/nginx/html/filestore ls tail -f myapp1.txt exit # myapp1 POD2 - Connect to Kubernetes Pod (use your second Pod's name) kubectl exec --stdin --tty <POD-NAME> -- /bin/sh kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh cd /usr/share/nginx/html/filestore ls tail -f myapp1.txt exit ``` ## Step-14: Access Application ```t # List Services kubectl get svc # Access Application http://<EXTERNAL-IP>/filestore/myapp1.txt http://35.232.145.61/filestore/myapp1.txt curl http://35.232.145.61/filestore/myapp1.txt ``` ## Step-15: Clean-Up ```t # Delete Kubernetes Objects kubectl delete -f kube-manifests/ # Verify if FileStore Instance is deleted Go to -> FileStore -> Instances ``` ================================================ FILE:
31-GKE-FileStore-default-StorageClass/kube-manifests/01-filestore-pvc.yaml ================================================ kind: PersistentVolumeClaim apiVersion: v1 metadata: name: gke-filestore-pvc spec: accessModes: - ReadWriteMany storageClassName: standard-rwx resources: requests: storage: 1Ti ================================================ FILE: 31-GKE-FileStore-default-StorageClass/kube-manifests/02-write-to-filestore-pod.yaml ================================================ apiVersion: v1 kind: Pod metadata: name: filestore-writer-app spec: containers: - name: app image: centos command: ["/bin/sh"] args: ["-c", "while true; do echo GCP Cloud FileStore used as PV in GKE $(date -u) >> /data/myapp1.txt; sleep 5; done"] volumeMounts: - name: persistent-storage mountPath: /data volumes: - name: persistent-storage persistentVolumeClaim: claimName: gke-filestore-pvc ================================================ FILE: 31-GKE-FileStore-default-StorageClass/kube-manifests/03-myapp1-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 volumeMounts: - name: persistent-storage mountPath: /usr/share/nginx/html/filestore volumes: - name: persistent-storage persistentVolumeClaim: claimName: gke-filestore-pvc ================================================ FILE: 31-GKE-FileStore-default-StorageClass/kube-manifests/04-loadBalancer-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port 
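The `filestore-writer-app` Pod above loops forever, appending a timestamped line to the shared Filestore volume every 5 seconds while the nginx Pods read the same file over ReadWriteMany. The append-then-read pattern can be sketched locally with a bounded loop (a temp directory stands in for the `/data` mount; 3 iterations instead of `while true`):

```shell
# Bounded local version of the writer pod's loop: append timestamped
# lines to a file, the way the pod writes to the shared /data mount.
DATA_DIR=$(mktemp -d)
for i in 1 2 3; do
  echo "GCP Cloud FileStore used as PV in GKE $(date -u)" >> "$DATA_DIR/myapp1.txt"
done

# Inspect the file the way the demo does with "tail -f myapp1.txt"
# (here without -f, so the command terminates).
tail -n 3 "$DATA_DIR/myapp1.txt"
rm -rf "$DATA_DIR"
```

In the cluster, all replicas see every appended line because the Filestore share is a single NFS-backed filesystem mounted into each Pod.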
================================================ FILE: 32-GKE-FileStore-custom-StorageClass/README.md ================================================ --- title: GKE Storage with GCP File Store - Custom StorageClass description: Use GCP File Store for GKE Workloads with Custom StorageClass --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction - GKE Storage with GCP File Store - Custom StorageClass ## Step-02: Enable Filestore CSI driver (If not enabled) - Go to Kubernetes Engine -> standard-cluster-private-1 -> Details -> Features -> Filestore CSI driver - Click on Checkbox **Enable Filestore CSI Driver** - Click on **SAVE CHANGES** ## Step-03: Verify if Filestore CSI Driver enabled ```t # Verify Filestore CSI Daemonset in kube-system namespace kubectl -n kube-system get ds | grep file Observation: 1. You should find the Daemonset with name "filestore-node" # Verify Filestore CSI Daemonset pods in kube-system namespace kubectl -n kube-system get pods | grep file Observation: 1.
You should find the pods with name "filestore-node-*" ``` ## Step-04: Existing Storage Class - After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances: - **standard-rwx:** using the Basic HDD Filestore service tier - **premium-rwx:** using the Basic SSD Filestore service tier ```t # Default Storage Classes created as part of FileStore CSI Enablement kubectl get sc Observation: The following two storage classes are created by default standard-rwx premium-rwx ``` ## Step-05: 00-filestore-storage-class.yaml ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: filestore-storage-class provisioner: filestore.csi.storage.gke.io # File Store CSI Driver volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: tier: standard # Allowed values standard, premium, or enterprise network: default # The network parameter can be used when provisioning Filestore instances on non-default VPCs. Non-default VPCs require special firewall rules to be set up.
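# Note (illustrative comment, not part of the repo manifest):
# volumeBindingMode: WaitForFirstConsumer delays provisioning of the
# Filestore instance until the first Pod using the PVC is scheduled,
# and allowVolumeExpansion: true permits increasing the PVC's requested
# storage later by editing the PVC.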
``` ## Step-06: Other YAML files are same as previous section - The following files are unchanged from the previous section: - 01-filestore-pvc.yaml - 02-write-to-filestore-pod.yaml - 03-myapp1-deployment.yaml - 04-loadBalancer-service.yaml ## Step-07: Deploy kube-manifests ```t # Deploy kube-manifests kubectl apply -f kube-manifests/ # List Storage Class kubectl get sc # List PVC kubectl get pvc # List PV kubectl get pv # List Pods kubectl get pods ``` ## Step-08: Verify GCP Cloud FileStore Instance - Go to FileStore -> Instances - Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f** - **Note:** The Instance ID is dynamically generated; it will differ in your case but will start with pvc-* ## Step-09: Connect to filestore write app Kubernetes pods and Verify ```t # FileStore write app - Connect to Kubernetes Pod kubectl exec --stdin --tty <POD-NAME> -- /bin/sh kubectl exec --stdin --tty filestore-writer-app -- /bin/sh cd /data ls tail -f myapp1.txt exit ``` ## Step-10: Connect to myapp1 Kubernetes pods and Verify ```t # List Pods kubectl get pods # myapp1 POD1 - Connect to Kubernetes Pod kubectl exec --stdin --tty <POD-NAME> -- /bin/sh kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh cd /usr/share/nginx/html/filestore ls tail -f myapp1.txt exit # myapp1 POD2 - Connect to Kubernetes Pod (use your second Pod's name) kubectl exec --stdin --tty <POD-NAME> -- /bin/sh kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh cd /usr/share/nginx/html/filestore ls tail -f myapp1.txt exit ``` ## Step-11: Access Application ```t # List Services kubectl get svc # Access Application http://<EXTERNAL-IP>/filestore/myapp1.txt http://35.232.145.61/filestore/myapp1.txt curl http://35.232.145.61/filestore/myapp1.txt ``` ## Step-12: Clean-Up ```t # Delete Kubernetes Objects kubectl delete -f kube-manifests/ # Verify if FileStore Instance is deleted Go to -> FileStore -> Instances ``` ================================================ FILE: 32-GKE-FileStore-custom-StorageClass/kube-manifests/00-filestore-storage-class.yaml
================================================
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-storage-class
provisioner: filestore.csi.storage.gke.io # File Store CSI Driver
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  tier: standard # Allowed values: standard, premium, or enterprise
  network: default # The network parameter can be used when provisioning Filestore instances on non-default VPCs. Non-default VPCs require special firewall rules to be set up.

================================================
FILE: 32-GKE-FileStore-custom-StorageClass/kube-manifests/01-filestore-pvc.yaml
================================================
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gke-filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: filestore-storage-class
  resources:
    requests:
      storage: 1Ti

================================================
FILE: 32-GKE-FileStore-custom-StorageClass/kube-manifests/02-write-to-filestore-pod.yaml
================================================
apiVersion: v1
kind: Pod
metadata:
  name: filestore-writer-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo GCP Cloud FileStore used as PV in GKE $(date -u) >> /data/myapp1.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: gke-filestore-pvc

================================================
FILE: 32-GKE-FileStore-custom-StorageClass/kube-manifests/03-myapp1-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: persistent-storage
              mountPath: /usr/share/nginx/html/filestore
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: gke-filestore-pvc

================================================
FILE: 32-GKE-FileStore-custom-StorageClass/kube-manifests/04-loadBalancer-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/01-myapp1-kube-manifests/01-filestore-pvc.yaml
================================================
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gke-filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      #storage: 1Ti
      storage: 100Gi

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/01-myapp1-kube-manifests/02-write-to-filestore-pod.yaml
================================================
apiVersion: v1
kind: Pod
metadata:
  name: filestore-writer-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo GCP Cloud FileStore used as PV in GKE $(date -u) >> /data/myapp1.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: gke-filestore-pvc

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/01-myapp1-kube-manifests/03-myapp1-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: persistent-storage
              mountPath: /usr/share/nginx/html/filestore
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: gke-filestore-pvc

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/01-myapp1-kube-manifests/04-loadBalancer-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/02-volume-backup-kube-manifests/01-VolumeSnapshotClass.yaml
================================================
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-gcp-filestore-backup-snap-class
driver: filestore.csi.storage.gke.io
parameters:
  type: backup
deletionPolicy: Delete

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/02-volume-backup-kube-manifests/02-VolumeSnapshot.yaml
================================================
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: myapp1-volume-snapshot
spec:
  volumeSnapshotClassName: csi-gcp-filestore-backup-snap-class
  source:
    persistentVolumeClaimName: gke-filestore-pvc

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/03-volume-restore-myapp2-kube-manifests/01-filestore-pvc.yaml
================================================
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: restored-filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti
  dataSource:
    kind: VolumeSnapshot
    name: myapp1-volume-snapshot
    apiGroup: snapshot.storage.k8s.io

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/03-volume-restore-myapp2-kube-manifests/02-myapp2-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp2-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp2
  template:
    metadata: # Dictionary
      name: myapp2-pod
      labels: # Dictionary
        app: myapp2 # Key value pairs
    spec:
      containers: # List
        - name: myapp2-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: persistent-storage
              mountPath: /usr/share/nginx/html/filestore
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: restored-filestore-pvc

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/03-volume-restore-myapp2-kube-manifests/03-myapp2-loadBalancer-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp2-lb-service
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp2
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port

================================================
FILE: 33-GKE-FileStore-Backup-and-Restore/README.md
================================================
---
title: GKE Storage with GCP File Store - Backup and Restore
description: Use GCP File Store for GKE Workloads - Implement Backup and Restore
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- GKE Storage with GCP File Store
- Implement Backups using `VolumeSnapshotClass` and `VolumeSnapshot`
- Implement Restore of FileStore in myapp2 Application and Verify

## Step-02: YAML files are same as first FileStore Demo
- **Project Folder:** 01-myapp1-kube-manifests
- YAML files are the same as in the first FileStore Demo:
  - 01-filestore-pvc.yaml
  - 02-write-to-filestore-pod.yaml
  - 03-myapp1-deployment.yaml
  - 04-loadBalancer-service.yaml

## Step-03: Deploy 01-myapp1-kube-manifests and Verify
```t
# Deploy 01-myapp1-kube-manifests
kubectl apply -f 01-myapp1-kube-manifests

# List Storage Classes
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List Pods
kubectl get pods
```

## Step-04: Verify GCP Cloud FileStore Instance
- Go to FileStore -> Instances
- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**
- **Note:** The Instance ID is dynamically generated; it will be different in your case, starting with pvc-*

## Step-05: Connect to filestore write app Kubernetes pod and Verify
```t
# FileStore write app - Connect to Kubernetes Pod
kubectl exec --stdin --tty <pod-name> -- /bin/sh
kubectl exec --stdin --tty filestore-writer-app -- /bin/sh
cd /data
ls
tail -f myapp1.txt
exit
```

## Step-06: Connect to myapp1 Kubernetes pods and Verify
```t
# List Pods
kubectl get pods

# myapp1 POD1 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <pod-name> -- /bin/sh
kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit

# myapp1 POD2 - Connect to Kubernetes Pod (use the second pod's name from "kubectl get pods")
kubectl exec --stdin --tty <pod-name> -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit
```

## Step-07: Access Application
```t
# List Services
kubectl get svc

# myapp1 - Access Application
http://<Load-Balancer-IP>/filestore/myapp1.txt
http://35.232.145.61/filestore/myapp1.txt
curl http://35.232.145.61/filestore/myapp1.txt
```

## Step-08: Volume Backup: 01-VolumeSnapshotClass.yaml
- **Project Folder:** 02-volume-backup-kube-manifests
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-gcp-filestore-backup-snap-class
driver: filestore.csi.storage.gke.io
parameters:
  type: backup
deletionPolicy: Delete
```

## Step-09: Volume Backup: 02-VolumeSnapshot.yaml
- **Project Folder:** 02-volume-backup-kube-manifests
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: myapp1-volume-snapshot
spec:
  volumeSnapshotClassName: csi-gcp-filestore-backup-snap-class
  source:
    persistentVolumeClaimName: gke-filestore-pvc
```

## Step-10: Volume Backup: Deploy 02-volume-backup-kube-manifests and Verify
```t
# Deploy 02-volume-backup-kube-manifests
kubectl apply -f 02-volume-backup-kube-manifests

# List VolumeSnapshotClass
kubectl get volumesnapshotclass

# Describe VolumeSnapshotClass
kubectl describe volumesnapshotclass csi-gcp-filestore-backup-snap-class

# List VolumeSnapshot
kubectl get volumesnapshot

# Describe VolumeSnapshot
kubectl describe volumesnapshot myapp1-volume-snapshot
```

## Step-11: Volume Backup: Verify GCP Cloud FileStore Backups
- Go to FileStore -> Backups
- Observation: You should find the Backup with name `snapshot-*` (Example: snapshot-b4f24bd7-649b-45bb-8a0a-2b09d5b0e631)

## Step-12: Volume Restore: 01-filestore-pvc.yaml
- **Project Folder:** 03-volume-restore-myapp2-kube-manifests
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: restored-filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti
  dataSource:
    kind: VolumeSnapshot
    name: myapp1-volume-snapshot
    apiGroup: snapshot.storage.k8s.io
```

## Step-13: Volume Restore: 02-myapp2-deployment.yaml
- **Project Folder:** 03-volume-restore-myapp2-kube-manifests
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp2-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp2
  template:
    metadata: # Dictionary
      name: myapp2-pod
      labels: # Dictionary
        app: myapp2 # Key value pairs
    spec:
      containers: # List
        - name: myapp2-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: persistent-storage
              mountPath: /usr/share/nginx/html/filestore
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: restored-filestore-pvc
```

## Step-14: Volume Restore: 03-myapp2-loadBalancer-service.yaml
- **Project Folder:** 03-volume-restore-myapp2-kube-manifests
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp2-lb-service
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp2
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port
```

## Step-15: Volume Restore: Deploy 03-volume-restore-myapp2-kube-manifests and Verify
```t
# Deploy 03-volume-restore-myapp2-kube-manifests
kubectl apply -f 03-volume-restore-myapp2-kube-manifests

# List Storage Classes
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List Pods
kubectl get pods

# Verify if new FileStore Instance is Created
Go to -> FileStore -> Instances
```

## Step-16: Volume Restore: Connect to myapp2 Kubernetes pods and Verify
```t
# List Pods
kubectl get pods

# myapp2 POD1 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <pod-name> -- /bin/sh
kubectl exec --stdin --tty myapp2-deployment-6dccd6557-9x6dn -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit

# myapp2 POD2 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <pod-name> -- /bin/sh
kubectl exec --stdin --tty myapp2-deployment-6dccd6557-mbbjm -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit
```

## Step-17: Volume Restore: Access Applications
```t
# List Services
kubectl get svc

# myapp1 - Access Application
http://<myapp1-Load-Balancer-IP>/filestore/myapp1.txt
http://35.232.145.61/filestore/myapp1.txt

# myapp2 - Access Application
http://<myapp2-Load-Balancer-IP>/filestore/myapp1.txt
http://34.71.145.41/filestore/myapp1.txt

OBSERVATION:
1. For MyApp1, the writer app keeps writing to FileStore, so we see the latest timestamped lines (many lines, and the file keeps growing)
2. For MyApp2, which we restored from backup, only the lines present in the file at the time of the snapshot are displayed.
3. KEY here: we are able to successfully use the FileStore backup for our Kubernetes workloads
```

## Step-18: Clean-Up
```t
# Delete Kubernetes Objects
kubectl delete -f 01-myapp1-kube-manifests -f 02-volume-backup-kube-manifests -f 03-volume-restore-myapp2-kube-manifests

# Verify if the two FileStore Instances are deleted
Go to -> FileStore -> Instances

# Verify if FileStore Backup is deleted
Go to -> FileStore -> Backups
```

================================================
FILE: 34-GKE-Ingress-Basics/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE Ingress Basics
description: Implement GCP Google Kubernetes Engine GKE Ingress Basics
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- Learn Ingress Basics
- [Ingress Diagram Reference](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#ingress_to_resource_mappings)

## Step-02: Verify HTTP Load Balancing enabled for your GKE Cluster
- Go to Kubernetes Engine -> standard-cluster-private-1 -> DETAILS tab -> Networking
- Verify `HTTP Load Balancing: Enabled`

## Step-03: Kubernetes Deployment: 01-Nginx-App3-Deployment-and-NodePortService.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
        - name: app3-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
```

## Step-04: Kubernetes NodePort Service: 01-Nginx-App3-Deployment-and-NodePortService.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-nodeport-service
  labels:
    app: app3-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app3-nginx
  ports:
    - port: 80
      targetPort: 80
```

## Step-05: 02-ingress-basic.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-basics
  annotations:
    # If the class annotation is not specified it defaults to "gce".
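    # Added note (not in the original manifest): GKE's built-in ingress controller is
    # selected via this annotation; for the "gce" and "gce-internal" classes GKE does
    # not use the newer spec.ingressClassName field.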
    # gce: external load balancer
    # gce-internal: internal load balancer
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
```

## Step-06: Deploy kube-manifests and Verify
```t
# Deploy kube-manifests
kubectl apply -f kube-manifests/

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress
kubectl get ingress

Observation:
1. Wait for the ADDRESS field to be populated with the Public IP Address

# Describe Ingress
kubectl describe ingress ingress-basics

# Access Application
http://<ADDRESS>

Important Note:
1. If you get a 502 error, wait for 2 to 3 minutes and retry.
2. It takes time to create the load balancer on GCP.
```

## Step-07: Verify Load Balancer
- Go to Load Balancing -> Click on Load balancer

### Load Balancer View
- DETAILS Tab
  - Frontend
  - Host and Path Rules
  - Backend Services
  - Health Checks
- MONITORING TAB
- CACHING TAB

### Load Balancer Components View
- FORWARDING RULES
- TARGET PROXIES
- BACKEND SERVICES
- BACKEND BUCKETS
- CERTIFICATES
- TARGET POOLS

## Step-08: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests

# Verify if load balancer got deleted
Go to Load Balancing -> Should not see any load balancers
```

## GKE Ingress References
- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)
- [Ingress Concepts](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress)
- [Service Networking](https://cloud.google.com/kubernetes-engine/docs/concepts/service-networking)

================================================
FILE: 34-GKE-Ingress-Basics/kube-manifests/01-Nginx-App3-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
        - name: app3-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-nodeport-service
  labels:
    app: app3-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app3-nginx
  ports:
    - port: 80
      targetPort: 80

================================================
FILE: 34-GKE-Ingress-Basics/kube-manifests/02-ingress-basic.yaml
================================================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-basics
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    # gce: external load balancer
    # gce-internal: internal load balancer
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80

================================================
FILE: 35-GKE-Ingress-Context-Path-Routing/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE Ingress Context Path Routing
description: Implement GCP Google Kubernetes Engine GKE Ingress Context Path Routing
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- Ingress Context Path based Routing
- Discuss the architecture we are going to build as part of this section
- We are going to deploy all 3 apps in Kubernetes with context path based routing enabled in the Ingress Controller
  - /app1/* - should go to app1-nginx-nodeport-service
  - /app2/* - should go to app2-nginx-nodeport-service
  - /* - should go to app3-nginx-nodeport-service

## Step-02: Review Nginx App1, App2 & App3 Deployment & Service
- From a Kubernetes manifests perspective, the only differences between the 3 apps are the container image and the naming conventions
- **Kubernetes Deployment:** Container Image name
  - **App1 Nginx: 01-Nginx-App1-Deployment-and-NodePortService.yaml**
    - **image:** stacksimplify/kube-nginxapp1:1.0.0
  - **App2 Nginx: 02-Nginx-App2-Deployment-and-NodePortService.yaml**
    - **image:** stacksimplify/kube-nginxapp2:1.0.0
  - **App3 Nginx: 03-Nginx-App3-Deployment-and-NodePortService.yaml**
    - **image:** stacksimplify/kubenginx:1.0.0

## Step-03: 04-Ingress-ContextPath-Based-Routing.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cpr
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
          # - path: /
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: app3-nginx-nodeport-service
          #       port:
          #         number: 80
```

## Step-04: Deploy kube-manifests and test
```t
# Deploy Kubernetes manifests
kubectl apply -f kube-manifests

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress Load Balancers
kubectl get ingress

# Describe Ingress and view Rules
kubectl describe ingress ingress-cpr
```

## Step-05: Access Application
```t
# Important Note
Wait for 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors

# Access Application
http://<ADDRESS>/app1/index.html
http://<ADDRESS>/app2/index.html
http://<ADDRESS>/
```

## Step-06: Verify Load Balancer
- Go to Load Balancing -> Click on Load balancer

### Load Balancer View
- DETAILS Tab
  - Frontend
  - Host and Path Rules
  - Backend Services
  - Health Checks
- MONITORING TAB
- CACHING TAB

### Load Balancer Components View
- FORWARDING RULES
- TARGET PROXIES
- BACKEND SERVICES
- BACKEND BUCKETS
- CERTIFICATES
- TARGET POOLS

## Step-07: Clean Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests
```

================================================
FILE: 35-GKE-Ingress-Context-Path-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: stacksimplify/kube-nginxapp1:1.0.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service
  labels:
    app: app1-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80

================================================
FILE: 35-GKE-Ingress-Context-Path-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-nginx-deployment
  labels:
    app: app2-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2-nginx
  template:
    metadata:
      labels:
        app: app2-nginx
    spec:
      containers:
        - name: app2-nginx
          image: stacksimplify/kube-nginxapp2:1.0.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app2-nginx-nodeport-service
  labels:
    app: app2-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app2-nginx
  ports:
    - port: 80
      targetPort: 80

================================================
FILE: 35-GKE-Ingress-Context-Path-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
        - name: app3-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-nodeport-service
  labels:
    app: app3-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app3-nginx
  ports:
    - port: 80
      targetPort: 80

================================================
FILE: 35-GKE-Ingress-Context-Path-Routing/kube-manifests/04-Ingress-ContextPath-Based-Routing.yaml
================================================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cpr
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
          # - path: /
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: app3-nginx-nodeport-service
          #       port:
          #         number: 80

================================================
FILE: 36-GKE-Ingress-Custom-Health-Check/README.md
================================================
---
title: GCP Google Kubernetes Engine Ingress Custom Health Check
description: Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks using Readiness Probes
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- Ingress Context Path based Routing
- Ingress Custom Health Checks for each application using Kubernetes Readiness Probes
  - **App1 Health Check Path:** /app1/index.html
  - **App2 Health Check Path:** /app2/index.html
  - **App3 Health Check Path:** /index.html

## Step-02: 01-Nginx-App1-Deployment-and-NodePortService.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: stacksimplify/kube-nginxapp1:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /app1/index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
```

## Step-03: 02-Nginx-App2-Deployment-and-NodePortService.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-nginx-deployment
  labels:
    app: app2-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2-nginx
  template:
    metadata:
      labels:
        app: app2-nginx
    spec:
      containers:
        - name: app2-nginx
          image: stacksimplify/kube-nginxapp2:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /app2/index.html
              port: 80
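            # Added note (not in the original manifest): per the GKE docs linked above,
            # the load balancer health check is inferred from this readinessProbe only
            # when the probe uses httpGet and its port matches the Service targetPort.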
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
```

## Step-04: 03-Nginx-App3-Deployment-and-NodePortService.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
        - name: app3-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
```

## Step-05: 04-Ingress-Custom-Healthcheck.yaml
- NO CHANGES FROM CONTEXT PATH ROUTING DEMO other than the Ingress resource name `ingress-custom-healthcheck`
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-custom-healthcheck
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
          # - path: /
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: app3-nginx-nodeport-service
          #       port:
          #         number: 80
```

## Step-06: Deploy kube-manifests and verify
```t
# Deploy Kubernetes manifests
kubectl apply -f kube-manifests

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress Load Balancers
kubectl get ingress

# Describe Ingress and view Rules
kubectl describe ingress ingress-custom-healthcheck
```

## Step-07: Verify Health Checks
- Go to Load Balancing -> Click on LB
  - DETAILS TAB
  - Backend services -> First Backend -> Click on Health Check Link
- OR
- Go to Compute Engine -> Instance Groups -> Health Checks
- Review all 3 Health Checks and their Paths
  - **App1 Health Check Path:** /app1/index.html
  - **App2 Health Check Path:** /app2/index.html
  - **App3 Health Check Path:** /index.html

## Step-08: Access Application
```t
# Important Note
Wait for 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors

# Access Application
http://<ADDRESS>/app1/index.html
http://<ADDRESS>/app2/index.html
http://<ADDRESS>/
```

## Step-09: Clean Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests

# Verify Load Balancer Deleted
Go to Network Services -> Load Balancing -> No Load balancers should be present
```

## References
- [GKE Ingress Healthchecks](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks)

================================================
FILE: 36-GKE-Ingress-Custom-Health-Check/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: stacksimplify/kube-nginxapp1:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /app1/index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service
  labels:
    app: app1-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80

================================================
FILE: 36-GKE-Ingress-Custom-Health-Check/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-nginx-deployment
  labels:
    app: app2-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2-nginx
  template:
    metadata:
      labels:
        app: app2-nginx
    spec:
      containers:
        - name: app2-nginx
          image: stacksimplify/kube-nginxapp2:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /app2/index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app2-nginx-nodeport-service
  labels:
    app: app2-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app2-nginx
  ports:
    - port: 80
      targetPort: 80

================================================
FILE: 36-GKE-Ingress-Custom-Health-Check/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
        - name: app3-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-nodeport-service
  labels:
    app: app3-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app3-nginx
  ports:
    - port: 80
      targetPort: 80

================================================
FILE: 36-GKE-Ingress-Custom-Health-Check/kube-manifests/04-Ingress-Custom-Healthcheck.yaml
================================================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-custom-healthcheck
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
          # - path: /
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: app3-nginx-nodeport-service
          #       port:
          #         number: 80

================================================
FILE: 37-Google-Cloud-Domains/README.md
================================================
---
title: Google Cloud Domains
description: Register Domain Name using Google Cloud Domains
---

## Step-01: Introduction
- Register a Domain Name using Google Cloud Domains

## Step-02: Register Domain
- Go to Networking Services -> Cloud Domains -> Click on **REGISTER DOMAIN**
- **Search Domain:** kalyanreddydaida.com
- Click on **SELECT**
- Click on **CONTINUE**
- **DNS CONFIGURATION**
  - **DNS Provider:** Use Cloud DNS (Recommended)
- Click on **CONTINUE**
- **Privacy protection**
  - **Privacy Protection:** Privacy Protection ON
- **Contact details**
  - Fill Contact Details
- Click on **REGISTER**

## Step-03: Review the new domain at Cloud Domains Page
- Go to Networking Services -> Cloud Domains
- Review that all details populated correctly

## Step-04: Cloud DNS
- Go to Networking Services -> Cloud DNS -> kalyanreddydaida-com
- Review all details

================================================
FILE: 38-GKE-Ingress-ExternalIP/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE Ingress with External IP
description: Implement GCP Google Kubernetes Engine GKE Ingress with External IP
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Registered Domain using Google Cloud Domains

## Step-01: Introduction
- Reserve an External IP Address
- Using the annotation `kubernetes.io/ingress.global-static-ip-name`, associate this External IP with the Ingress Service

## Step-02: Create External IP Address using gcloud
```t
# Create External IP Address
gcloud compute addresses create ADDRESS_NAME --global
gcloud compute addresses create gke-ingress-extip1 --global

# Describe External IP Address
gcloud compute addresses describe ADDRESS_NAME --global
gcloud compute addresses describe gke-ingress-extip1 --global

# List External IP Addresses
gcloud compute addresses list

# Verify
Go to VPC Network -> IP Addresses -> External IP Address
```

## Step-03: Add RECORD SET in Google Cloud DNS for this External IP
- Go to Network Services -> Cloud DNS -> kalyanreddydaida.com -> **ADD RECORD SET**
- DNS NAME: demo1.kalyanreddydaida.com
- **IPv4 Address:** the External IP reserved in Step-02
- Click on **CREATE**

## Step-04: Verify DNS resolving to IP
```t
# nslookup test
nslookup demo1.kalyanreddydaida.com

## Sample Output
Kalyans-Mac-mini:google-kubernetes-engine kalyanreddy$ nslookup demo1.kalyanreddydaida.com
Server:   192.168.2.1
Address:  192.168.2.1#53

Non-authoritative answer:
Name: demo1.kalyanreddydaida.com
Address: 34.120.32.120

Kalyans-Mac-mini:google-kubernetes-engine kalyanreddy$
```

## Step-05: 04-Ingress-external-ip.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-external-ip
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
```

## Step-06: No changes to other 3 YAML Files
-
01-Nginx-App1-Deployment-and-NodePortService.yaml - 02-Nginx-App2-Deployment-and-NodePortService.yaml - 03-Nginx-App3-Deployment-and-NodePortService.yaml ## Step-07: Deploy kube-manifests and verify ```t # Deploy Kubernetes manifests kubectl apply -f kube-manifests # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Load Balancers kubectl get ingress # Describe Ingress and view Rules kubectl describe ingress ingress-external-ip ``` ## Step-08: Access Application ```t # Important Note Wait for 2 to 3 minutes for the Load Balancer to completely create and ready for use else we will get HTTP 502 errors # Access Application http:///app1/index.html http:///app2/index.html http:/// # Replace Domain Name registered in Cloud DNS http://demo1.kalyanreddydaida.com/app1/index.html http://demo1.kalyanreddydaida.com/app2/index.html http://demo1.kalyanreddydaida.com/ ``` ## Step-09: Clean Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests # Verify Load Balancer Deleted Go to Network Services -> Load Balancing -> No Load balancers should be present ``` ================================================ FILE: 38-GKE-Ingress-ExternalIP/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app1-nginx-deployment labels: app: app1-nginx spec: replicas: 1 selector: matchLabels: app: app1-nginx template: metadata: labels: app: app1-nginx spec: containers: - name: app1-nginx image: stacksimplify/kube-nginxapp1:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app1/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app1-nginx-nodeport-service labels: app: app1-nginx annotations: spec: type: NodePort selector: app: app1-nginx ports: - 
port: 80 targetPort: 80 ================================================ FILE: 38-GKE-Ingress-ExternalIP/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app2-nginx-deployment labels: app: app2-nginx spec: replicas: 1 selector: matchLabels: app: app2-nginx template: metadata: labels: app: app2-nginx spec: containers: - name: app2-nginx image: stacksimplify/kube-nginxapp2:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app2/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app2-nginx-nodeport-service labels: app: app2-nginx annotations: spec: type: NodePort selector: app: app2-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 38-GKE-Ingress-ExternalIP/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service labels: app: app3-nginx annotations: spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 38-GKE-Ingress-ExternalIP/kube-manifests/04-Ingress-external-ip.yaml 
================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-external-ip annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - http: paths: - path: /app1 pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - path: /app2 pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ================================================ FILE: 39-GKE-Ingress-Google-Managed-SSL/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Ingress SSL description: Implement GCP Google Kubernetes Engine GKE Ingress SSL with Google Managed Certificates --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. Registered Domain using Google Cloud Domains 4. DNS name for which SSL Certificate should be created should already be added as DNS in Google Cloud DNS (Example: demo1.kalyanreddydaida.com) ## Step-01: Introduction - Google Managed Certificates for GKE Ingress - Ingress SSL - Certificate Validity: 90 days - 30 days before expiry google starts renewal process. We dont need to worry about it. - **Important Note:** Google-managed certificates are only supported with GKE Ingress using the external HTTP(S) load balancer. Google-managed certificates do not support third-party Ingress controllers. 
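Once the certificate is serving traffic, the 90-day validity window can be confirmed directly from the endpoint. The helper below is not part of this demo's kube-manifests; it is a small sketch that assumes `openssl` is installed locally and that the Ingress endpoint is already live on port 443.

```shell
#!/bin/bash
# Sketch: inspect the certificate Google serves for a domain and print its expiry.
# Assumes openssl is available; fetch_cert needs network access to the live endpoint.

# Print the expiry (notAfter) of a PEM certificate read from stdin
cert_enddate() {
  openssl x509 -noout -enddate
}

# Fetch the live certificate presented for a host on port 443
fetch_cert() {
  local host="$1"
  # "echo |" closes the TLS session immediately after the handshake
  echo | openssl s_client -servername "$host" -connect "$host:443" 2>/dev/null |
    openssl x509 -outform PEM
}

# Example usage (replace with your own domain):
# fetch_cert demo1.kalyanreddydaida.com | cert_enddate
```

For a Google-managed certificate the printed `notAfter` date should be roughly 90 days out, and it should move forward automatically as Google renews the certificate.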
## Step-02: kube-manifests - NO CHANGES
- 01-Nginx-App1-Deployment-and-NodePortService.yaml
- 02-Nginx-App2-Deployment-and-NodePortService.yaml
- 03-Nginx-App3-Deployment-and-NodePortService.yaml

## Step-03: 05-Managed-Certificate.yaml
- **Pre-requisite-1:** Registered Domain using Google Cloud Domains
- **Pre-requisite-2:** The DNS name for which the SSL Certificate should be created should already be added as a record set in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)
```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert-for-ingress
spec:
  domains:
    - demo1.kalyanreddydaida.com
```

## Step-04: 04-Ingress-SSL.yaml
- Add the annotation `networking.gke.io/managed-certificates` to the Ingress with the Managed Certificate name.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ssl
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
    # Google Managed SSL Certificates
    networking.gke.io/managed-certificates: managed-cert-for-ingress
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
```

## Step-06: Deploy kube-manifests and Verify
```t
# Deploy Kubernetes manifests
kubectl apply -f kube-manifests

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress Load Balancers
kubectl get ingress

# Describe Ingress and view Rules
kubectl describe ingress ingress-ssl
```

## Step-07: Verify Managed Certificates
```t
# List Managed Certificates
kubectl get managedcertificate

# Describe Managed Certificate
kubectl describe managedcertificate managed-cert-for-ingress
Observation:
1. Wait for the Google-managed certificate to finish provisioning.
2. This might take up to 60 minutes.
3. The status of the certificate should change from PROVISIONING to ACTIVE
demo1.kalyanreddydaida.com: PROVISIONING

# List Certificates
gcloud compute ssl-certificates list
```

## Step-08: Verify SSL Certificates from Certificate Tab in Load Balancer
### Load Balancers Component View
- View in **Load Balancers Component View**
- Click on **CERTIFICATES** tab
### Load Balancers View
- Review the FRONTEND with HTTPS Protocol and the associated Certificate

## Step-09: Access Application
```t
# Important Note
Wait 2 to 3 minutes for the Load Balancer to be fully created and ready for use; otherwise we will get HTTP 502 errors

# Access Application
http://<Load-Balancer-IP>/app1/index.html
http://<Load-Balancer-IP>/app2/index.html
http://<Load-Balancer-IP>/

# Note: Replace with the Domain Name registered in Cloud DNS
# HTTP URLs
http://demo1.kalyanreddydaida.com/app1/index.html
http://demo1.kalyanreddydaida.com/app2/index.html
http://demo1.kalyanreddydaida.com/

# HTTPS URLs
https://demo1.kalyanreddydaida.com/app1/index.html
https://demo1.kalyanreddydaida.com/app2/index.html
https://demo1.kalyanreddydaida.com/
```

## References
- https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
- https://cloud.google.com/load-balancing/docs/ssl-certificates/troubleshooting
- https://github.com/GoogleCloudPlatform/gke-managed-certs


================================================
FILE: 39-GKE-Ingress-Google-Managed-SSL/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: stacksimplify/kube-nginxapp1:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /app1/index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service
  labels:
    app: app1-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80


================================================
FILE: 39-GKE-Ingress-Google-Managed-SSL/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-nginx-deployment
  labels:
    app: app2-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2-nginx
  template:
    metadata:
      labels:
        app: app2-nginx
    spec:
      containers:
        - name: app2-nginx
          image: stacksimplify/kube-nginxapp2:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /app2/index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app2-nginx-nodeport-service
  labels:
    app: app2-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app2-nginx
  ports:
    - port: 80
      targetPort: 80


================================================
FILE: 39-GKE-Ingress-Google-Managed-SSL/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
        - name: app3-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-nodeport-service
  labels:
    app: app3-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app3-nginx
  ports:
    - port: 80
      targetPort: 80


================================================
FILE: 39-GKE-Ingress-Google-Managed-SSL/kube-manifests/04-Ingress-SSL.yaml
================================================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ssl
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
    # Google Managed SSL Certificates
    networking.gke.io/managed-certificates: managed-cert-for-ingress
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80


================================================
FILE: 39-GKE-Ingress-Google-Managed-SSL/kube-manifests/05-Managed-Certificate.yaml
================================================
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert-for-ingress
spec:
  domains:
    - demo1.kalyanreddydaida.com


================================================
FILE: 40-GKE-Ingress-Google-Managed-SSL-Redirect/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE Ingress SSL Redirect
description: Implement GCP Google Kubernetes Engine GKE Ingress SSL Redirect
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Registered Domain using Google Cloud Domains
4. The DNS name for which the SSL Certificate should be created should already be added as a record set in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)

## Step-01: Introduction
- Google Managed Certificates for GKE Ingress
- Ingress SSL
- Ingress SSL Redirect (HTTP to HTTPS)

## Step-02: 06-frontendconfig.yaml
```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    #responseCodeName: RESPONSE_CODE
```

## Step-03: 04-Ingress-SSL.yaml
- Add the annotation `networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"`
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ssl
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
    # Google Managed SSL Certificates
    networking.gke.io/managed-certificates: managed-cert-for-ingress
    # SSL Redirect HTTP to HTTPS
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
```

## Step-04: Deploy kube-manifests and Verify
- From the previous `Ingress SSL` demo we didn't clean up those Kubernetes resources.
- We are going to reuse them here; compared to the previous demo, we are just adding `06-frontendconfig.yaml`
```t
# Deploy Kubernetes manifests
kubectl apply -f kube-manifests
Observation:
1. Only "my-frontend-config" will be created; the rest are unchanged

### Sample Output
Kalyans-Mac-mini:38-GKE-Ingress-Google-Managed-SSL-Redirect kalyanreddy$ kubectl apply -f kube-manifests/
deployment.apps/app1-nginx-deployment unchanged
service/app1-nginx-nodeport-service unchanged
deployment.apps/app2-nginx-deployment unchanged
service/app2-nginx-nodeport-service unchanged
deployment.apps/app3-nginx-deployment unchanged
service/app3-nginx-nodeport-service unchanged
ingress.networking.k8s.io/ingress-ssl configured
managedcertificate.networking.gke.io/managed-cert-for-ingress unchanged
frontendconfig.networking.gke.io/my-frontend-config created
Kalyans-Mac-mini:38-GKE-Ingress-Google-Managed-SSL-Redirect kalyanreddy$

# List FrontendConfigs
kubectl get frontendconfig

# Describe FrontendConfig
kubectl describe frontendconfig my-frontend-config

# List Ingress Load Balancers
kubectl get ingress

# Describe Ingress and view Rules
kubectl describe ingress ingress-ssl
```

## Step-05: Access Application
```t
# Important Note
Wait 2 to 3 minutes for the Load Balancer to be fully created and ready for use; otherwise we will get HTTP 502 errors

# Access Application
http://<Load-Balancer-IP>/app1/index.html
http://<Load-Balancer-IP>/app2/index.html
http://<Load-Balancer-IP>/

# Note: Replace with the Domain Name registered in Cloud DNS
# HTTP URLs: Should redirect to HTTPS URLs
http://demo1.kalyanreddydaida.com/app1/index.html
http://demo1.kalyanreddydaida.com/app2/index.html
http://demo1.kalyanreddydaida.com/

# HTTPS URLs
https://demo1.kalyanreddydaida.com/app1/index.html
https://demo1.kalyanreddydaida.com/app2/index.html
https://demo1.kalyanreddydaida.com/
```

## Step-06: Clean Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests

# Verify Load Balancer Deleted
Go to Network Services -> Load Balancing -> No Load Balancers should be present
```


================================================
FILE: 40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: stacksimplify/kube-nginxapp1:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /app1/index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service
  labels:
    app: app1-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80


================================================
FILE: 40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-nginx-deployment
  labels:
    app: app2-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2-nginx
  template:
    metadata:
      labels:
        app: app2-nginx
    spec:
      containers:
        - name: app2-nginx
          image: stacksimplify/kube-nginxapp2:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /app2/index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app2-nginx-nodeport-service
  labels:
    app: app2-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app2-nginx
  ports:
    - port: 80
      targetPort: 80


================================================
FILE: 40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
        - name: app3-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /index.html
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-nodeport-service
  labels:
    app: app3-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app3-nginx
  ports:
    - port: 80
      targetPort: 80


================================================
FILE: 40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/04-Ingress-SSL.yaml
================================================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ssl
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
    # Google Managed SSL Certificates
    networking.gke.io/managed-certificates: managed-cert-for-ingress
    # SSL Redirect HTTP to HTTPS
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80


================================================
FILE: 40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/05-Managed-Certificate.yaml
================================================
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert-for-ingress
spec:
  domains:
    - demo1.kalyanreddydaida.com


================================================
FILE: 40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/06-frontendconfig.yaml
================================================
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    #responseCodeName: RESPONSE_CODE


================================================
FILE: 41-GKE-Workload-Identity/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE Workload Identity
description: Implement GCP Google Kubernetes Engine GKE Workload Identity
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
1. Create GCP IAM Service Account
2. Add IAM Roles to GCP IAM Service Account (add-iam-policy-binding)
3. Create Kubernetes Namespace
4. Create Kubernetes Service Account
5. Associate GCP IAM Service Account with Kubernetes Service Account (gcloud iam service-accounts add-iam-policy-binding)
6. Annotate Kubernetes Service Account with GCP IAM SA email address (kubectl annotate serviceaccount)
7. Create a Sample App with and without Kubernetes Service Account
8.
Test Workload Identity in GKE Cluster ## Step-02: Verify if Workload Identity Setting is enabled for GKE Cluster - Go to Kubernetes Engine -> Clusters -> standard-cluster-private-1 -> DETAILS Tab - In Security -> Workload Identity -> SHOULD BE IN ENABLED STATE ## Step-03: Create GCP IAM Service Account ```t # List IAM Service Accounts gcloud iam service-accounts list # List Google Cloud Projects gcloud projects list Observation: 1. Get the PROJECT_ID for your current project 2. Replace GSA_PROJECT_ID with PROJECT_ID for your current project # Create GCP IAM Service Account gcloud iam service-accounts create GSA_NAME --project=GSA_PROJECT_ID GSA_NAME: the name of the new IAM service account. GSA_PROJECT_ID: the project ID of the Google Cloud project for your IAM service account. GSA_PROJECT==PROJECT_ID # Replace GSA_NAME and GSA_PROJECT gcloud iam service-accounts create wid-gcpiam-sa --project=kdaida123 # List IAM Service Accounts gcloud iam service-accounts list ``` ## Step-04: Add IAM Roles to GCP IAM Service Account - We are giving `"roles/compute.viewer"` permissions to IAM Service Account. - From Kubernetes Pod, we are going to list the compute instances. - With the help of the `Google IAM Service account` and `Kubernetes Service Account`, access for Kubernetes Pod from GKE cluster should be successful for listing the google computing instances. ```t # Add IAM Roles to GCP IAM Service Account gcloud projects add-iam-policy-binding PROJECT_ID \ --member "serviceAccount:GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com" \ --role "ROLE_NAME" PROJECT_ID: your Google Cloud project ID. GSA_NAME: the name of your IAM service account. GSA_PROJECT_ID: the project ID of the Google Cloud project of your IAM service account. GSA_PROJECT_ID==PROJECT_ID ROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer. 
# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT_ID, ROLE_NAME gcloud projects add-iam-policy-binding kdaida123 \ --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \ --role "roles/compute.viewer" ``` ## Step-05: Create Kubernetes Namepsace and Service Account ```t # Create Kubernetes Namespace kubectl create namespace kubectl create namespace wid-kns # Create Service Account kubectl create serviceaccount --namespace kubectl create serviceaccount wid-ksa --namespace wid-kns ``` ## Step-06: Associate GCP IAM Service Account with Kubernetes Service Account - Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts. - This binding allows the Kubernetes service account to act as the IAM service account. ```t # Associate GCP IAM Service Account with Kubernetes Service Account gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]" # Replace GSA_NAME, GSA_PROJECT_ID, PROJECT_ID, NAMESPACE, KSA_NAME gcloud iam service-accounts add-iam-policy-binding wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:kdaida123.svc.id.goog[wid-kns/wid-ksa]" ``` ## Step-07: Annotate Kubernetes Service Account with GCP IAM SA email Address - Annotate the Kubernetes service account with the email address of the IAM service account. 
```t # Annotate Kubernetes Service Account with GCP IAM SA email Address kubectl annotate serviceaccount KSA_NAME \ --namespace NAMESPACE \ iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com # Replace KSA_NAME, NAMESPACE, GSA_NAME, GSA_PROJECT_ID kubectl annotate serviceaccount wid-ksa \ --namespace wid-kns \ iam.gke.io/gcp-service-account=wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com # Describe Kubernetes Service Account kubectl describe sa wid-ksa -n wid-kns ``` ## Step-08: 01-wid-demo-pod-without-sa.yaml ```yaml apiVersion: v1 kind: Pod metadata: name: wid-demo-without-sa namespace: wid-kns spec: containers: - image: google/cloud-sdk:slim name: wid-demo-without-sa command: ["sleep","infinity"] #serviceAccountName: wid-ksa nodeSelector: iam.gke.io/gke-metadata-server-enabled: "true" ``` ## Step-09: 02-wid-demo-pod-with-sa.yaml - **Important Note:** For Autopilot clusters, omit the nodeSelector field. Autopilot rejects this nodeSelector because all nodes use Workload Identity. 
```yaml apiVersion: v1 kind: Pod metadata: name: wid-demo-with-sa namespace: wid-kns spec: containers: - image: google/cloud-sdk:slim name: wid-demo-with-sa command: ["sleep","infinity"] serviceAccountName: wid-ksa nodeSelector: iam.gke.io/gke-metadata-server-enabled: "true" ``` ## Step-10: Deploy Kubernetes Manifests and Verify ```t # Deploy kube-manifests kubectl apply -f kube-manifests # List Pods kubectl -n wid-kns get pods ``` ## Step-11: Verify from Workload Identity without Service Account Pod ```t # Connect to Pod kubectl -n wid-kns exec -it wid-demo-without-sa -- /bin/bash # Default Service Account Pod is using currently gcloud auth list Observation: It chose the default account ## Sample Output root@wid-demo-without-sa:/# gcloud auth list Credentialed Accounts ACTIVE ACCOUNT * kdaida123.svc.id.goog To set the active account, run: $ gcloud config set account `ACCOUNT` root@wid-demo-without-sa:/# # List Compute Instances from workload-identity-demo pod gcloud compute instances list ## Sample Output root@wid-demo-without-sa:/# gcloud compute instances list ERROR: (gcloud.compute.instances.list) Some requests did not succeed: - Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project. 
root@wid-demo-without-sa:/#

# Exit the container terminal
exit
```
## Step-12: Verify from Workload Identity with Service Account Pod
```t
# Connect to Pod
kubectl -n wid-kns exec -it wid-demo-with-sa -- /bin/bash

# Check which account the Pod is currently using
gcloud auth list

## Sample Output
root@wid-demo-with-sa:/# gcloud auth list
          Credentialed Accounts
ACTIVE  ACCOUNT
*       wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com

To set the active account, run:
    $ gcloud config set account `ACCOUNT`

root@wid-demo-with-sa:/#

# List Compute Instances from workload-identity-demo pod
gcloud compute instances list

## Sample Output
root@wid-demo-with-sa:/# gcloud compute instances list
NAME                                                 ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP  STATUS
gke-standard-cluster-priva-new-pool-2-7c9415e8-5cds  us-central1-c  g1-small      true         10.128.15.235               RUNNING
gke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz  us-central1-c  g1-small      true         10.128.0.8                  RUNNING
gke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6  us-central1-c  g1-small      true         10.128.0.2                  RUNNING
root@wid-demo-with-sa:/#
```
## Step-13: Negative Usecase: Test access to Cloud DNS Record Sets
```t
# gcloud list DNS Records
gcloud dns record-sets list --zone=kalyanreddydaida-com
Observation:
1. GCP IAM Service Account "wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" doesn't have any Cloud DNS roles assigned, so we get an HTTP 403

## Sample Output
root@wid-demo-with-sa:/# gcloud dns record-sets list --zone=kalyanreddydaida-com
ERROR: (gcloud.dns.record-sets.list) HTTPError 403: Forbidden
root@wid-demo-with-sa:/#

# Exit the container terminal
exit
```
## Step-14: Give Cloud DNS Admin Role to GCP IAM Service Account wid-gcpiam-sa
```t
# Add IAM Roles to GCP IAM Service Account
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com" \
    --role "ROLE_NAME"
PROJECT_ID: your Google Cloud project ID.
GSA_NAME: the name of your IAM service account.
GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account. ROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer. GSA_PROJECT==PROJECT_ID # Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME gcloud projects add-iam-policy-binding kdaida123 \ --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \ --role "roles/dns.admin" ``` ## Step-15: Verify from Workload Identity with Service Account Pod ```t # Connect to Pod kubectl -n wid-kns exec -it wid-demo-with-sa -- /bin/bash # List Cloud DNS Record Sets gcloud dns record-sets list --zone=kalyanreddydaida-com ### Sample Output root@wid-demo-with-sa:/# gcloud dns record-sets list --zone=kalyanreddydaida-com NAME TYPE TTL DATA kalyanreddydaida.com. NS 21600 ns-cloud-a1.googledomains.com.,ns-cloud-a2.googledomains.com.,ns-cloud-a3.googledomains.com.,ns-cloud-a4.googledomains.com. kalyanreddydaida.com. SOA 21600 ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300 demo1.kalyanreddydaida.com. A 300 34.120.32.120 root@wid-demo-with-sa:/# # List Compute Instances from workload-identity-demo pod gcloud compute instances list ## Sample Output root@wid-demo-with-sa:/# gcloud compute instances list NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS gke-standard-cluster-priva-new-pool-2-7c9415e8-5cds us-central1-c g1-small true 10.128.15.235 RUNNING gke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz us-central1-c g1-small true 10.128.0.8 RUNNING gke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6 us-central1-c g1-small true 10.128.0.2 RUNNING root@wid-demo-with-sa:/# # Exit the container terminal exit ``` ## Step-16: Clean-Up Kubernetes Resources ```t # Delete Kubernetes Pods kubectl delete -f kube-manifests # List Namespaces kubectl get ns # Delete Kubernetes Namespace kubectl delete ns wid-kns Observation: 1. 
Kubernetes Service Account "wid-ksa" will get automatically deleted when that namespace is deleted
```
## Step-17: Clean-Up GCP IAM Resources
```t
# List GCP IAM Service Accounts
gcloud iam service-accounts list

# Remove IAM Roles from GCP IAM Service Account
gcloud projects remove-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com" \
    --role "ROLE_NAME"
PROJECT_ID: your Google Cloud project ID.
GSA_NAME: the name of your IAM service account.
GSA_PROJECT_ID: the project ID of the Google Cloud project of your IAM service account. GSA_PROJECT_ID==PROJECT_ID
ROLE_NAME: the IAM role to remove from your service account, like roles/spanner.viewer.

# REMOVE ROLE: COMPUTE VIEWER: Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME
gcloud projects remove-iam-policy-binding kdaida123 \
    --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \
    --role "roles/compute.viewer"

# REMOVE ROLE: DNS ADMIN: Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME
gcloud projects remove-iam-policy-binding kdaida123 \
    --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \
    --role "roles/dns.admin"

# Delete the GCP IAM Service Account we created
gcloud iam service-accounts delete wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com --project=kdaida123
```
## References
- [GKE - Use Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)

================================================
FILE: 41-GKE-Workload-Identity/kube-manifests/01-wid-demo-pod-without-sa.yaml
================================================
apiVersion: v1
kind: Pod
metadata:
  name: wid-demo-without-sa
  namespace: wid-kns
spec:
  containers:
  - image: google/cloud-sdk:slim
    name: wid-demo-without-sa
    command: ["sleep","infinity"]
  #serviceAccountName: wid-ksa
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: "true"

================================================
FILE:
41-GKE-Workload-Identity/kube-manifests/02-wid-demo-pod-with-sa.yaml ================================================ apiVersion: v1 kind: Pod metadata: name: wid-demo-with-sa namespace: wid-kns spec: containers: - image: google/cloud-sdk:slim name: wid-demo-with-sa command: ["sleep","infinity"] serviceAccountName: wid-ksa nodeSelector: iam.gke.io/gke-metadata-server-enabled: "true" ================================================ FILE: 42-GKE-ExternalDNS-Install/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE External DNS Install description: Implement GCP Google Kubernetes Engine GKE External DNS Install --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction 1. Create GCP IAM Service Account: external-dns-gsa 2. Add IAM Roles to GCP IAM Service Account (add-iam-policy-binding) 3. Create Kubernetes Namespace: external-dns-ns 4. Create Kubernetes Service Account: external-dns-ksa 5. Associate GCP IAM Service Account with Kubernetes Service Account (gcloud iam service-accounts add-iam-policy-binding) 6. Annotate Kubernetes Service Account with GCP IAM SA email Address (kubectl annotate serviceaccount) 7. Install Helm CLI on your local desktop (if not installed) 8. Install External-DNS using Helm 9. Verify External-DNS Logs 10. 
Additional Reference: Install [ExternalDNS Controller using Helm](https://github.com/kubernetes-sigs/external-dns)

## Step-03: Create GCP IAM Service Account
```t
# List IAM Service Accounts
gcloud iam service-accounts list

# Create GCP IAM Service Account
gcloud iam service-accounts create GSA_NAME --project=GSA_PROJECT
GSA_NAME: the name of the new IAM service account.
GSA_PROJECT: the project ID of the Google Cloud project for your IAM service account.

# Replace GSA_NAME and GSA_PROJECT
gcloud iam service-accounts create external-dns-gsa --project=kdaida123

# List IAM Service Accounts
gcloud iam service-accounts list
```
## Step-04: Add IAM Roles to GCP IAM Service Account
```t
# Add IAM Roles to GCP IAM Service Account
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com" \
    --role "ROLE_NAME"
PROJECT_ID: your Google Cloud project ID.
GSA_NAME: the name of your IAM service account.
GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account.
ROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer.

# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME
gcloud projects add-iam-policy-binding kdaida123 \
    --member "serviceAccount:external-dns-gsa@kdaida123.iam.gserviceaccount.com" \
    --role "roles/dns.admin"
```
## Step-05: Create Kubernetes Namespace and Kubernetes Service Account
```t
# Create Kubernetes Namespace
kubectl create namespace
kubectl create namespace external-dns-ns

# List Namespaces
kubectl get ns

# Create Service Account
kubectl create serviceaccount --namespace
kubectl create serviceaccount external-dns-ksa --namespace external-dns-ns

# List Service Accounts
kubectl -n external-dns-ns get sa
```
## Step-06: Associate GCP IAM Service Account with Kubernetes Service Account
- Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts.
- This binding allows the Kubernetes service account to act as the IAM service account. ```t # Associate GCP IAM Service Account with Kubernetes Service Account gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]" # Replace GSA_NAME, GSA_PROJECT, PROJECT_ID, NAMESPACE, KSA_NAME gcloud iam service-accounts add-iam-policy-binding external-dns-gsa@kdaida123.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:kdaida123.svc.id.goog[external-dns-ns/external-dns-ksa]" ``` ## Step-07: Annotate Kubernetes Service Account with GCP IAM SA email Address - Annotate the Kubernetes service account with the email address of the IAM service account. ```t # Annotate Kubernetes Service Account with GCP IAM SA email Address kubectl annotate serviceaccount KSA_NAME \ --namespace NAMESPACE \ iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com # Replace KSA_NAME, NAMESPACE, GSA_NAME, GSA_PROJECT kubectl annotate serviceaccount external-dns-ksa \ --namespace external-dns-ns \ iam.gke.io/gcp-service-account=external-dns-gsa@kdaida123.iam.gserviceaccount.com # Describe Kubernetes Service Account kubectl -n external-dns-ns describe sa external-dns-ksa ``` ## Step-08: Install Helm Client on Local Desktop - [Install Helm](https://helm.sh/docs/intro/install/) ```t # Install Helm brew install helm # Verify Helm version helm version ``` ## Step-09: Review external-dns values.yaml - [external-dns values.yaml](https://github.com/kubernetes-sigs/external-dns/blob/master/charts/external-dns/values.yaml) - [external-dns Configuration](https://github.com/kubernetes-sigs/external-dns/tree/master/charts/external-dns#configuration) ## Step-10: Review external-dns Deployment Configs ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: external-dns --- apiVersion: 
rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: external-dns rules: - apiGroups: [""] resources: ["services","endpoints","pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions","networking.k8s.io"] resources: ["ingresses"] verbs: ["get","watch","list"] - apiGroups: [""] resources: ["nodes"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: external-dns-viewer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: external-dns subjects: - kind: ServiceAccount name: external-dns namespace: default --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: serviceAccountName: external-dns containers: - name: external-dns image: k8s.gcr.io/external-dns/external-dns:v0.8.0 args: - --source=service - --source=ingress - --domain-filter=external-dns-test.gcp.zalan.do # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones - --provider=google # - --google-project=zalando-external-dns-test # Use this to specify a project different from the one external-dns is running inside - --google-zone-visibility=private # Use this to filter to only zones with this visibility. Set to either 'public' or 'private'. 
Omitting will match public and private zones - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization - --registry=txt - --txt-owner-id=my-identifier ``` ## Step-11: Install external-dns using Helm ```t # Add external-dns repo to Helm helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/ # Install Helm Chart helm upgrade --install external-dns external-dns/external-dns \ --set provider=google \ --set policy=sync \ --set google-zone-visibility=public \ --set txt-owner-id=k8s \ --set serviceAccount.create=false \ --set serviceAccount.name=external-dns-ksa \ -n external-dns-ns # Optional Setting (Important Note: will make ExternalDNS see only the Cloud DNS zones matching provided domain, omit to process all available Cloud DNS zones) --set domain-filter=kalyanreddydaida.com \ ``` ## Step-12: Verify external-dns deployment ```t # List Helm helm list -n external-dns-ns # List Kubernetes Service Account kubectl -n external-dns-ns get sa # Describe Kubernetes Service Account kubectl -n external-dns-ns describe sa external-dns-ksa # List All resources from default Namespace kubectl -n external-dns-ns get all # List pods (external-dns pod should be in running state) kubectl -n external-dns-ns get pods # Verify Deployment by checking logs kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') [or] kubectl -n external-dns-ns get pods kubectl -n external-dns-ns logs -f ``` ## References - https://github.com/kubernetes-sigs/external-dns/tree/master/charts/external-dns - https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/gke.md ## External-DNS Logs from Reference ```log W0624 07:14:15.829747 14199 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. 
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke Error from server (BadRequest): container "external-dns" in pod "external-dns-6f49549d96-2jd5q" is waiting to start: ContainerCreating Kalyans-Mac-mini:48-GKE-Ingress-IAP kalyanreddy$ kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') W0624 07:14:23.520269 14201 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke W0624 07:14:24.512312 14203 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke time="2022-06-24T01:44:18Z" level=info msg="config: {APIServerURL: KubeConfig: RequestTimeout:30s DefaultTargets:[] ContourLoadBalancerService:heptio-contour/contour GlooNamespace:gloo-system SkipperRouteGroupVersion:zalando.org/v1 Sources:[service ingress] Namespace: AnnotationFilter: LabelFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false IgnoreIngressTLSSpec:false IgnoreIngressRulesSpec:false Compatibility: PublishInternal:false PublishHostIP:false AlwaysPublishNotReadyAddresses:false ConnectorSourceServer:localhost:8080 Provider:google GoogleProject: GoogleBatchChangeSize:1000 GoogleBatchChangeInterval:1s GoogleZoneVisibility: DomainFilter:[] ExcludeDomains:[] RegexDomainFilter: RegexDomainExclusion: ZoneNameFilter:[] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AWSPreferCNAME:false AWSZoneCacheDuration:0s AWSSDServiceCleanup:false 
AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: AzureSubscriptionID: AzureUserAssignedIdentityClientID: BluecatDNSConfiguration: BluecatConfigFile:/etc/kubernetes/bluecat.json BluecatDNSView: BluecatGatewayHost: BluecatRootZone: BluecatDNSServerName: BluecatDNSDeployType:no-deploy BluecatSkipTLSVerify:false CloudflareProxied:false CloudflareZonesPerPage:50 CoreDNSPrefix:/skydns/ RcodezeroTXTEncrypt:false AkamaiServiceConsumerDomain: AkamaiClientToken: AkamaiClientSecret: AkamaiAccessToken: AkamaiEdgercPath: AkamaiEdgercSection: InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 InfobloxFQDNRegEx: InfobloxCreatePTR:false InfobloxCacheDuration:0 DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] OVHEndpoint:ovh-eu OVHApiRateLimit:20 PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:sync Registry:txt TXTOwnerID:default TXTPrefix: TXTSuffix: Interval:1m0s MinEventSyncInterval:5s Once:false DryRun:false UpdateEvents:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s TXTWildcardReplacement: ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136GSSTSIG:false RFC2136KerberosRealm: RFC2136KerberosUsername: RFC2136KerberosPassword: RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false RFC2136MinTTL:0s RFC2136BatchChangeSize:50 NS1Endpoint: NS1IgnoreSSL:false NS1MinTTLSeconds:0 TransIPAccountName: TransIPPrivateKeyFile: DigitalOceanAPIPageSize:50 ManagedDNSRecordTypes:[A CNAME] GoDaddyAPIKey: GoDaddySecretKey: GoDaddyTTL:0 GoDaddyOTE:false 
OCPRouterName:}" time="2022-06-24T01:44:18Z" level=info msg="Instantiating new Kubernetes client" time="2022-06-24T01:44:18Z" level=info msg="Using inCluster-config based on serviceaccount-token" time="2022-06-24T01:44:18Z" level=info msg="Created Kubernetes client https://10.104.0.1:443" time="2022-06-24T01:44:18Z" level=info msg="Google project auto-detected: kdaida123" time="2022-06-24T01:44:23Z" level=error msg="Get \"https://dns.googleapis.com/dns/v1/projects/kdaida123/managedZones?alt=json&prettyPrint=false\": compute: Received 403 `Unable to generate access token; IAM returned 403 Forbidden: The caller does not have permission\nThis error could be caused by a missing IAM policy binding on the target IAM service account.\nFor more information, refer to the Workload Identity documentation:\n\thttps://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to\n\n`" ``` ================================================ FILE: 43-GKE-ExternalDNS-Ingress-Demo/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Ingress with External DNS description: Implement GCP Google Kubernetes Engine GKE Ingress with External DNS --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. External DNS Controller Installed ## Step-01: Introduction - Ingress with External DNS - We are going to use the Annotation `external-dns.alpha.kubernetes.io/hostname` in Ingress Service. 
- DNS Recordsets will be automatically added to Google Cloud DNS using external-dns controller when Ingress Service deployed ## Step-02: 01-Nginx-App3-Deployment-and-NodePortService.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ``` ## Step-03: 02-ingress-external-dns.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-externaldns-demo annotations: # If the class annotation is not specified it defaults to "gce". # gce: external load balancer # gce-internal: internal load balancer kubernetes.io/ingress.class: "gce" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: ingressextdns101.kalyanreddydaida.com spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 ``` ## Step-04: Deploy Kubernetes Manifests and Verify ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Services kubectl get ingress # Describe Ingress Service kubectl describe ingress ingress-externaldns-demo # Verify external-dns Controller logs kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') [or] kubectl -n external-dns-ns get pods kubectl -n external-dns-ns logs -f # Verify Cloud DNS 1. 
Go to Network Services -> Cloud DNS -> kalyanreddydaida-com
2. Verify Record sets; the DNS Name we added in the Ingress Service should be present

# Access Application
http://
http://ingressextdns101.kalyanreddydaida.com
```
## Step-05: Delete kube-manifests
```t
# Delete Kubernetes Objects
kubectl delete -f kube-manifests/

# Verify external-dns Controller logs
kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')
[or]
kubectl -n external-dns-ns get pods
kubectl -n external-dns-ns logs -f

# Verify Cloud DNS
1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com
2. Verify Record sets; the DNS Name we added in the Ingress Service should not be present (already deleted)
```

================================================
FILE: 43-GKE-ExternalDNS-Ingress-Demo/kube-manifests/01-Nginx-App3-Deployment-and-NodePortService.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
      - name: app3-nginx
        image: stacksimplify/kubenginx:1.0.0
        ports:
        - containerPort: 80
        # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /index.html
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 15
          timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-nodeport-service
spec:
  type: NodePort
  selector:
    app: app3-nginx
  ports:
  - port: 80
    targetPort: 80

================================================
FILE: 43-GKE-ExternalDNS-Ingress-Demo/kube-manifests/02-ingress-external-dns.yaml
================================================
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-externaldns-demo
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    # gce: external load balancer
    # gce-internal: internal load balancer
    kubernetes.io/ingress.class: "gce"
    # External DNS - For creating a Record Set in Google Cloud - Cloud DNS
    external-dns.alpha.kubernetes.io/hostname: ingressextdns101.kalyanreddydaida.com
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80

================================================
FILE: 44-GKE-ExternalDNS-Service-Demo/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE Service with External DNS
description: Implement GCP Google Kubernetes Engine GKE Service with External DNS
---
## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials --region --project

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. External DNS Controller Installed

## Step-01: Introduction
- Kubernetes Service of Type Load Balancer with External DNS
- We are going to use the Annotation `external-dns.alpha.kubernetes.io/hostname` in the Kubernetes Service.
- DNS Record sets will be automatically added to Google Cloud DNS by the external-dns controller when the Kubernetes Service is deployed

## Step-02: 01-kubernetes-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: #Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key value pairs
    spec:
      containers: # List
      - name: myapp1-container
        image: stacksimplify/kubenginx:1.0.0
        ports:
        - containerPort: 80
```
## Step-03: 02-kubernetes-loadbalancer-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
  annotations:
    # External DNS - For creating a Record Set in Google Cloud - Cloud DNS
    external-dns.alpha.kubernetes.io/hostname: extdns-k8s-svc-demo.kalyanreddydaida.com
spec:
  type: LoadBalancer # ClusterIp, # NodePort
  selector:
    app: myapp1
  ports:
  - name: http
    port: 80 # Service Port
    targetPort: 80 # Container Port
```
## Step-05: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify external-dns Controller logs
kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')
[or]
kubectl -n external-dns-ns get pods
kubectl -n external-dns-ns logs -f

# Verify Cloud DNS
1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com
2. Verify Record sets; the DNS Name we added in the Kubernetes Service should be present

# Access Application
http://
http://extdns-k8s-svc-demo.kalyanreddydaida.com
```
## Step-06: Delete kube-manifests
```t
# Delete Kubernetes Objects
kubectl delete -f kube-manifests/

# Verify external-dns Controller logs
kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')
[or]
kubectl -n external-dns-ns get pods
kubectl -n external-dns-ns logs -f

# Verify Cloud DNS
1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com
2. Verify Record sets; the DNS Name we added in the Kubernetes Service should not be present (already deleted)
```

================================================
FILE: 44-GKE-ExternalDNS-Service-Demo/kube-manifests/01-kubernetes-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata: #Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key value pairs
    spec:
      containers: # List
      - name: myapp1-container
        image: stacksimplify/kubenginx:1.0.0
        ports:
        - containerPort: 80

================================================
FILE: 44-GKE-ExternalDNS-Service-Demo/kube-manifests/02-kubernetes-loadbalancer-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
  annotations:
    # External DNS - For creating a Record Set in Google Cloud - Cloud DNS
    external-dns.alpha.kubernetes.io/hostname: extdns-k8s-svc-demo.kalyanreddydaida.com
spec:
  type: LoadBalancer # ClusterIp, # NodePort
  selector:
    app: myapp1
  ports:
  - name: http
    port: 80 # Service Port
    targetPort: 80 # Container Port

================================================
FILE: 45-GKE-Ingress-NameBasedVhost-Routing/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE Ingress Namebased
Virtual Host Routing description: Implement GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. External DNS Controller Installed ## Step-01: Introduction 1. Requests will be routed in Load Balancer based on DNS Names 2. `app1-ingress.kalyanreddydaida.com` will send traffic to `App1 Pods` 3. `app2-ingress.kalyanreddydaida.com` will send traffic to `App2 Pods` 4. `default-ingress.kalyanreddydaida.com` will send traffic to `App3 Pods` ## Step-02: Review kube-manifests 1. 01-Nginx-App1-Deployment-and-NodePortService.yaml 2. 02-Nginx-App2-Deployment-and-NodePortService.yaml 3. 03-Nginx-App3-Deployment-and-NodePortService.yaml 4. 
NO CHANGES TO ABOVE 3 files - Standard Deployment and NodePort Service we are using from previous Context Path based Routing Demo ## Step-03: 04-Ingress-NameBasedVHost-Routing.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-namebasedvhost-routing annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # Google Managed SSL Certificates networking.gke.io/managed-certificates: managed-cert-for-ingress # SSL Redirect HTTP to HTTPS networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: default-ingress.kalyanreddydaida.com spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - host: app1-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - host: app2-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ``` ## Step-04: 05-Managed-Certificate.yaml ```yaml apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert-for-ingress spec: domains: - default101-ingress.kalyanreddydaida.com - app101-ingress.kalyanreddydaida.com - app201-ingress.kalyanreddydaida.com ``` ## Step-05: 06-frontendconfig.yaml ```yaml apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: my-frontend-config spec: redirectToHttps: enabled: true #responseCodeName: RESPONSE_CODE ``` ## Step-06: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Services kubectl get ingress # Verify external-dns Controller logs kubectl -n external-dns-ns 
logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') [or] kubectl -n external-dns-ns get pods kubectl -n external-dns-ns logs -f <EXTERNAL-DNS-POD-NAME> # Verify Cloud DNS 1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com 2. Verify Record sets; the DNS Name we added in the Ingress Service should be present # List FrontendConfigs kubectl get frontendconfig # List Managed Certificates kubectl get managedcertificate # Describe Managed Certificates kubectl describe managedcertificate managed-cert-for-ingress Observation: 1. Wait for the Domain Status to change from "Provisioning" to "ACTIVE" 2. It might take a minimum of 60 minutes to provision Google Managed SSL Certificates ``` ## Step-07: Access Application ```t # Access Application http://app1-ingress.kalyanreddydaida.com/app1/index.html http://app2-ingress.kalyanreddydaida.com/app2/index.html http://default-ingress.kalyanreddydaida.com Observation: 1. All 3 URLs should work as expected. In your case, replace with YOUR_DOMAIN name for testing 2.
HTTP to HTTPS redirect should work ``` ## Step-08: Access Application - Negative Usecase Testing ```t # Access Application - App1 DNS Name http://app1-ingress.kalyanreddydaida.com/app2/index.html Observation: SHOULD FAIL - Pod App1 does not have the app2 context path (app2 folder) - 404 ERROR # Access Application - App2 DNS Name http://app2-ingress.kalyanreddydaida.com/app1/index.html Observation: SHOULD FAIL - Pod App2 does not have the app1 context path (app1 folder) - 404 ERROR # Access Application - App3 or Default DNS Name http://default-ingress.kalyanreddydaida.com/app1/index.html Observation: SHOULD FAIL - Pod App3 does not have the app1 context path (app1 folder) - 404 ERROR ``` ## Step-09: Clean-Up - DON'T DELETE, WE ARE GOING TO USE THESE KUBERNETES RESOURCES IN THE NEXT DEMO RELATED TO SSL-POLICY ## References - [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features) ================================================ FILE: 45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app1-nginx-deployment labels: app: app1-nginx spec: replicas: 1 selector: matchLabels: app: app1-nginx template: metadata: labels: app: app1-nginx spec: containers: - name: app1-nginx image: stacksimplify/kube-nginxapp1:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app1/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app1-nginx-nodeport-service labels: app: app1-nginx annotations: spec: type: NodePort selector: app: app1-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml
================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app2-nginx-deployment labels: app: app2-nginx spec: replicas: 1 selector: matchLabels: app: app2-nginx template: metadata: labels: app: app2-nginx spec: containers: - name: app2-nginx image: stacksimplify/kube-nginxapp2:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app2/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app2-nginx-nodeport-service labels: app: app2-nginx annotations: spec: type: NodePort selector: app: app2-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service labels: app: app3-nginx annotations: spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/04-Ingress-NameBasedVHost-Routing.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-namebasedvhost-routing 
annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # Google Managed SSL Certificates networking.gke.io/managed-certificates: managed-cert-for-ingress # SSL Redirect HTTP to HTTPS networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: default-ingress.kalyanreddydaida.com spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - host: app1-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - host: app2-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ================================================ FILE: 45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/05-Managed-Certificate.yaml ================================================ apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert-for-ingress spec: domains: - default-ingress.kalyanreddydaida.com - app1-ingress.kalyanreddydaida.com - app2-ingress.kalyanreddydaida.com ================================================ FILE: 45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/06-frontendconfig.yaml ================================================ apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: my-frontend-config spec: redirectToHttps: enabled: true #responseCodeName: RESPONSE_CODE ================================================ FILE: 46-GKE-Ingress-SSL-Policy/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Ingress SSL Policy description: Implement GCP Google Kubernetes Engine GKE Ingress SSL Policy --- ## Step-00:
Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials CLUSTER-NAME --region REGION --project PROJECT # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. External DNS Controller Installed ## Step-01: Introduction - Implement SSL Policies in GCP and use them for the Ingress Service ## Step-02: Create an SSL policy with a Google-managed profile - [Create SSL Policies](https://cloud.google.com/load-balancing/docs/use-ssl-policies#gcloud) ```t # List Available Features gcloud compute ssl-policies list-available-features # List SSL Policies gcloud compute ssl-policies list # Create an SSL policy with a Google-managed profile gcloud compute ssl-policies create SSL_POLICY_NAME \ --profile COMPATIBLE | MODERN | RESTRICTED \ --min-tls-version 1.0 | 1.1 | 1.2 # Replace Values gcloud compute ssl-policies create gke-ingress-ssl-policy --profile MODERN --min-tls-version 1.0 # List SSL Policies gcloud compute ssl-policies list # Verify using Google Cloud Console Go to Network Security -> SSL Policies -> gke-ingress-ssl-policy ``` ## Step-03: Review kube-manifests 1. 01-Nginx-App1-Deployment-and-NodePortService.yaml 2. 02-Nginx-App2-Deployment-and-NodePortService.yaml 3. 03-Nginx-App3-Deployment-and-NodePortService.yaml 4. 04-Ingress-NameBasedVHost-Routing.yaml 5. 05-Managed-Certificate.yaml 6.
NO CHANGES TO ABOVE 5 files - same as previous demo ## Step-04: FrontendConfig ```yaml apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: my-frontend-config spec: # HTTP to HTTPS Redirect redirectToHttps: enabled: true #responseCodeName: RESPONSE_CODE # SSL Policy sslPolicy: gke-ingress-ssl-policy ``` ## Step-05: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests ### Sample Output Kalyans-Mac-mini:44-GKE-Ingress-SSL-Policy kalyanreddy$ kubectl apply -f kube-manifests deployment.apps/app1-nginx-deployment unchanged service/app1-nginx-nodeport-service unchanged deployment.apps/app2-nginx-deployment unchanged service/app2-nginx-nodeport-service unchanged deployment.apps/app3-nginx-deployment unchanged service/app3-nginx-nodeport-service unchanged ingress.networking.k8s.io/ingress-namebasedvhost-routing unchanged managedcertificate.networking.gke.io/managed-cert-for-ingress unchanged frontendconfig.networking.gke.io/my-frontend-config configured ----> CONFIGURED Kalyans-Mac-mini:44-GKE-Ingress-SSL-Policy kalyanreddy$ # Verify Load Balancer Settings Go to Network Services -> Load Balancing -> Load Balancer -> Settings ``` ## Step-06: Don't Clean-Up - Don't clean up; we are going to use these resources in the next section. - To avoid the roughly 1-hour delay in provisioning managed certificates, we will re-use the same configs which are already created.
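Optionally, the SSL policy attached via the FrontendConfig can also be verified from the terminal. The commands below are a sketch: `gke-ingress-ssl-policy` and `app1-ingress.kalyanreddydaida.com` are the names used in this demo, so replace them with your own policy name and domain. Note the `openssl s_client` checks only demonstrate enforcement when the policy's minimum TLS version is higher than the version being offered. ```t # Describe the SSL policy and confirm its profile and minimum TLS version gcloud compute ssl-policies describe gke-ingress-ssl-policy # Offer only TLS 1.2 to the load balancer (handshake should succeed) openssl s_client -connect app1-ingress.kalyanreddydaida.com:443 -tls1_2 < /dev/null # Offer only TLS 1.0 (handshake succeeds with min-tls-version 1.0; it would be rejected if the policy required TLS 1.2) openssl s_client -connect app1-ingress.kalyanreddydaida.com:443 -tls1 < /dev/null ```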
## References - [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features) - [SSL Policy](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#ssl) ================================================ FILE: 46-GKE-Ingress-SSL-Policy/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app1-nginx-deployment labels: app: app1-nginx spec: replicas: 1 selector: matchLabels: app: app1-nginx template: metadata: labels: app: app1-nginx spec: containers: - name: app1-nginx image: stacksimplify/kube-nginxapp1:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app1/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app1-nginx-nodeport-service labels: app: app1-nginx annotations: spec: type: NodePort selector: app: app1-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 46-GKE-Ingress-SSL-Policy/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app2-nginx-deployment labels: app: app2-nginx spec: replicas: 1 selector: matchLabels: app: app2-nginx template: metadata: labels: app: app2-nginx spec: containers: - name: app2-nginx image: stacksimplify/kube-nginxapp2:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app2/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app2-nginx-nodeport-service labels: app: app2-nginx annotations: spec: type: NodePort selector: app: 
app2-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 46-GKE-Ingress-SSL-Policy/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service labels: app: app3-nginx annotations: spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 46-GKE-Ingress-SSL-Policy/kube-manifests/04-Ingress-NameBasedVHost-Routing.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-namebasedvhost-routing annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # Google Managed SSL Certificates networking.gke.io/managed-certificates: managed-cert-for-ingress # SSL Redirect HTTP to HTTPS networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: default-ingress.kalyanreddydaida.com spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - host: app1-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - host: 
app2-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ================================================ FILE: 46-GKE-Ingress-SSL-Policy/kube-manifests/05-Managed-Certificate.yaml ================================================ apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert-for-ingress spec: domains: - default-ingress.kalyanreddydaida.com - app1-ingress.kalyanreddydaida.com - app2-ingress.kalyanreddydaida.com ================================================ FILE: 46-GKE-Ingress-SSL-Policy/kube-manifests/06-frontendconfig.yaml ================================================ apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: my-frontend-config spec: # HTTP to HTTPS Redirect redirectToHttps: enabled: true #responseCodeName: RESPONSE_CODE # SSL Policy sslPolicy: gke-ingress-ssl-policy ================================================ FILE: 47-GKE-Ingress-with-Identity-Aware-Proxy/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy description: Implement GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` 3. External DNS Controller should be installed and ready to use ## Step-01: Introduction 1. Configuring the OAuth consent screen 2. Creating OAuth credentials 3. Setting up IAP access 4. Creating a Kubernetes Secret with OAuth Client ID Credentials 5. 
Adding an iap block to the BackendConfig ## Step-02: Create basic google gmail users (if not present) - I have created below two users for this IAP Demo - gcpuser901@gmail.com - gcpuser902@gmail.com ## Step-03: Enabling IAP for GKE - [Enabling IAP for GKE](https://cloud.google.com/iap/docs/enabling-kubernetes-howto) - We will follow steps from above documentation link to create below 2 items 1. [Configuring the OAuth consent screen](https://cloud.google.com/iap/docs/enabling-kubernetes-howto#oauth-configure) 2. [Creating OAuth credentials](https://cloud.google.com/iap/docs/enabling-kubernetes-howto#oauth-credentials) ```t # Make a note of Client ID and Client Secret Client ID: 1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com Client Secret: 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 # Template https://iap.googleapis.com/v1/oauth/clientIds/CLIENT_ID:handleRedirect # Replace CLIENT_ID (Update URL in OAuth 2.0 Client IDs -> gke-ingress-iap-demo-oauth-creds) https://iap.googleapis.com/v1/oauth/clientIds/1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com:handleRedirect ``` ## Step-04: Creating a Kubernetes Secret ```t # Make a note of Client ID and Client Secret Client ID: 1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com Client Secret: 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 # List Kubernetes Secrets (Default Namespace) kubectl get secrets # Create Kubernetes Secret kubectl create secret generic my-secret --from-literal=client_id=client_id_key \ --from-literal=client_secret=client_secret_key # Replace client_id_key, client_secret_key kubectl create secret generic my-secret --from-literal=client_id=1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com \ --from-literal=client_secret=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 # List Kubernetes Secrets (Default Namespace) kubectl get secrets ``` ## Step-05: Adding an iap block to the BackendConfig - **File Name:** 07-backendconfig.yaml ```yaml 
apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: iap: enabled: true oauthclientCredentials: secretName: my-secret ``` ## Step-06: Review Kubernetes Manifests - All 3 NodePort Services will have the annotation `cloud.google.com/backend-config` added - 01-Nginx-App1-Deployment-and-NodePortService.yaml - 02-Nginx-App2-Deployment-and-NodePortService.yaml - 03-Nginx-App3-Deployment-and-NodePortService.yaml ```yaml apiVersion: v1 kind: Service metadata: name: app1-nginx-nodeport-service labels: app: app1-nginx annotations: cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort selector: app: app1-nginx ports: - port: 80 targetPort: 80 ``` ## Step-07: Review Kubernetes Manifests - No changes to the below YAML files from the previous section - 04-Ingress-NameBasedVHost-Routing.yaml - 05-Managed-Certificate.yaml - 06-frontendconfig.yaml ## Step-08: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests Observation: 1. All other configs were already created as part of the previous demo; only the backendconfig change will be applied now. # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Services kubectl get ingress # List Frontend Configs kubectl get frontendconfig # List Backend Configs kubectl get backendconfig ``` ## Step-09: Setting up IAP access - [Setting up IAP access](https://cloud.google.com/iap/docs/enabling-kubernetes-howto#iap-access) - Add User `gcpuser901@gmail.com` as Principal. ## Step-10: Access Application ```t # Access Application http://app1-ingress.kalyanreddydaida.com/app1/index.html http://app2-ingress.kalyanreddydaida.com/app2/index.html http://default-ingress.kalyanreddydaida.com Username: gcpuser901@gmail.com (In your case it might be a different user you added as part of Step-09) Password: XXXXXXXXXX Observation: 1. All 3 URLs will redirect to Google Authentication. Provide credentials to log in 2.
All 3 URLS should work as expected. In your case, replace YOUR_DOMAIN name for testing 3. HTTP to HTTPS redirect should work ``` ## Step-11: Negative Usecase: Access using User which is not added in Principal ```t # Access Application http://app1-ingress.kalyanreddydaida.com/app1/index.html http://app2-ingress.kalyanreddydaida.com/app2/index.html http://default-ingress.kalyanreddydaida.com Username: gcpuser902@gmail.com (user which is not added in principal as part of Step-09) Password: XXXXXXXXXX Observation: 1. It should fail, Application should not be accessible. ``` ## Step-12: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests # Delete Kubernetes Secret kubectl delete secret my-secret # Delete OAuth Credentials Go to API & Services -> Credentials -> OAuth 2.0 Client IDs -> gke-ingress-iap-demo-oauth-creds -> DELETE ``` ## References - [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features) - [Enabling IAP for GKE](https://cloud.google.com/iap/docs/enabling-kubernetes-howto) ================================================ FILE: 47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app1-nginx-deployment labels: app: app1-nginx spec: replicas: 1 selector: matchLabels: app: app1-nginx template: metadata: labels: app: app1-nginx spec: containers: - name: app1-nginx image: stacksimplify/kube-nginxapp1:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app1/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app1-nginx-nodeport-service labels: app: app1-nginx annotations: cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort 
selector: app: app1-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app2-nginx-deployment labels: app: app2-nginx spec: replicas: 1 selector: matchLabels: app: app2-nginx template: metadata: labels: app: app2-nginx spec: containers: - name: app2-nginx image: stacksimplify/kube-nginxapp2:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app2/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app2-nginx-nodeport-service labels: app: app2-nginx annotations: cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort selector: app: app2-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service labels: app: app3-nginx annotations: cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort 
selector: app: app3-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/04-Ingress-NameBasedVHost-Routing.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-namebasedvhost-routing annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # Google Managed SSL Certificates networking.gke.io/managed-certificates: managed-cert-for-ingress # SSL Redirect HTTP to HTTPS networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: default-ingress.kalyanreddydaida.com spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - host: app1-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - host: app2-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ================================================ FILE: 47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/05-Managed-Certificate.yaml ================================================ apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert-for-ingress spec: domains: - default-ingress.kalyanreddydaida.com - app1-ingress.kalyanreddydaida.com - app2-ingress.kalyanreddydaida.com ================================================ FILE: 47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/06-frontendconfig.yaml ================================================ apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: my-frontend-config spec: redirectToHttps: enabled: true #responseCodeName: 
RESPONSE_CODE ================================================ FILE: 47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/07-backendconfig.yaml ================================================ apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: iap: enabled: true oauthclientCredentials: secretName: my-secret # sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged # and 1.0 means 100% of packets are logged. This field is only relevant if enable is set # to true. sampleRate is an optional field, but if it's configured then enable: true must # also be set or else it is interpreted as enable: false. ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Ingress Self Signed SSL Certificates description: Implement GCP Google Kubernetes Engine GKE Ingress Self Signed SSL Certificates --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials CLUSTER-NAME --region REGION --project PROJECT # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` 3. ExternalDNS Controller should be installed and ready to use ```t # List Namespaces (external-dns-ns namespace should be present) kubectl get ns # List External DNS Pods kubectl -n external-dns-ns get pods ``` ## Step-01: Introduction 1. Implement Self Signed SSL Certificates with GKE Ingress Service 2. Create SSL Certificates using OpenSSL. 3. Create Kubernetes Secret with SSL Certificate and Private Key 4.
Reference these Kubernetes Secrets in Ingress Service **Ingress spec.tls** ## Step-02: App1 - Create Self-Signed SSL Certificates and Kubernetes Secrets ```t # Change Directory cd SSL-SelfSigned-Certs # Create your app1 key: openssl genrsa -out app1-ingress.key 2048 # Create your app1 certificate signing request: openssl req -new -key app1-ingress.key -out app1-ingress.csr -subj "/CN=app1.kalyanreddydaida.com" # Create your app1 certificate: openssl x509 -req -days 7300 -in app1-ingress.csr -signkey app1-ingress.key -out app1-ingress.crt # Create a Secret that holds your app1 certificate and key: kubectl create secret tls app1-secret --cert app1-ingress.crt --key app1-ingress.key # List Secrets kubectl get secrets ``` ## Step-03: App2 - Create Self-Signed SSL Certificates and Kubernetes Secrets ```t # Change Directory cd SSL-SelfSigned-Certs # Create your app2 key: openssl genrsa -out app2-ingress.key 2048 # Create your app2 certificate signing request: openssl req -new -key app2-ingress.key -out app2-ingress.csr -subj "/CN=app2.kalyanreddydaida.com" # Create your app2 certificate: openssl x509 -req -days 7300 -in app2-ingress.csr -signkey app2-ingress.key -out app2-ingress.crt # Create a Secret that holds your app2 certificate and key: kubectl create secret tls app2-secret --cert app2-ingress.crt --key app2-ingress.key # List Secrets kubectl get secrets ``` ## Step-03: App3 - Create Self-Signed SSL Certificates and Kubernetes Secrets ```t # Change Directory cd SSL-SelfSigned-Certs # Create your app3 key: openssl genrsa -out app3-ingress.key 2048 # Create your app3 certificate signing request: openssl req -new -key app3-ingress.key -out app3-ingress.csr -subj "/CN=app3-default.kalyanreddydaida.com" # Create your app3 certificate: openssl x509 -req -days 7300 -in app3-ingress.csr -signkey app3-ingress.key -out app3-ingress.crt # Create a Secret that holds your app3 certificate and key: kubectl create secret tls app3-secret --cert app3-ingress.crt --key 
app3-ingress.key # List Secrets kubectl get secrets ``` ## Step-04: No changes to following kube-manifests from previous Ingress Name Based Virtual Host Routing Demo 1. 01-Nginx-App1-Deployment-and-NodePortService.yaml 2. 02-Nginx-App2-Deployment-and-NodePortService.yaml 3. 03-Nginx-App3-Deployment-and-NodePortService.yaml 4. 05-frontendconfig.yaml ## Step-05: Review 04-ingress-self-signed-ssl.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-selfsigned-ssl annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # SSL Redirect HTTP to HTTPS networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: app3-default.kalyanreddydaida.com spec: # SSL Certs - Associate using Kubernetes Secrets tls: - secretName: app1-secret - secretName: app2-secret - secretName: app3-secret defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - host: app1.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - host: app2.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ``` ## Step-06: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Services kubectl get ingress # Describe Ingress Service kubectl describe ingress ingress-selfsigned-ssl # Verify external-dns Controller logs kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') [or] kubectl -n external-dns-ns get pods kubectl -n external-dns-ns logs -f # Verify Cloud DNS 1. 
Go to Network Services -> Cloud DNS -> kalyanreddydaida-com 2. Verify Record sets, DNS Name we added in Ingress Service should be present # List FrontendConfigs kubectl get frontendconfig # Verify SSL Certificates Go to Load Balancers 1. Load Balancers View -> In Frontends 2. Load Balancers Components View -> Certificates Tab ``` ## Step-07: Access Application ```t # Access Application http://app1.kalyanreddydaida.com/app1/index.html http://app2.kalyanreddydaida.com/app2/index.html http://app3-default.kalyanreddydaida.com Observation: 1. All 3 URLS should work as expected. In your case, replace YOUR_DOMAIN name for testing 2. HTTP to HTTPS redirect should work 3. You will get a warning "The certificate is not trusted because it is self-signed.". Click on "Accept the risk and continue" ``` ## Step-08: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests # List Kubernetes Secrets kubectl get secrets # Delete Kubernetes Secrets kubectl delete secret app1-secret kubectl delete secret app2-secret kubectl delete secret app3-secret ``` ## References - [User Managed Certificates](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl#user-managed-certs) ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app1-ingress.crt ================================================ -----BEGIN CERTIFICATE----- MIICzzCCAbcCFDQiqvY0cwNLP1ljifVfveOZo7G4MA0GCSqGSIb3DQEBCwUAMCQx IjAgBgNVBAMMGWFwcDEua2FseWFucmVkZHlkYWlkYS5jb20wHhcNMjIxMTI1MDIx MDUyWhcNNDIxMTIwMDIxMDUyWjAkMSIwIAYDVQQDDBlhcHAxLmthbHlhbnJlZGR5 ZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5Nt/2cBg HSkMXNo04h9tN8f8ioPkZkl5rFwNmgW+tGei4wa2QFRt0xCeOyd7+0GWmyH6602M i3WabHTruHKWsCAikx20KaOEjYF+cmDMpWQdrWADoAIIfov+BO9VTmFJUX9JUDiR f6CHxtCZIifL4VM0InSpMIy4OJGQgzOVrlWwLYVcYla529VUGU5qBJFAliKve3N+ SBYoNI5uX0rERm4hqCUHKrQnsIfA0OnNccPdAoi+KmC/oipfUOpL9URholj7spAT JczcTGw+s7gehCDXm6YU7cBHtD2hLx106otEzJGIwys4JtmuKXtDw+w2eOGRUIrT 
Q7YLX6N5LJ8IDwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQABileCf1mnee2r0cLD 3bHaYo68JZkl9BS6dJN6DuOSD1Sha/NArgsuQa6uX8ApnUhTt5DucRyp7o5pdhCi rNaJwR9zYQmjxdH+RGtb5sPEAo7D47kQqp4wlJtL5AUfCI1nGgpg5cJCEqTVlbmP PEAJYlaWi8LNe4h+qukECcAA3Nsgvvm3Ls1qmKIEKJr05ppCq7EbYCrXJrN75Pl1 31w8Q0tr80qgNlhz65EyvLrIe6RK72qyOe9+oRCp9wRIoCvs47vUuNMfzZRxZXGn dPSLWkQt4LrQ/5RZr0lyUWtzr/l7GWu9GYljWe6toxTQHSqSkV/WAch8P4g0zvRJ NdHO -----END CERTIFICATE----- ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app1-ingress.csr ================================================ -----BEGIN CERTIFICATE REQUEST----- MIICaTCCAVECAQAwJDEiMCAGA1UEAwwZYXBwMS5rYWx5YW5yZWRkeWRhaWRhLmNv bTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOTbf9nAYB0pDFzaNOIf bTfH/IqD5GZJeaxcDZoFvrRnouMGtkBUbdMQnjsne/tBlpsh+utNjIt1mmx067hy lrAgIpMdtCmjhI2BfnJgzKVkHa1gA6ACCH6L/gTvVU5hSVF/SVA4kX+gh8bQmSIn y+FTNCJ0qTCMuDiRkIMzla5VsC2FXGJWudvVVBlOagSRQJYir3tzfkgWKDSObl9K xEZuIaglByq0J7CHwNDpzXHD3QKIvipgv6IqX1DqS/VEYaJY+7KQEyXM3ExsPrO4 HoQg15umFO3AR7Q9oS8ddOqLRMyRiMMrOCbZril7Q8PsNnjhkVCK00O2C1+jeSyf CA8CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBexWuraAJF+txKTKQM6w2/Hnur 9BobBC/OYqdsracfmSAJ1eGEKI/ISUSZHVtZJptygxTEVXCTsN+ZukXlETM7AI4a Z8KatvHNrzhnpFV84ONpnCiUrQmik0IWwKcDJCzl4f7KUDITDC3hh5WGVY67OvuK mx03qu54ZFmFJkM2vwVn/ODbvdScYI5tDRjFIbyrwkxxW/1q1otkotOk7Z3hLdMN HWJh7IeXfw06q7+llX3Qg1OkpfyY682A0S2G2K6vrGFJUKFJ2CrPDzmht5G5kUz4 HANxZuSKeHcI7rlB3IVyjNa77oXOX0+ZQLYgCf/cA3Lu2zAkhEvtq9ui0Edl -----END CERTIFICATE REQUEST----- ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app1-ingress.key ================================================ -----BEGIN PRIVATE KEY----- MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDk23/ZwGAdKQxc 2jTiH203x/yKg+RmSXmsXA2aBb60Z6LjBrZAVG3TEJ47J3v7QZabIfrrTYyLdZps dOu4cpawICKTHbQpo4SNgX5yYMylZB2tYAOgAgh+i/4E71VOYUlRf0lQOJF/oIfG 0JkiJ8vhUzQidKkwjLg4kZCDM5WuVbAthVxiVrnb1VQZTmoEkUCWIq97c35IFig0 
jm5fSsRGbiGoJQcqtCewh8DQ6c1xw90CiL4qYL+iKl9Q6kv1RGGiWPuykBMlzNxM bD6zuB6EINebphTtwEe0PaEvHXTqi0TMkYjDKzgm2a4pe0PD7DZ44ZFQitNDtgtf o3ksnwgPAgMBAAECggEAE65cvFky6s8Q5RtO2PNi7R0htrfI+JLxB8WS1eAQmmsf Mu7s1XNtTm1rbiLjIqRtU0IE1h+BKq0ebp1PeDlChDr/Pi+bwsjxKUotmaCBeOe3 NaXAKg6CtH9NhRcf+vGa4ItVvrRert8bThm6UZmiiuog3aWytx4i6Zp7Fw1kne1O 4k70Xt/jxHK9WJEp02seREZHZCEnP9mHp7ngzjBOC8ow4F14ONvC4NpRl2y3rdC3 Wyw26Hrfhy1j/Ww6J04j2qvsplXly+acpOOWZuCFSX33zUFEZUtw+ncNFPxk/3G5 PT4Jn7zPbB7YtId5wjN4dB+rv8b8rQf1ShIioW4KAQKBgQD8ZqQjxpmQTZGGsGjr tWZaGV/StW/pdhyFHrRwmCl6LbskuYDmWYuyHpoiVmEicg8mF+alCqGK8b5Cd7yH b8fl0uImC4CQVcWE8KieEPFAt5wYLyAper77iSunx/7g7Kv7sq4q84nIrFwlnBRO njb1QmY/JaPQ00CLFPwEVWwswQKBgQDoHupdlwDP97Zd4lUfn25pg0b3MNKNsFPH ALS4jR7qtJPjj7FdZTxGfBI1kJhVfWq2vWl/kK033c2JMGE0dpMNH/ZpQ2EVWqOR tA34diFXE2lBAOojHIdH5x38fWZv18aLdvIif/CwqaBhVfqRJGFgHk+4qlLpjAwE GxPkmWTYzwKBgQDJPZkvgSBdQst+BVeSX668tbCGEu2oyehRZzrc7yVa6e1liZYx k0HjgazJJfAKg8B6UeIuwvwsCTT2T/t8TO6n2m0/gjo+WnTC2xLF/KIuRHbrfV96 UwjFCwhInRgmA+3YIA3n5wd7fZl2zywNxu3wvMFDJeKoFFdIzTFmzykRwQKBgQCY lPnqW4ClNGgkfssF5n9lzG2xv94oVWg8wDILvng8QEeWprYodouQqa4ul8YLLE4h oZDf0fKLbrnVHIBJREiVsBUCTNBcgSBUfs9QLBbubkwZ9sfyHKawlTQY7TWQ/337 30x7cS5+coKCeUokbo2z6TjuYsftzal4aXRCKLMp8QKBgQCWBLJmI9jt9tRQvyKY EtEQd/3qFtymDjXnLaFn/FclIDm2me0YNfp+kumOe2XapG/m5TOPoPIG+OKkEwCV QXeOxjlQnzwvvMuQDbTUyjeYk4AV/wYExQAymmP1d8AeqddJQ103FSvj1iNM8kN6 vE3/wIDHWWmQa5AsHyGXfyebxA== -----END PRIVATE KEY----- ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app2-ingress.crt ================================================ -----BEGIN CERTIFICATE----- MIICzzCCAbcCFBIM4d2+RH+OQFbNyPH7vO7dj4aAMA0GCSqGSIb3DQEBCwUAMCQx IjAgBgNVBAMMGWFwcDIua2FseWFucmVkZHlkYWlkYS5jb20wHhcNMjIxMTI1MDIx MjMxWhcNNDIxMTIwMDIxMjMxWjAkMSIwIAYDVQQDDBlhcHAyLmthbHlhbnJlZGR5 ZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuKahBefa kPNvnvaxLkxnShxg5sX4P9FYlL1yCYLgrdW0DQWqouZ4nac31r4/YXeTJgXUCyQq 
HYy+tXNCangblaGFZXT71FcyoLT9RYE7p0/AePqHvP4gywJ/CEdk23obQak2Cuc/ cHoo/tnUnArevmvGoNLgb6TkHWiHE85LB8LPi1ra9KABU7/xF9XpyJWfRtkG8A7G jdbvggHVYD7l9oJyQB0+AcR7ddTbOk8D6CFHnMJa65/HyplErFWnrrKHvkKKqW6c f88kbp9qKPddmniwNOHqIu1QUADgJq97Y7fH9E0IZneMFWmGFRaXYxyUn4WziXEH npnbFg73/8FoFwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQAAFOCpdOVoREKR6S6u F10Jp4DpQQDXsfgCVAxA58MNMGForwNhK1E28w0GBDm4K02nyOqqQxDWiFp8Am/T r+vzF1BBwNsiZ1r5naQTA5Jh2XgGrjOQOJhRZbEE4RwOxWsTvEyUJn2S0bYtGfES 5HjzZfq/0Gpxh3Z+oq8cINwzRzoirgf3Kk9SESvluxejnZMehVK5YIQp0IoM1Q5A t+ApJNyb107UxYLAfy8DSe5aMGtON8DYE+WLidL4CC1zRTABUjcBGsa09inGhgiF F5O8Eyc6LzA8EasmeJbWsUUYxUoLvq0OPXKq8Drjlt0SnttB4zpE8agtwzKKCodL tEoU -----END CERTIFICATE----- ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app2-ingress.csr ================================================ -----BEGIN CERTIFICATE REQUEST----- MIICaTCCAVECAQAwJDEiMCAGA1UEAwwZYXBwMi5rYWx5YW5yZWRkeWRhaWRhLmNv bTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALimoQXn2pDzb572sS5M Z0ocYObF+D/RWJS9cgmC4K3VtA0FqqLmeJ2nN9a+P2F3kyYF1AskKh2MvrVzQmp4 G5WhhWV0+9RXMqC0/UWBO6dPwHj6h7z+IMsCfwhHZNt6G0GpNgrnP3B6KP7Z1JwK 3r5rxqDS4G+k5B1ohxPOSwfCz4ta2vSgAVO/8RfV6ciVn0bZBvAOxo3W74IB1WA+ 5faCckAdPgHEe3XU2zpPA+ghR5zCWuufx8qZRKxVp66yh75CiqlunH/PJG6faij3 XZp4sDTh6iLtUFAA4Cave2O3x/RNCGZ3jBVphhUWl2MclJ+Fs4lxB56Z2xYO9//B aBcCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBL3moJBmkteEAExoskvJrKbmW6 aMyMFZmHUhPYqe8IkFG2/QRwN0C3r9lU8+UX0Qt+XqVx8hzi2FFsQYyZ/gdhZ1NP Oq60qH9Z95evTdIzN5FbkQiT1kgb1dGFs7WgcDLJM10dIeaq4M7MQrF3R99tEtbj EGiHQaXogqkIU5dcwoD9tZFB+7i7ymv6C19SSGHE/amIMFVp1hBfcKH7wxQ6wlZF Ll5WRTtdrM7E685VYKmH8ccF5rB+oyAH9be3kO2NWYo48QyoSnqk82UvmzcL0H/P +DUD7EfXQvlK02HfpmJxpWjT9wKYuA/AUC21L0w/gZWhXwAUnQtsbDGKxfKj -----END CERTIFICATE REQUEST----- ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app2-ingress.key ================================================ -----BEGIN PRIVATE KEY----- 
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC4pqEF59qQ82+e 9rEuTGdKHGDmxfg/0ViUvXIJguCt1bQNBaqi5nidpzfWvj9hd5MmBdQLJCodjL61 c0JqeBuVoYVldPvUVzKgtP1FgTunT8B4+oe8/iDLAn8IR2TbehtBqTYK5z9weij+ 2dScCt6+a8ag0uBvpOQdaIcTzksHws+LWtr0oAFTv/EX1enIlZ9G2QbwDsaN1u+C AdVgPuX2gnJAHT4BxHt11Ns6TwPoIUecwlrrn8fKmUSsVaeusoe+Qoqpbpx/zyRu n2oo912aeLA04eoi7VBQAOAmr3tjt8f0TQhmd4wVaYYVFpdjHJSfhbOJcQeemdsW Dvf/wWgXAgMBAAECggEAC7VJFYJHihRdegNjZa+jhv/4pvlbjdRc3QWMIw1A6NTZ l0/KK40YjcqKEFw80ZXO50TMVq6C2x/PAdtelTiraxf0SOQbibHDvIvtWUhh+3Bj oGgmTjYA505vtpssSnxaGRY9HoDeNWgRjGNMh15rFEDqNc1ZPMsESdcUZY2ZlVLJ twuH3icBtl6pN5opTt8GNu3BgHflSbuKFQba46OubKhaWaSj2/jSw73jZtaABqei eelmNnxRevDBBlvFr2WYzkFpG4hIS9gX7aBdDW3cjFsvaEwYK2oqI7BrLS0rmAwZ VX0Xsr5mt1Zz1SC+AETD/CmTNECCL6mJdlxLE0i72QKBgQDEoaE4e1VLVnWzQhc0 5V14QQ71XwV17bZQKTQM57jUGwIdj4+koXeNdh7bejTbZdfPC10i9V2nEa5UMccS QOI1B72fBX1zWNstW8wH3ZxnBpf4xRsCFvcGJ90vicMx2lXMLkuYBxJMG1/AeaF1 IAlmMv3LU0ITn/yaQRNCtXcB7wKBgQDwZvvldUOyPTaNqLA0QQ3i1CMvRvZqzL16 /u0FpR8XyAw8S1YnheYwSC4O+9WZoxzUetsH2v8C29VZUt1vgCea5HabEH87kHVh c7ZrJcAuiE/9a82ne+ikDDKrEAqkVERrnLQ0mE//pXGojjU16dLRtnQMMx7wiL/9 qG1P2kYEWQKBgQCHXvs+hnJ3VoPbsLGHYi1Sf//LX+rDgK9WSreh9tohdKKlNVPg NKW5B0xBL8Y6Echcq2cojSI3xg1tu4NhBrh1Z+ndFAuFIPRsKtmxxJlLuJdh1lk8 vBC+9Szq8H4o0TbmRi0W8i9fpCzstxA4MaEm8g4WMDC6kBd5HzoiYAoZkwKBgCqs /XaEVJolh7OqCG2eRsrHgd94p3HaGqDk9EqWP2jHWHSzov2tJWnYxmRejFKTxCBs FsnUNITbZYpPzYNnqqAygmOQkCWQxWWhVva6Yt1f0WNZac6bjnbgu3XmiR0W4HaC APN9PmZRhlW3uPZzJbuYug0YXhuxCvQKnC0awGcxAoGALGNotFeQOOJS1UKdLbKf BI7jMXnzNB+JqdMYZisCV8Ey2pc0NWe1sv3VB6nDKFA331EBkDNH/VpRluvOY8SQ hikUrY9YqIlYYXSie9LBgAXZ1LKB/1JknhqaQeCwWe4oVSVL8Jee9rcLZEXXfvch AzjalkzTL4lZQfOoLk6iDiw= -----END PRIVATE KEY----- ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app3-ingress.crt ================================================ -----BEGIN CERTIFICATE----- MIIC3zCCAccCFE5YVEQSuVzTlSNeeNV8BZuR6aWMMA0GCSqGSIb3DQEBCwUAMCwx 
KjAoBgNVBAMMIWFwcDMtZGVmYXVsdC5rYWx5YW5yZWRkeWRhaWRhLmNvbTAeFw0y MjExMjUwMjEzMDVaFw00MjExMjAwMjEzMDVaMCwxKjAoBgNVBAMMIWFwcDMtZGVm YXVsdC5rYWx5YW5yZWRkeWRhaWRhLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEP ADCCAQoCggEBAKI6FJgH3TJ5ejRd7H/AvY+7EN0Vft/BQDoEfcjEUwrc7VM5/wgM ExE5Uj1Z0aMvIAruEMq2Zxe+dDHmqircrLHzH5uPjni3iBQ7dxvOGZTcdIM6JIax rauZJ5XtyXWBDvWACag59LtmFNtLXQQjNJHOKHZpZgi3bG49t26Aw9F0EelUCNpX RhBMX8wzT+gz0B0RA+Lj5B2wwGm2z+GgcM8E0jScaTgBQfhQM/kHp8oSNifySO7S p0z7zLc4h6fvfZhjPw/g7PYCksSm6wS0DJxzloeaxfudc5GHejlk13EX9FCnTx8s lWwQBBbpjv8Ht/J4QY3WKXDFR8od2wcUjKcCAwEAATANBgkqhkiG9w0BAQsFAAOC AQEAJ4tl2RjaRciW5aemwS1cGkGwyEZOqrkBRRTxzBhKu/XYgMzzfFDRux/04QQR w214mPwTKhsO4laUQ0d0457AS+2dyFsqLT46lQynXqZilr8IrSYENdnnZV7qqw7h e5Js/EUw2sCjtnQiz5W3Ty/+TuDN6vhLDeU6e+68TEOjqyVEym6pISNJekw1IAL6 tO1nvb+Pj1Gq6tXbf8lXgr5ys6NU65sc6CpZQwD/FWWy0A4sLFjyHSproeNFxaln qBvj/5I4At0M6eJ6RtNGx9fem/VpOUWhjprsiYIDXBBbEOHmPKrc9u0I3VdfwREy Mmm+XAsfEVITpRvcDwoUHVsgKw== -----END CERTIFICATE----- ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app3-ingress.csr ================================================ -----BEGIN CERTIFICATE REQUEST----- MIICcTCCAVkCAQAwLDEqMCgGA1UEAwwhYXBwMy1kZWZhdWx0LmthbHlhbnJlZGR5 ZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAojoUmAfd Mnl6NF3sf8C9j7sQ3RV+38FAOgR9yMRTCtztUzn/CAwTETlSPVnRoy8gCu4QyrZn F750MeaqKtyssfMfm4+OeLeIFDt3G84ZlNx0gzokhrGtq5knle3JdYEO9YAJqDn0 u2YU20tdBCM0kc4odmlmCLdsbj23boDD0XQR6VQI2ldGEExfzDNP6DPQHRED4uPk HbDAabbP4aBwzwTSNJxpOAFB+FAz+QenyhI2J/JI7tKnTPvMtziHp+99mGM/D+Ds 9gKSxKbrBLQMnHOWh5rF+51zkYd6OWTXcRf0UKdPHyyVbBAEFumO/we38nhBjdYp cMVHyh3bBxSMpwIDAQABoAAwDQYJKoZIhvcNAQELBQADggEBAIYBvMVB+MMu3Imm 8T8yEcxc1zCGsuTRNLyAaBHwbGUeqdxOncfxnPWoLLxgic3sUWtPrOgAnSkE7d2P oIn9fkNojyfmHzgoH4WEjghSFVzqenq/ABqs/fcZBTIjHSXXSah+nuOjrc7W218/ 6RAszOj6+tyQOAxz4kDvK8W/Ykigk9+vlBSSnUGsTjmB4afCctJzo6k3YBiD9wFT ev9IRdRPH1b+WzBP/HxBfkHsTPg3YEEa9ldMySJ514tHlJRHk9URbDj+fOCD6QG1 
IY7/IfdIz040xiXaXVOh8bs8qBWqpBjChvxVeW3HxtGQE8koMppFspD0gw1KEe3g ddAOF+8= -----END CERTIFICATE REQUEST----- ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app3-ingress.key ================================================ -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCiOhSYB90yeXo0 Xex/wL2PuxDdFX7fwUA6BH3IxFMK3O1TOf8IDBMROVI9WdGjLyAK7hDKtmcXvnQx 5qoq3Kyx8x+bj454t4gUO3cbzhmU3HSDOiSGsa2rmSeV7cl1gQ71gAmoOfS7ZhTb S10EIzSRzih2aWYIt2xuPbdugMPRdBHpVAjaV0YQTF/MM0/oM9AdEQPi4+QdsMBp ts/hoHDPBNI0nGk4AUH4UDP5B6fKEjYn8kju0qdM+8y3OIen732YYz8P4Oz2ApLE pusEtAycc5aHmsX7nXORh3o5ZNdxF/RQp08fLJVsEAQW6Y7/B7fyeEGN1ilwxUfK HdsHFIynAgMBAAECggEAEqpVGUL6X+bbOTA/WFmgVevDnnRtMyiEj8hZgqKYHW1a /xLytYXSIc6zGCz/8mMnMCrBEtnW1cQLkXxFQwY99oGPNvJXBau0RAOtiiz2A4sz +q9TaY4C+fX2uIjx/4uYYYXYVptIfdFaf/rVWncEguwx+qHY5BLarnp6YwP8w9oE ejTPG56QZIp3nv62iRpt4WK/+sMFf+LNRneg+yKinZf/vg404NfhPwadcZ19s4NC jDNBtX9wC7vygUpeLGbF4Fb6oEOoW3o6i9GrX3mK1SAW138npE0uTTpd6qcjfKhd Ubhw/yXSg3bkhZOnlx96+K5h4IRWy++4Sr+gI+xlrQKBgQC9a/g/bnnD4EhCcSgr M7wKW/6ZykRo6PVCrusJf+vVO+lOEDpXU+pu/GX24Vn6EdU4Vailz6t3LnDY0LEq oGlhijGTvAh4mwVp5jLAywvEfHy9XGEKADQxFpfreCUJnU4Oluao2/uit2fjavsQ POjEfx88LgkeXBpo2TMXH91ETQKBgQDbPyKsF0o7pyJo2EG74/tqBw4xNmoASwtW CHFigj4dGmT0ojQalJp6ruvNJsnUUhnZ51fh3qBJQOIphDbAFhtyO9Pmhy9uJ6up b53dsm/3GPTLowkC57WPDrs8W4h8eRilKrICd7fPGXDFydpBWavbdZxByqsQzxYc daW7T+qewwKBgQCgBwJgXG4EnIuPjle4P+nB+rxaovYuh3kE0BADI45SxF2zNKSF OIDbKOLfsry4Nq6i/EMRaiPa+WIe2hiDAahl3kFKJVYmxhjJwc/o7uFPKzibJdtZ fpiZTBQmu4bW242hZ70QtWCetEHRcIUQz9R6hUcXKXFMs9Uf9TdjdukRFQKBgE6M uCdf0MC+iJ13nVVrwM+j53nKPQAN4unX7IeWkhprMnBTDMfZJd9+fAzsMLNZFtnz AJFz6YlVLbIiJFt9kCfFN44IMP4OSHpT+wNKwsKMtmee6cOYsHuok3x0btnpqOLE ATLRIZGZU8YJI6D2N5RQ9sK7kb5b81gO7mnFoBFxAoGAZBtT5LZpGV+uguhBbmd9 nXdOhzoU29zgpbPHwETSY1uk5DwVhJL7xucHrGBVimHKaP/t0hI2YnyofvHyybHA ML8OeTKn8NEwW5G3nnP9p6m9nCwP7bjwMXb+H96jl9MYeyyTwa9IJajtJKrSga5H lkNlw3MuKofm0pfTpqQa5go= -----END PRIVATE KEY----- 
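The `.key`/`.csr`/`.crt` triples dumped above were created with OpenSSL (Step-01 of the demo: "Create SSL Certificates using OpenSSL"). The exact flags are not recorded in the repo, so the following is a minimal sketch for app1 under assumed defaults (2048-bit RSA, CN-only subject); app2 and app3 follow the same pattern with their own hostnames:

```shell
# 1. Generate the private key (app1-ingress.key) - 2048-bit RSA assumed
openssl genrsa -out app1-ingress.key 2048

# 2. Create a certificate signing request with the app hostname as Common Name
openssl req -new -key app1-ingress.key -out app1-ingress.csr \
  -subj "/CN=app1.kalyanreddydaida.com"

# 3. Self-sign the CSR; the certs above are valid 2022-11-25 to 2042-11-20,
#    which corresponds to roughly -days 7300
openssl x509 -req -days 7300 -in app1-ingress.csr \
  -signkey app1-ingress.key -out app1-ingress.crt

# 4. (Requires a cluster) Load the pair into the Secret referenced by the
#    Ingress tls: block in 04-ingress-self-signed-ssl.yaml:
# kubectl create secret tls app1-secret --cert=app1-ingress.crt --key=app1-ingress.key
```

In the Pre-shared SSL demo (section 49), the same `.crt`/`.key` pair is registered with `gcloud compute ssl-certificates create` instead of being stored in a Kubernetes Secret.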
================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app1-nginx-deployment labels: app: app1-nginx spec: replicas: 1 selector: matchLabels: app: app1-nginx template: metadata: labels: app: app1-nginx spec: containers: - name: app1-nginx image: stacksimplify/kube-nginxapp1:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app1/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app1-nginx-nodeport-service spec: type: NodePort selector: app: app1-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app2-nginx-deployment labels: app: app2-nginx spec: replicas: 1 selector: matchLabels: app: app2-nginx template: metadata: labels: app: app2-nginx spec: containers: - name: app2-nginx image: stacksimplify/kube-nginxapp2:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app2/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app2-nginx-nodeport-service spec: type: NodePort selector: app: app2-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 
kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/kube-manifests/04-ingress-self-signed-ssl.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-selfsigned-ssl annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # SSL Redirect HTTP to HTTPS networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: app3-default.kalyanreddydaida.com spec: # SSL Certs - Associate using Kubernetes Secrets tls: - secretName: app1-secret - secretName: app2-secret - secretName: app3-secret defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - host: app1.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - host: app2.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ================================================ FILE: 48-GKE-Ingress-SelfSigned-SSL/kube-manifests/05-frontendconfig.yaml 
================================================ apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: my-frontend-config spec: redirectToHttps: enabled: true #responseCodeName: RESPONSE_CODE ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Ingress Pre-shared SSL Certificates description: Implement GCP Google Kubernetes Engine GKE Ingress Pre-shared SSL Certificates --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` 3. ExternalDNS Controller should be installed and ready to use ```t # List Namespaces (external-dns-ns namespace should be present) kubectl get ns # List External DNS Pods kubectl -n external-dns-ns get pods ``` ## Step-01: Introduction 1. Implement pre-shared SSL certificates with GKE Ingress Service 2. Create SSL Certificates using OpenSSL 3. Create pre-shared certificates in Google Cloud using `gcloud compute ssl-certificates create` 4.
Reference these pre-shared certificates in the Ingress Service using the annotation `ingress.gcp.kubernetes.io/pre-shared-cert: "app1-ingress,app2-ingress,app3-ingress"` ## Step-02: Creating pre-shared certificates in Google Cloud ```t # List SSL Certificates
gcloud compute ssl-certificates list # Change Directory cd SSL-SelfSigned-Certs Observation: We should find the certificates we created in the previous Self-Signed Certs demo # App1 - Create a certificate resource in your Google Cloud project: gcloud compute ssl-certificates create app1-ingress --certificate app1-ingress.crt --private-key app1-ingress.key # App2 - Create a certificate resource in your Google Cloud project: gcloud compute ssl-certificates create app2-ingress --certificate app2-ingress.crt --private-key app2-ingress.key # App3 - Create a certificate resource in your Google Cloud project: gcloud compute ssl-certificates create app3-ingress --certificate app3-ingress.crt --private-key app3-ingress.key # List SSL Certificates gcloud compute ssl-certificates list ``` ## Step-03: No changes to the following kube-manifests from previous Ingress Name Based Virtual Host Routing Demo 1. 01-Nginx-App1-Deployment-and-NodePortService.yaml 2. 02-Nginx-App2-Deployment-and-NodePortService.yaml 3. 03-Nginx-App3-Deployment-and-NodePortService.yaml 4.
05-frontendconfig.yaml ## Step-04: Review 04-ingress-preshared-ssl.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-preshared-ssl annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # SSL Redirect HTTP to HTTPS networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: app3-default.kalyanreddydaida.com # Pre-shared certificate resources ingress.gcp.kubernetes.io/pre-shared-cert: "app1-ingress,app2-ingress,app3-ingress" spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - host: app1.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - host: app2.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ``` ## Step-05: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Services kubectl get ingress # Describe Ingress Service kubectl describe ingress ingress-preshared-ssl # Verify external-dns Controller logs kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') [or] kubectl -n external-dns-ns get pods kubectl -n external-dns-ns logs -f <External-DNS-Pod-Name> # Verify Cloud DNS 1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com 2. Verify the record sets; the DNS name we added in the Ingress Service should be present # List FrontendConfigs kubectl get frontendconfig # Verify SSL Certificates Go to Load Balancers 1. Load Balancers View -> In FRONTENDS -> Certificate 2. Load Balancers Components View -> CERTIFICATES Tab 3.
Load Balancers Components View -> TARGET PROXIES -> HTTPS Proxy -> SSL Certificates ``` ## Step-06: Access Application ```t # Access Application http://app1.kalyanreddydaida.com/app1/index.html --> VIEW CERTIFICATE WHEN ACCESSING URL http://app2.kalyanreddydaida.com/app2/index.html --> VIEW CERTIFICATE WHEN ACCESSING URL http://app3-default.kalyanreddydaida.com --> VIEW CERTIFICATE WHEN ACCESSING URL Observation: 1. All 3 URLs should work as expected. In your case, replace with YOUR_DOMAIN for testing 2. HTTP to HTTPS redirect should work 3. You will get a warning "The certificate is not trusted because it is self-signed". Click on "Accept the risk and continue" ``` ## Step-07: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests ``` ## Step-08: Clean-Up SSL Certs from your Google Cloud Project ```t # List SSL Certificates gcloud compute ssl-certificates list # Delete SSL Certificates gcloud compute ssl-certificates delete app1-ingress gcloud compute ssl-certificates delete app2-ingress gcloud compute ssl-certificates delete app3-ingress # List SSL Certificates gcloud compute ssl-certificates list # Verify SSL Certificates In Load Balancing Section Go to Load Balancers 1.
Load Balancers Components View -> CERTIFICATEs Tab ``` ## References - [Ingress Pre-shared Certificates](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl#pre-shared-certs) ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app1-ingress.crt ================================================ -----BEGIN CERTIFICATE----- MIICzzCCAbcCFDQiqvY0cwNLP1ljifVfveOZo7G4MA0GCSqGSIb3DQEBCwUAMCQx IjAgBgNVBAMMGWFwcDEua2FseWFucmVkZHlkYWlkYS5jb20wHhcNMjIxMTI1MDIx MDUyWhcNNDIxMTIwMDIxMDUyWjAkMSIwIAYDVQQDDBlhcHAxLmthbHlhbnJlZGR5 ZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5Nt/2cBg HSkMXNo04h9tN8f8ioPkZkl5rFwNmgW+tGei4wa2QFRt0xCeOyd7+0GWmyH6602M i3WabHTruHKWsCAikx20KaOEjYF+cmDMpWQdrWADoAIIfov+BO9VTmFJUX9JUDiR f6CHxtCZIifL4VM0InSpMIy4OJGQgzOVrlWwLYVcYla529VUGU5qBJFAliKve3N+ SBYoNI5uX0rERm4hqCUHKrQnsIfA0OnNccPdAoi+KmC/oipfUOpL9URholj7spAT JczcTGw+s7gehCDXm6YU7cBHtD2hLx106otEzJGIwys4JtmuKXtDw+w2eOGRUIrT Q7YLX6N5LJ8IDwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQABileCf1mnee2r0cLD 3bHaYo68JZkl9BS6dJN6DuOSD1Sha/NArgsuQa6uX8ApnUhTt5DucRyp7o5pdhCi rNaJwR9zYQmjxdH+RGtb5sPEAo7D47kQqp4wlJtL5AUfCI1nGgpg5cJCEqTVlbmP PEAJYlaWi8LNe4h+qukECcAA3Nsgvvm3Ls1qmKIEKJr05ppCq7EbYCrXJrN75Pl1 31w8Q0tr80qgNlhz65EyvLrIe6RK72qyOe9+oRCp9wRIoCvs47vUuNMfzZRxZXGn dPSLWkQt4LrQ/5RZr0lyUWtzr/l7GWu9GYljWe6toxTQHSqSkV/WAch8P4g0zvRJ NdHO -----END CERTIFICATE----- ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app1-ingress.csr ================================================ -----BEGIN CERTIFICATE REQUEST----- MIICaTCCAVECAQAwJDEiMCAGA1UEAwwZYXBwMS5rYWx5YW5yZWRkeWRhaWRhLmNv bTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOTbf9nAYB0pDFzaNOIf bTfH/IqD5GZJeaxcDZoFvrRnouMGtkBUbdMQnjsne/tBlpsh+utNjIt1mmx067hy lrAgIpMdtCmjhI2BfnJgzKVkHa1gA6ACCH6L/gTvVU5hSVF/SVA4kX+gh8bQmSIn y+FTNCJ0qTCMuDiRkIMzla5VsC2FXGJWudvVVBlOagSRQJYir3tzfkgWKDSObl9K xEZuIaglByq0J7CHwNDpzXHD3QKIvipgv6IqX1DqS/VEYaJY+7KQEyXM3ExsPrO4 
HoQg15umFO3AR7Q9oS8ddOqLRMyRiMMrOCbZril7Q8PsNnjhkVCK00O2C1+jeSyf CA8CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBexWuraAJF+txKTKQM6w2/Hnur 9BobBC/OYqdsracfmSAJ1eGEKI/ISUSZHVtZJptygxTEVXCTsN+ZukXlETM7AI4a Z8KatvHNrzhnpFV84ONpnCiUrQmik0IWwKcDJCzl4f7KUDITDC3hh5WGVY67OvuK mx03qu54ZFmFJkM2vwVn/ODbvdScYI5tDRjFIbyrwkxxW/1q1otkotOk7Z3hLdMN HWJh7IeXfw06q7+llX3Qg1OkpfyY682A0S2G2K6vrGFJUKFJ2CrPDzmht5G5kUz4 HANxZuSKeHcI7rlB3IVyjNa77oXOX0+ZQLYgCf/cA3Lu2zAkhEvtq9ui0Edl -----END CERTIFICATE REQUEST----- ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app1-ingress.key ================================================ -----BEGIN PRIVATE KEY----- MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDk23/ZwGAdKQxc 2jTiH203x/yKg+RmSXmsXA2aBb60Z6LjBrZAVG3TEJ47J3v7QZabIfrrTYyLdZps dOu4cpawICKTHbQpo4SNgX5yYMylZB2tYAOgAgh+i/4E71VOYUlRf0lQOJF/oIfG 0JkiJ8vhUzQidKkwjLg4kZCDM5WuVbAthVxiVrnb1VQZTmoEkUCWIq97c35IFig0 jm5fSsRGbiGoJQcqtCewh8DQ6c1xw90CiL4qYL+iKl9Q6kv1RGGiWPuykBMlzNxM bD6zuB6EINebphTtwEe0PaEvHXTqi0TMkYjDKzgm2a4pe0PD7DZ44ZFQitNDtgtf o3ksnwgPAgMBAAECggEAE65cvFky6s8Q5RtO2PNi7R0htrfI+JLxB8WS1eAQmmsf Mu7s1XNtTm1rbiLjIqRtU0IE1h+BKq0ebp1PeDlChDr/Pi+bwsjxKUotmaCBeOe3 NaXAKg6CtH9NhRcf+vGa4ItVvrRert8bThm6UZmiiuog3aWytx4i6Zp7Fw1kne1O 4k70Xt/jxHK9WJEp02seREZHZCEnP9mHp7ngzjBOC8ow4F14ONvC4NpRl2y3rdC3 Wyw26Hrfhy1j/Ww6J04j2qvsplXly+acpOOWZuCFSX33zUFEZUtw+ncNFPxk/3G5 PT4Jn7zPbB7YtId5wjN4dB+rv8b8rQf1ShIioW4KAQKBgQD8ZqQjxpmQTZGGsGjr tWZaGV/StW/pdhyFHrRwmCl6LbskuYDmWYuyHpoiVmEicg8mF+alCqGK8b5Cd7yH b8fl0uImC4CQVcWE8KieEPFAt5wYLyAper77iSunx/7g7Kv7sq4q84nIrFwlnBRO njb1QmY/JaPQ00CLFPwEVWwswQKBgQDoHupdlwDP97Zd4lUfn25pg0b3MNKNsFPH ALS4jR7qtJPjj7FdZTxGfBI1kJhVfWq2vWl/kK033c2JMGE0dpMNH/ZpQ2EVWqOR tA34diFXE2lBAOojHIdH5x38fWZv18aLdvIif/CwqaBhVfqRJGFgHk+4qlLpjAwE GxPkmWTYzwKBgQDJPZkvgSBdQst+BVeSX668tbCGEu2oyehRZzrc7yVa6e1liZYx k0HjgazJJfAKg8B6UeIuwvwsCTT2T/t8TO6n2m0/gjo+WnTC2xLF/KIuRHbrfV96 UwjFCwhInRgmA+3YIA3n5wd7fZl2zywNxu3wvMFDJeKoFFdIzTFmzykRwQKBgQCY 
lPnqW4ClNGgkfssF5n9lzG2xv94oVWg8wDILvng8QEeWprYodouQqa4ul8YLLE4h oZDf0fKLbrnVHIBJREiVsBUCTNBcgSBUfs9QLBbubkwZ9sfyHKawlTQY7TWQ/337 30x7cS5+coKCeUokbo2z6TjuYsftzal4aXRCKLMp8QKBgQCWBLJmI9jt9tRQvyKY EtEQd/3qFtymDjXnLaFn/FclIDm2me0YNfp+kumOe2XapG/m5TOPoPIG+OKkEwCV QXeOxjlQnzwvvMuQDbTUyjeYk4AV/wYExQAymmP1d8AeqddJQ103FSvj1iNM8kN6 vE3/wIDHWWmQa5AsHyGXfyebxA== -----END PRIVATE KEY----- ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app2-ingress.crt ================================================ -----BEGIN CERTIFICATE----- MIICzzCCAbcCFBIM4d2+RH+OQFbNyPH7vO7dj4aAMA0GCSqGSIb3DQEBCwUAMCQx IjAgBgNVBAMMGWFwcDIua2FseWFucmVkZHlkYWlkYS5jb20wHhcNMjIxMTI1MDIx MjMxWhcNNDIxMTIwMDIxMjMxWjAkMSIwIAYDVQQDDBlhcHAyLmthbHlhbnJlZGR5 ZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuKahBefa kPNvnvaxLkxnShxg5sX4P9FYlL1yCYLgrdW0DQWqouZ4nac31r4/YXeTJgXUCyQq HYy+tXNCangblaGFZXT71FcyoLT9RYE7p0/AePqHvP4gywJ/CEdk23obQak2Cuc/ cHoo/tnUnArevmvGoNLgb6TkHWiHE85LB8LPi1ra9KABU7/xF9XpyJWfRtkG8A7G jdbvggHVYD7l9oJyQB0+AcR7ddTbOk8D6CFHnMJa65/HyplErFWnrrKHvkKKqW6c f88kbp9qKPddmniwNOHqIu1QUADgJq97Y7fH9E0IZneMFWmGFRaXYxyUn4WziXEH npnbFg73/8FoFwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQAAFOCpdOVoREKR6S6u F10Jp4DpQQDXsfgCVAxA58MNMGForwNhK1E28w0GBDm4K02nyOqqQxDWiFp8Am/T r+vzF1BBwNsiZ1r5naQTA5Jh2XgGrjOQOJhRZbEE4RwOxWsTvEyUJn2S0bYtGfES 5HjzZfq/0Gpxh3Z+oq8cINwzRzoirgf3Kk9SESvluxejnZMehVK5YIQp0IoM1Q5A t+ApJNyb107UxYLAfy8DSe5aMGtON8DYE+WLidL4CC1zRTABUjcBGsa09inGhgiF F5O8Eyc6LzA8EasmeJbWsUUYxUoLvq0OPXKq8Drjlt0SnttB4zpE8agtwzKKCodL tEoU -----END CERTIFICATE----- ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app2-ingress.csr ================================================ -----BEGIN CERTIFICATE REQUEST----- MIICaTCCAVECAQAwJDEiMCAGA1UEAwwZYXBwMi5rYWx5YW5yZWRkeWRhaWRhLmNv bTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALimoQXn2pDzb572sS5M Z0ocYObF+D/RWJS9cgmC4K3VtA0FqqLmeJ2nN9a+P2F3kyYF1AskKh2MvrVzQmp4 
G5WhhWV0+9RXMqC0/UWBO6dPwHj6h7z+IMsCfwhHZNt6G0GpNgrnP3B6KP7Z1JwK 3r5rxqDS4G+k5B1ohxPOSwfCz4ta2vSgAVO/8RfV6ciVn0bZBvAOxo3W74IB1WA+ 5faCckAdPgHEe3XU2zpPA+ghR5zCWuufx8qZRKxVp66yh75CiqlunH/PJG6faij3 XZp4sDTh6iLtUFAA4Cave2O3x/RNCGZ3jBVphhUWl2MclJ+Fs4lxB56Z2xYO9//B aBcCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBL3moJBmkteEAExoskvJrKbmW6 aMyMFZmHUhPYqe8IkFG2/QRwN0C3r9lU8+UX0Qt+XqVx8hzi2FFsQYyZ/gdhZ1NP Oq60qH9Z95evTdIzN5FbkQiT1kgb1dGFs7WgcDLJM10dIeaq4M7MQrF3R99tEtbj EGiHQaXogqkIU5dcwoD9tZFB+7i7ymv6C19SSGHE/amIMFVp1hBfcKH7wxQ6wlZF Ll5WRTtdrM7E685VYKmH8ccF5rB+oyAH9be3kO2NWYo48QyoSnqk82UvmzcL0H/P +DUD7EfXQvlK02HfpmJxpWjT9wKYuA/AUC21L0w/gZWhXwAUnQtsbDGKxfKj -----END CERTIFICATE REQUEST----- ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app2-ingress.key ================================================ -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC4pqEF59qQ82+e 9rEuTGdKHGDmxfg/0ViUvXIJguCt1bQNBaqi5nidpzfWvj9hd5MmBdQLJCodjL61 c0JqeBuVoYVldPvUVzKgtP1FgTunT8B4+oe8/iDLAn8IR2TbehtBqTYK5z9weij+ 2dScCt6+a8ag0uBvpOQdaIcTzksHws+LWtr0oAFTv/EX1enIlZ9G2QbwDsaN1u+C AdVgPuX2gnJAHT4BxHt11Ns6TwPoIUecwlrrn8fKmUSsVaeusoe+Qoqpbpx/zyRu n2oo912aeLA04eoi7VBQAOAmr3tjt8f0TQhmd4wVaYYVFpdjHJSfhbOJcQeemdsW Dvf/wWgXAgMBAAECggEAC7VJFYJHihRdegNjZa+jhv/4pvlbjdRc3QWMIw1A6NTZ l0/KK40YjcqKEFw80ZXO50TMVq6C2x/PAdtelTiraxf0SOQbibHDvIvtWUhh+3Bj oGgmTjYA505vtpssSnxaGRY9HoDeNWgRjGNMh15rFEDqNc1ZPMsESdcUZY2ZlVLJ twuH3icBtl6pN5opTt8GNu3BgHflSbuKFQba46OubKhaWaSj2/jSw73jZtaABqei eelmNnxRevDBBlvFr2WYzkFpG4hIS9gX7aBdDW3cjFsvaEwYK2oqI7BrLS0rmAwZ VX0Xsr5mt1Zz1SC+AETD/CmTNECCL6mJdlxLE0i72QKBgQDEoaE4e1VLVnWzQhc0 5V14QQ71XwV17bZQKTQM57jUGwIdj4+koXeNdh7bejTbZdfPC10i9V2nEa5UMccS QOI1B72fBX1zWNstW8wH3ZxnBpf4xRsCFvcGJ90vicMx2lXMLkuYBxJMG1/AeaF1 IAlmMv3LU0ITn/yaQRNCtXcB7wKBgQDwZvvldUOyPTaNqLA0QQ3i1CMvRvZqzL16 /u0FpR8XyAw8S1YnheYwSC4O+9WZoxzUetsH2v8C29VZUt1vgCea5HabEH87kHVh c7ZrJcAuiE/9a82ne+ikDDKrEAqkVERrnLQ0mE//pXGojjU16dLRtnQMMx7wiL/9 
qG1P2kYEWQKBgQCHXvs+hnJ3VoPbsLGHYi1Sf//LX+rDgK9WSreh9tohdKKlNVPg NKW5B0xBL8Y6Echcq2cojSI3xg1tu4NhBrh1Z+ndFAuFIPRsKtmxxJlLuJdh1lk8 vBC+9Szq8H4o0TbmRi0W8i9fpCzstxA4MaEm8g4WMDC6kBd5HzoiYAoZkwKBgCqs /XaEVJolh7OqCG2eRsrHgd94p3HaGqDk9EqWP2jHWHSzov2tJWnYxmRejFKTxCBs FsnUNITbZYpPzYNnqqAygmOQkCWQxWWhVva6Yt1f0WNZac6bjnbgu3XmiR0W4HaC APN9PmZRhlW3uPZzJbuYug0YXhuxCvQKnC0awGcxAoGALGNotFeQOOJS1UKdLbKf BI7jMXnzNB+JqdMYZisCV8Ey2pc0NWe1sv3VB6nDKFA331EBkDNH/VpRluvOY8SQ hikUrY9YqIlYYXSie9LBgAXZ1LKB/1JknhqaQeCwWe4oVSVL8Jee9rcLZEXXfvch AzjalkzTL4lZQfOoLk6iDiw= -----END PRIVATE KEY----- ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app3-ingress.crt ================================================ -----BEGIN CERTIFICATE----- MIIC3zCCAccCFE5YVEQSuVzTlSNeeNV8BZuR6aWMMA0GCSqGSIb3DQEBCwUAMCwx KjAoBgNVBAMMIWFwcDMtZGVmYXVsdC5rYWx5YW5yZWRkeWRhaWRhLmNvbTAeFw0y MjExMjUwMjEzMDVaFw00MjExMjAwMjEzMDVaMCwxKjAoBgNVBAMMIWFwcDMtZGVm YXVsdC5rYWx5YW5yZWRkeWRhaWRhLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEP ADCCAQoCggEBAKI6FJgH3TJ5ejRd7H/AvY+7EN0Vft/BQDoEfcjEUwrc7VM5/wgM ExE5Uj1Z0aMvIAruEMq2Zxe+dDHmqircrLHzH5uPjni3iBQ7dxvOGZTcdIM6JIax rauZJ5XtyXWBDvWACag59LtmFNtLXQQjNJHOKHZpZgi3bG49t26Aw9F0EelUCNpX RhBMX8wzT+gz0B0RA+Lj5B2wwGm2z+GgcM8E0jScaTgBQfhQM/kHp8oSNifySO7S p0z7zLc4h6fvfZhjPw/g7PYCksSm6wS0DJxzloeaxfudc5GHejlk13EX9FCnTx8s lWwQBBbpjv8Ht/J4QY3WKXDFR8od2wcUjKcCAwEAATANBgkqhkiG9w0BAQsFAAOC AQEAJ4tl2RjaRciW5aemwS1cGkGwyEZOqrkBRRTxzBhKu/XYgMzzfFDRux/04QQR w214mPwTKhsO4laUQ0d0457AS+2dyFsqLT46lQynXqZilr8IrSYENdnnZV7qqw7h e5Js/EUw2sCjtnQiz5W3Ty/+TuDN6vhLDeU6e+68TEOjqyVEym6pISNJekw1IAL6 tO1nvb+Pj1Gq6tXbf8lXgr5ys6NU65sc6CpZQwD/FWWy0A4sLFjyHSproeNFxaln qBvj/5I4At0M6eJ6RtNGx9fem/VpOUWhjprsiYIDXBBbEOHmPKrc9u0I3VdfwREy Mmm+XAsfEVITpRvcDwoUHVsgKw== -----END CERTIFICATE----- ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app3-ingress.csr ================================================ -----BEGIN CERTIFICATE 
REQUEST----- MIICcTCCAVkCAQAwLDEqMCgGA1UEAwwhYXBwMy1kZWZhdWx0LmthbHlhbnJlZGR5 ZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAojoUmAfd Mnl6NF3sf8C9j7sQ3RV+38FAOgR9yMRTCtztUzn/CAwTETlSPVnRoy8gCu4QyrZn F750MeaqKtyssfMfm4+OeLeIFDt3G84ZlNx0gzokhrGtq5knle3JdYEO9YAJqDn0 u2YU20tdBCM0kc4odmlmCLdsbj23boDD0XQR6VQI2ldGEExfzDNP6DPQHRED4uPk HbDAabbP4aBwzwTSNJxpOAFB+FAz+QenyhI2J/JI7tKnTPvMtziHp+99mGM/D+Ds 9gKSxKbrBLQMnHOWh5rF+51zkYd6OWTXcRf0UKdPHyyVbBAEFumO/we38nhBjdYp cMVHyh3bBxSMpwIDAQABoAAwDQYJKoZIhvcNAQELBQADggEBAIYBvMVB+MMu3Imm 8T8yEcxc1zCGsuTRNLyAaBHwbGUeqdxOncfxnPWoLLxgic3sUWtPrOgAnSkE7d2P oIn9fkNojyfmHzgoH4WEjghSFVzqenq/ABqs/fcZBTIjHSXXSah+nuOjrc7W218/ 6RAszOj6+tyQOAxz4kDvK8W/Ykigk9+vlBSSnUGsTjmB4afCctJzo6k3YBiD9wFT ev9IRdRPH1b+WzBP/HxBfkHsTPg3YEEa9ldMySJ514tHlJRHk9URbDj+fOCD6QG1 IY7/IfdIz040xiXaXVOh8bs8qBWqpBjChvxVeW3HxtGQE8koMppFspD0gw1KEe3g ddAOF+8= -----END CERTIFICATE REQUEST----- ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app3-ingress.key ================================================ -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCiOhSYB90yeXo0 Xex/wL2PuxDdFX7fwUA6BH3IxFMK3O1TOf8IDBMROVI9WdGjLyAK7hDKtmcXvnQx 5qoq3Kyx8x+bj454t4gUO3cbzhmU3HSDOiSGsa2rmSeV7cl1gQ71gAmoOfS7ZhTb S10EIzSRzih2aWYIt2xuPbdugMPRdBHpVAjaV0YQTF/MM0/oM9AdEQPi4+QdsMBp ts/hoHDPBNI0nGk4AUH4UDP5B6fKEjYn8kju0qdM+8y3OIen732YYz8P4Oz2ApLE pusEtAycc5aHmsX7nXORh3o5ZNdxF/RQp08fLJVsEAQW6Y7/B7fyeEGN1ilwxUfK HdsHFIynAgMBAAECggEAEqpVGUL6X+bbOTA/WFmgVevDnnRtMyiEj8hZgqKYHW1a /xLytYXSIc6zGCz/8mMnMCrBEtnW1cQLkXxFQwY99oGPNvJXBau0RAOtiiz2A4sz +q9TaY4C+fX2uIjx/4uYYYXYVptIfdFaf/rVWncEguwx+qHY5BLarnp6YwP8w9oE ejTPG56QZIp3nv62iRpt4WK/+sMFf+LNRneg+yKinZf/vg404NfhPwadcZ19s4NC jDNBtX9wC7vygUpeLGbF4Fb6oEOoW3o6i9GrX3mK1SAW138npE0uTTpd6qcjfKhd Ubhw/yXSg3bkhZOnlx96+K5h4IRWy++4Sr+gI+xlrQKBgQC9a/g/bnnD4EhCcSgr M7wKW/6ZykRo6PVCrusJf+vVO+lOEDpXU+pu/GX24Vn6EdU4Vailz6t3LnDY0LEq 
oGlhijGTvAh4mwVp5jLAywvEfHy9XGEKADQxFpfreCUJnU4Oluao2/uit2fjavsQ POjEfx88LgkeXBpo2TMXH91ETQKBgQDbPyKsF0o7pyJo2EG74/tqBw4xNmoASwtW CHFigj4dGmT0ojQalJp6ruvNJsnUUhnZ51fh3qBJQOIphDbAFhtyO9Pmhy9uJ6up b53dsm/3GPTLowkC57WPDrs8W4h8eRilKrICd7fPGXDFydpBWavbdZxByqsQzxYc daW7T+qewwKBgQCgBwJgXG4EnIuPjle4P+nB+rxaovYuh3kE0BADI45SxF2zNKSF OIDbKOLfsry4Nq6i/EMRaiPa+WIe2hiDAahl3kFKJVYmxhjJwc/o7uFPKzibJdtZ fpiZTBQmu4bW242hZ70QtWCetEHRcIUQz9R6hUcXKXFMs9Uf9TdjdukRFQKBgE6M uCdf0MC+iJ13nVVrwM+j53nKPQAN4unX7IeWkhprMnBTDMfZJd9+fAzsMLNZFtnz AJFz6YlVLbIiJFt9kCfFN44IMP4OSHpT+wNKwsKMtmee6cOYsHuok3x0btnpqOLE ATLRIZGZU8YJI6D2N5RQ9sK7kb5b81gO7mnFoBFxAoGAZBtT5LZpGV+uguhBbmd9 nXdOhzoU29zgpbPHwETSY1uk5DwVhJL7xucHrGBVimHKaP/t0hI2YnyofvHyybHA ML8OeTKn8NEwW5G3nnP9p6m9nCwP7bjwMXb+H96jl9MYeyyTwa9IJajtJKrSga5H lkNlw3MuKofm0pfTpqQa5go= -----END PRIVATE KEY----- ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app1-nginx-deployment labels: app: app1-nginx spec: replicas: 1 selector: matchLabels: app: app1-nginx template: metadata: labels: app: app1-nginx spec: containers: - name: app1-nginx image: stacksimplify/kube-nginxapp1:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app1/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app1-nginx-nodeport-service spec: type: NodePort selector: app: app1-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app2-nginx-deployment labels: 
app: app2-nginx spec: replicas: 1 selector: matchLabels: app: app2-nginx template: metadata: labels: app: app2-nginx spec: containers: - name: app2-nginx image: stacksimplify/kube-nginxapp2:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app2/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app2-nginx-nodeport-service spec: type: NodePort selector: app: app2-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 49-GKE-Ingress-Preshared-SSL/kube-manifests/04-ingress-preshared-ssl.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-preshared-ssl annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # SSL Redirect HTTP to HTTPS networking.gke.io/v1beta1.FrontendConfig: 
"my-frontend-config"
    # External DNS - For creating a Record Set in Google Cloud DNS
    external-dns.alpha.kubernetes.io/hostname: app3-default.kalyanreddydaida.com
    # Pre-shared certificate resources
    ingress.gcp.kubernetes.io/pre-shared-cert: "app1-ingress,app2-ingress,app3-ingress"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - host: app1.kalyanreddydaida.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
    - host: app2.kalyanreddydaida.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80

================================================
FILE: 49-GKE-Ingress-Preshared-SSL/kube-manifests/05-frontendconfig.yaml
================================================
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    #responseCodeName: RESPONSE_CODE

================================================
FILE: 50-GKE-Ingress-Cloud-CDN/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE Ingress and Cloud CDN
description: Implement GCP Google Kubernetes Engine GKE Ingress and Cloud CDN
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl (replace CLUSTER-NAME, REGION, PROJECT)
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```
3. ExternalDNS Controller should be installed and ready to use
```t
# List Namespaces (external-dns-ns namespace should be present)
kubectl get ns

# List External DNS Pods
kubectl -n external-dns-ns get pods
```

## Step-01: Introduction
- Implement the following features for the Ingress Service
1. BackendConfig for Ingress Service
2. Backend Service Timeout
3. Connection Draining
4. Ingress Service HTTP Access Logging
5. Enable Cloud CDN

## Step-02: 01-kubernetes-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cdn-demo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cdn-demo
  template:
    metadata:
      labels:
        app: cdn-demo
    spec:
      containers:
        - name: cdn-demo
          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0
          ports:
            - containerPort: 8080
```

## Step-03: 02-kubernetes-NodePort-service.yaml
- Update Backend Config with annotation **cloud.google.com/backend-config: '{"default": "my-backendconfig"}'**
```yaml
apiVersion: v1
kind: Service
metadata:
  name: cdn-demo-nodeport-service
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: cdn-demo
  ports:
    - port: 80
      targetPort: 8080
```

## Step-04: 03-ingress.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cdn-demo
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
    # External DNS - For creating a Record Set in Google Cloud DNS
    external-dns.alpha.kubernetes.io/hostname: ingress-cdn-demo.kalyanreddydaida.com
spec:
  defaultBackend:
    service:
      name: cdn-demo-nodeport-service
      port:
        number: 80
```

## Step-05: 04-backendconfig.yaml
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 42        # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout
  connectionDraining:   # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout
    drainingTimeoutSec: 62
  logging:              # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging
    enable: true
    sampleRate: 1.0
  cdn:
    enabled: true
    cachePolicy:
      includeHost: true
      includeProtocol: true
      includeQueryString: false
```

## Step-06: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress Service
kubectl get ingress

# List Backend Config
kubectl get backendconfig
kubectl describe backendconfig my-backendconfig
```

## Step-07: Verify Settings in Load Balancer
- Go to Network Services -> Load Balancing -> Click on Load Balancer
- Go to Backend -> Backend Services
- Verify the Settings
  - **Timeout:** 42 seconds
  - **Connection draining timeout:** 62 seconds
  - **Cloud CDN:** Enabled
  - **Logging: Enabled:** (sample rate: 1)

## Step-08: Verify Cloud CDN
- Go to Network Services -> Cloud CDN (the backend k8s1-c6634a10-default-cdn-demo-nodeport-service-80-553facae is created automatically when the Ingress is deployed)
- Verify Settings
  - DETAILS TAB
  - MONITORING TAB
  - CACHING TAB

## Step-09: Access Application and Verify Cache Age
```t
# Access Application
http://<EXTERNAL-IP> [or] http://<DNS-NAME>

# Access Application using DNS Name
http://ingress-cdn-demo.kalyanreddydaida.com
curl -v http://ingress-cdn-demo.kalyanreddydaida.com/?cache=true
curl -v http://ingress-cdn-demo.kalyanreddydaida.com
curl -v http://ingress-cdn-demo.kalyanreddydaida.com

## Important Note:
1. The output shows the response headers and body.
2. In the response headers, you can see that the content was cached. The Age header tells you how many seconds the content has been cached.

## Sample Output
Kalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$ curl -v http://ingress-cdn-demo.kalyanreddydaida.com
*   Trying 34.120.32.120:80...
* Connected to ingress-cdn-demo.kalyanreddydaida.com (34.120.32.120) port 80 (#0)
> GET / HTTP/1.1
> Host: ingress-cdn-demo.kalyanreddydaida.com
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Length: 76
< Via: 1.1 google
< Date: Thu, 23 Jun 2022 04:47:42 GMT
< Content-Type: text/plain; charset=utf-8
< Age: 1625
< Cache-Control: max-age=3600,public
<
Hello, world!
Version: 1.0.0
Hostname: cdn-demo-deployment-6f4c8f655d-htpsn
* Connection #0 to host ingress-cdn-demo.kalyanreddydaida.com left intact
Kalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$
```

## Step-10: Verify Cloud CDN Monitoring Tab
- Go to Network Services -> Cloud CDN -> MONITORING Tab
- Review Charts
  - CDN Bandwidth
  - CDN Hit Rate
  - CDN Fill Rate
  - CDN Egress Rate
  - Requests
  - Response Codes

## Step-11: Verify Ingress Service Logs in Cloud Logging
- Go to Cloud Logging -> Logs Explorer -> Log Fields -> Select
  - Resource Type: Cloud HTTP Load Balancer
  - Severity: Info
  - Project ID: kdaida123
- Review the logs
- Access the application and review the logs in parallel
```t
# Access Application
curl -v http://ingress-cdn-demo.kalyanreddydaida.com
```

## Step-12: Verify Ingress Service Logs in Cloud Logging using Other Approach
- Go to Cloud Logging -> Logs Dashboard
- Go to Chart -> HTTP/S Load Balancer Logs By Severity -> Click on **VIEW LOGS**

## References
- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)
- [Caching overview](https://cloud.google.com/cdn/docs/caching#cacheability)

================================================
FILE: 50-GKE-Ingress-Cloud-CDN/kube-manifests/01-kubernetes-deployment.yaml
================================================
apiVersion: apps/v1 kind:
Deployment metadata: name: cdn-demo-deployment spec: replicas: 2 selector: matchLabels: app: cdn-demo template: metadata: labels: app: cdn-demo spec: containers: - name: cdn-demo image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0 ports: - containerPort: 8080 ================================================ FILE: 50-GKE-Ingress-Cloud-CDN/kube-manifests/02-kubernetes-NodePort-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: cdn-demo-nodeport-service annotations: cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort selector: app: cdn-demo ports: - port: 80 targetPort: 8080 ================================================ FILE: 50-GKE-Ingress-Cloud-CDN/kube-manifests/03-ingress.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-cdn-demo annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: ingress-cdn-demo.kalyanreddydaida.com spec: defaultBackend: service: name: cdn-demo-nodeport-service port: number: 80 ================================================ FILE: 50-GKE-Ingress-Cloud-CDN/kube-manifests/04-backendconfig.yaml ================================================ apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: 
true sampleRate: 1.0 cdn: enabled: true cachePolicy: includeHost: true includeProtocol: true includeQueryString: false # sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged # and 1.0 means 100% of packets are logged. This field is only relevant if enable is set # to true. sampleRate is an optional field, but if it's configured then enable: true must # also be set or else it is interpreted as enable: false. ================================================ FILE: 51-GKE-Ingress-ClientIP-Affinity/01-kube-manifests-with-clientip-affinity/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: cdn-demo-deployment spec: replicas: 4 selector: matchLabels: app: cdn-demo template: metadata: labels: app: cdn-demo spec: containers: - name: cdn-demo image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0 ports: - containerPort: 8080 ================================================ FILE: 51-GKE-Ingress-ClientIP-Affinity/01-kube-manifests-with-clientip-affinity/02-kubernetes-NodePort-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: cdn-demo-nodeport-service annotations: #cloud.google.com/backend-config: '{"ports": {"80":"my-backendconfig"}}' cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort selector: app: cdn-demo ports: - port: 80 targetPort: 8080 ================================================ FILE: 51-GKE-Ingress-ClientIP-Affinity/01-kube-manifests-with-clientip-affinity/03-ingress.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-cdn-demo annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # External DNS - For creating a Record Set in Google Cloud Cloud DNS 
external-dns.alpha.kubernetes.io/hostname: ingress-with-clientip-affinity.kalyanreddydaida.com spec: defaultBackend: service: name: cdn-demo-nodeport-service port: number: 80 ================================================ FILE: 51-GKE-Ingress-ClientIP-Affinity/01-kube-manifests-with-clientip-affinity/04-backendconfig.yaml ================================================ apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 sessionAffinity: affinityType: "CLIENT_IP" # Disable at Step-07 #affinityType: "" # Enable at Step-07 ================================================ FILE: 51-GKE-Ingress-ClientIP-Affinity/02-kube-manifests-without-clientip-affinity/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: cdn-demo-deployment2 spec: replicas: 4 selector: matchLabels: app: cdn-demo2 template: metadata: labels: app: cdn-demo2 spec: containers: - name: cdn-demo2 image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0 ports: - containerPort: 8080 ================================================ FILE: 51-GKE-Ingress-ClientIP-Affinity/02-kube-manifests-without-clientip-affinity/02-kubernetes-NodePort-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: cdn-demo-nodeport-service2 annotations: #cloud.google.com/backend-config: '{"ports": {"80":"my-backendconfig2"}}' cloud.google.com/backend-config: '{"default": "my-backendconfig2"}' spec: type: NodePort selector: 
app: cdn-demo2 ports: - port: 80 targetPort: 8080 ================================================ FILE: 51-GKE-Ingress-ClientIP-Affinity/02-kube-manifests-without-clientip-affinity/03-ingress.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-cdn-demo2 annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip2" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: ingress-without-clientip-affinity.kalyanreddydaida.com spec: defaultBackend: service: name: cdn-demo-nodeport-service2 port: number: 80 ================================================ FILE: 51-GKE-Ingress-ClientIP-Affinity/02-kube-manifests-without-clientip-affinity/04-backendconfig.yaml ================================================ apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig2 spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 # sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged # and 1.0 means 100% of packets are logged. This field is only relevant if enable is set # to true. sampleRate is an optional field, but if it's configured then enable: true must # also be set or else it is interpreted as enable: false. 
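Before walking through the README below, note that the difference between the two projects can also be observed from the command line by counting how many distinct backend pods answer a burst of requests. This helper is a sketch, not part of the original demo; it assumes the hello-app-cdn response format (a line like `Hostname: <pod-name>`) and a resolvable DNS name for the Ingress.

```shell
#!/bin/sh
# Count distinct backend pods seen in hello-app responses read from stdin.
# Each response body contains a line like "Hostname: cdn-demo-deployment-xxxx-yyyy".
count_distinct_hosts() {
  grep -o 'Hostname: .*' | sort -u | wc -l | tr -d ' '
}

# Usage against the live Ingress (hypothetical; requires the demo to be deployed):
#   for i in $(seq 1 20); do
#     curl -s http://ingress-with-clientip-affinity.kalyanreddydaida.com
#   done | count_distinct_hosts
```

With CLIENT_IP affinity enabled you would expect the count to be 1; with affinity disabled, up to 4 (the replica count of the demo deployment).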
================================================
FILE: 51-GKE-Ingress-ClientIP-Affinity/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity
description: Implement GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl (replace CLUSTER-NAME, REGION, PROJECT)
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```
3. ExternalDNS Controller should be installed and ready to use
```t
# List Namespaces (external-dns-ns namespace should be present)
kubectl get ns

# List External DNS Pods
kubectl -n external-dns-ns get pods
```

## Step-01: Introduction
- Implement the following features for the Ingress Service
  - BackendConfig
  - CLIENT_IP Affinity for Ingress Service
- We are going to create two projects
  - **Project-01:** CLIENT_IP Affinity enabled
  - **Project-02:** CLIENT_IP Affinity disabled

## Step-02: Create External IP Address using gcloud
```t
# Create External IP Address 1 (IF NOT CREATED - ALREADY CREATED IN PREVIOUS SECTIONS)
gcloud compute addresses create ADDRESS_NAME --global
gcloud compute addresses create gke-ingress-extip1 --global

# Create External IP Address 2
gcloud compute addresses create ADDRESS_NAME --global
gcloud compute addresses create gke-ingress-extip2 --global

# Describe External IP Address to get its IP
gcloud compute addresses describe ADDRESS_NAME --global
gcloud compute addresses describe gke-ingress-extip2 --global

# Verify
Go to VPC Network -> IP Addresses -> External IP Address
```

## Step-03: Project-01: Review YAML Manifests
- **Project Folder:** 01-kube-manifests-with-clientip-affinity
- 01-kubernetes-deployment.yaml
- 02-kubernetes-NodePort-service.yaml
- 03-ingress.yaml
- 04-backendconfig.yaml
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 42        # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout
  connectionDraining:   # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout
    drainingTimeoutSec: 62
  logging:              # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging
    enable: true
    sampleRate: 1.0
  sessionAffinity:
    affinityType: "CLIENT_IP"
```

## Step-04: Project-02: Review YAML Manifests
- **Project Folder:** 02-kube-manifests-without-clientip-affinity
- 01-kubernetes-deployment.yaml
- 02-kubernetes-NodePort-service.yaml
- 03-ingress.yaml
- 04-backendconfig.yaml
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig2
spec:
  timeoutSec: 42        # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout
  connectionDraining:   # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout
    drainingTimeoutSec: 62
  logging:              # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging
    enable: true
    sampleRate: 1.0
```

## Step-05: Deploy Kubernetes Manifests
```t
# Project-01: Deploy Kubernetes Manifests
kubectl apply -f 01-kube-manifests-with-clientip-affinity

# Project-02: Deploy Kubernetes Manifests
kubectl apply -f 02-kube-manifests-without-clientip-affinity

# Verify Deployments
kubectl get deploy

# Verify Pods
kubectl get pods

# Verify Node Port Services
kubectl get svc

# Verify Ingress Services
kubectl get ingress

# Verify Backend Config
kubectl get backendconfig

# Project-01: Verify Load Balancer Settings
Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting
Observation: Client IP Affinity setting should be in enabled state

# Project-02: Verify Load Balancer Settings
Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting
Observation: Client IP Affinity setting should be in disabled state
```

## Step-06: Access Application
```t
# Project-01: Access Application using DNS or ExtIP
http://ingress-with-clientip-affinity.kalyanreddydaida.com
http://<EXTERNAL-IP-1>
curl ingress-with-clientip-affinity.kalyanreddydaida.com
Observation:
1. Requests will always go to the same pod, due to the CLIENT_IP affinity we configured

# Project-02: Access Application using DNS or ExtIP
http://ingress-without-clientip-affinity.kalyanreddydaida.com
http://<EXTERNAL-IP-2>
curl ingress-without-clientip-affinity.kalyanreddydaida.com
Observation:
1. Requests will be load balanced across the 4 pods created as part of the "cdn-demo2" deployment
```

## Step-07: How to remove a setting from FrontendConfig or BackendConfig
- To revoke an Ingress feature, you must explicitly disable the feature configuration in the FrontendConfig or BackendConfig CRD
- **Important Note:** To clear or disable a previously enabled configuration, set the field's value to an empty string ("") or to a Boolean value of false, depending on the field type.
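As an alternative to editing 04-backendconfig.yaml and re-applying it, the same field can be cleared in place with `kubectl patch`. This is a sketch, not part of the original demo; it assumes the BackendConfig name `my-backendconfig` used in Project-01.

```shell
#!/bin/sh
# JSON merge patch that clears the previously enabled CLIENT_IP affinity
# (an empty string disables the setting, per the note above).
PATCH='{"spec":{"sessionAffinity":{"affinityType":""}}}'

# Apply against the BackendConfig from Project-01 (requires a live cluster):
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch backendconfig my-backendconfig --type merge -p "$PATCH"
  # Confirm the change propagated to the object:
  kubectl get backendconfig my-backendconfig -o jsonpath='{.spec.sessionAffinity.affinityType}'
fi
```

Either way, the GCE Ingress controller reconciles the backend service, so the Cloud Console should show the affinity setting disabled shortly afterwards.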
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 42        # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout
  connectionDraining:   # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout
    drainingTimeoutSec: 62
  logging:              # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging
    enable: true
    sampleRate: 1.0
  sessionAffinity:
    #affinityType: "CLIENT_IP" # Disable at Step-07
    affinityType: ""           # Enable at Step-07
```

## Step-08: Apply Changes and Verify
```t
# Apply Changes
kubectl apply -f 01-kube-manifests-with-clientip-affinity

# Verify Load Balancer
Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting
Observation: Should be disabled
```

## Step-09: Deleting a FrontendConfig or BackendConfig
- [Deleting a FrontendConfig or BackendConfig](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#deleting_a_frontendconfig_or_backendconfig)

## Step-10: Clean-Up
```t
# Project-01: Delete Kubernetes Resources
kubectl delete -f 01-kube-manifests-with-clientip-affinity

# Project-02: Delete Kubernetes Resources
kubectl delete -f 02-kube-manifests-without-clientip-affinity
```

## Step-11: Rollback 04-backendconfig.yaml
- Put back `affinityType: "CLIENT_IP"` so the manifests are ready for the next student demo.
```yaml apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 sessionAffinity: affinityType: "CLIENT_IP" # Disable at Step-07 #affinityType: "" # Enable at Step-07 ``` ## References - [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features) ================================================ FILE: 52-GKE-Ingress-Cookie-Affinity/01-kube-manifests-with-cookie-affinity/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: cdn-demo-deployment spec: replicas: 4 selector: matchLabels: app: cdn-demo template: metadata: labels: app: cdn-demo spec: containers: - name: cdn-demo image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0 ports: - containerPort: 8080 ================================================ FILE: 52-GKE-Ingress-Cookie-Affinity/01-kube-manifests-with-cookie-affinity/02-kubernetes-NodePort-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: cdn-demo-nodeport-service annotations: #cloud.google.com/backend-config: '{"ports": {"80":"my-backendconfig"}}' cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort selector: app: cdn-demo ports: - port: 80 targetPort: 8080 ================================================ FILE: 52-GKE-Ingress-Cookie-Affinity/01-kube-manifests-with-cookie-affinity/03-ingress.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: 
Ingress metadata: name: ingress-cdn-demo annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: ingress-with-cookie-affinity.kalyanreddydaida.com spec: defaultBackend: service: name: cdn-demo-nodeport-service port: number: 80 ================================================ FILE: 52-GKE-Ingress-Cookie-Affinity/01-kube-manifests-with-cookie-affinity/04-backendconfig.yaml ================================================ apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 sessionAffinity: affinityType: "GENERATED_COOKIE" affinityCookieTtlSec: 50 # TTL of 50 seconds # sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged # and 1.0 means 100% of packets are logged. This field is only relevant if enable is set # to true. sampleRate is an optional field, but if it's configured then enable: true must # also be set or else it is interpreted as enable: false. 
================================================ FILE: 52-GKE-Ingress-Cookie-Affinity/02-kube-manifests-without-cookie-affinity/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: cdn-demo-deployment2 spec: replicas: 4 selector: matchLabels: app: cdn-demo2 template: metadata: labels: app: cdn-demo2 spec: containers: - name: cdn-demo2 image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0 ports: - containerPort: 8080 ================================================ FILE: 52-GKE-Ingress-Cookie-Affinity/02-kube-manifests-without-cookie-affinity/02-kubernetes-NodePort-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: cdn-demo-nodeport-service2 annotations: #cloud.google.com/backend-config: '{"ports": {"80":"my-backendconfig2"}}' cloud.google.com/backend-config: '{"default": "my-backendconfig2"}' spec: type: NodePort selector: app: cdn-demo2 ports: - port: 80 targetPort: 8080 ================================================ FILE: 52-GKE-Ingress-Cookie-Affinity/02-kube-manifests-without-cookie-affinity/03-ingress.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-cdn-demo2 annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip2" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: ingress-without-cookie-affinity.kalyanreddydaida.com spec: defaultBackend: service: name: cdn-demo-nodeport-service2 port: number: 80 ================================================ FILE: 52-GKE-Ingress-Cookie-Affinity/02-kube-manifests-without-cookie-affinity/04-backendconfig.yaml ================================================ apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: 
my-backendconfig2 spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 # sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged # and 1.0 means 100% of packets are logged. This field is only relevant if enable is set # to true. sampleRate is an optional field, but if it's configured then enable: true must # also be set or else it is interpreted as enable: false. ================================================ FILE: 52-GKE-Ingress-Cookie-Affinity/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Ingress Cookie Affinity description: Implement GCP Google Kubernetes Engine GKE Ingress Cookie Affinity --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` 3. 
ExternalDNS Controller should be installed and ready to use
```t
# List Namespaces (external-dns-ns namespace should be present)
kubectl get ns

# List External DNS Pods
kubectl -n external-dns-ns get pods
```

## Step-01: Introduction
- Implement the following features for the Ingress Service
  - BackendConfig
  - GENERATED_COOKIE Affinity for the Ingress Service
- We are going to create two projects
  - **Project-01:** GENERATED_COOKIE Affinity enabled
  - **Project-02:** GENERATED_COOKIE Affinity disabled

## Step-02: Create External IP Address using gcloud
```t
# Create External IP Address 1 (IF NOT CREATED - ALREADY CREATED IN PREVIOUS SECTIONS)
gcloud compute addresses create ADDRESS_NAME --global
gcloud compute addresses create gke-ingress-extip1 --global

# Create External IP Address 2
gcloud compute addresses create ADDRESS_NAME --global
gcloud compute addresses create gke-ingress-extip2 --global

# Describe External IP Address to get its IP address
gcloud compute addresses describe ADDRESS_NAME --global
gcloud compute addresses describe gke-ingress-extip2 --global

# Verify: Go to VPC Network -> IP Addresses -> External IP Address
```

## Step-03: Project-01: Review YAML Manifests
- **Project Folder:** 01-kube-manifests-with-cookie-affinity
- 01-kubernetes-deployment.yaml
- 02-kubernetes-NodePort-service.yaml
- 03-ingress.yaml
- 04-backendconfig.yaml
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout
  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout
    drainingTimeoutSec: 62
  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging
    enable: true
    sampleRate: 1.0
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
```

## Step-04: Project-02: Review YAML Manifests
- **Project Folder:** 02-kube-manifests-without-cookie-affinity
- 01-kubernetes-deployment.yaml
- 02-kubernetes-NodePort-service.yaml
- 03-ingress.yaml
- 04-backendconfig.yaml
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig2
spec:
  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout
  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout
    drainingTimeoutSec: 62
  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging
    enable: true
    sampleRate: 1.0
```

## Step-05: Deploy Kubernetes Manifests
```t
# Project-01: Deploy Kubernetes Manifests
kubectl apply -f 01-kube-manifests-with-cookie-affinity

# Project-02: Deploy Kubernetes Manifests
kubectl apply -f 02-kube-manifests-without-cookie-affinity

# Verify Deployments
kubectl get deploy

# Verify Pods
kubectl get pods

# Verify NodePort Services
kubectl get svc

# Verify Ingress Services
kubectl get ingress

# Verify Backend Config
kubectl get backendconfig

# Project-01: Verify Load Balancer Settings
Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Cookie Affinity Setting
Observation: The Cookie Affinity setting should be enabled

# Project-02: Verify Load Balancer Settings
Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Cookie Affinity Setting
Observation: The Cookie Affinity setting should be disabled
```

## Step-06: Access Application
```t
# Project-01: Access Application using DNS or External IP
http://ingress-with-cookie-affinity.kalyanreddydaida.com
http://
Observation:
1. Requests always go to the same pod, due to the GENERATED_COOKIE Affinity we configured

# Project-02: Access Application using DNS or External IP
http://ingress-without-cookie-affinity.kalyanreddydaida.com
http://
Observation:
1. Requests are load balanced across the 4 pods created as part of the "cdn-demo2" deployment.
```

## Step-07: Clean-Up
```t
# Project-01: Delete Kubernetes Resources
kubectl delete -f 01-kube-manifests-with-cookie-affinity

# Project-02: Delete Kubernetes Resources
kubectl delete -f 02-kube-manifests-without-cookie-affinity
```

## References
- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)

================================================ FILE: 53-GKE-Ingress-HealthCheck-with-backendConfig/README.md ================================================
---
title: GCP Google Kubernetes Engine GKE Ingress Custom Health Checks
description: Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials --region --project

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Implement Ingress Custom Health Checks
- Comment out the `Readiness Probe` in the Kubernetes Deployment.
- Add Custom Health Checks in `kind: BackendConfig` Kubernetes Resource ## Step-02: 01-kubernetes-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) #readinessProbe: # httpGet: # scheme: HTTP # path: /index.html # port: 80 # initialDelaySeconds: 10 # periodSeconds: 15 # timeoutSeconds: 5 ``` ## Step-03: 02-kubernetes-NodePort-service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service labels: app: app3-nginx annotations: #cloud.google.com/backend-config: '{"ports": {"80":"my-backendconfig"}}' cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ``` ## Step-04: 03-ingress.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-custom-healthcheck annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 ``` ## Step-05: 04-backendconfig.yaml ```yaml apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 healthCheck: checkIntervalSec: 5 # Default is 5 seconds timeoutSec: 5 # The value of timeoutSec 
must be less than or equal to the checkIntervalSec healthyThreshold: 2 # Default value 2 unhealthyThreshold: 2 # Default value 2 type: HTTP # The BackendConfig only supports creating health checks using the HTTP, HTTPS, or HTTP2 protocols requestPath: /index.html port: 80 ``` ## Step-06: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get po # List Services kubectl get svc # List Backend Configs kubectl get backendconfig # List Ingress Service kubectl get ingress ``` ## Step-07: Verify Load Balancer Details - Go to Network Services -> Loadbalancing -> Load Balancer - Backends -> Backend -> Click on **Health check related link** - Verify health check details ## Step-08: Access Application ```t # List Ingress Service kubectl get ingress # Access Application http:// ``` ## Step-09: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests ``` ## References - [Ingress Health Checks](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks) - [Custom Health Check Configuration](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health) ================================================ FILE: 53-GKE-Ingress-HealthCheck-with-backendConfig/kube-manifests/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) #readinessProbe: # httpGet: # scheme: HTTP # path: /index.html # port: 80 # initialDelaySeconds: 10 # periodSeconds: 15 # timeoutSeconds: 5 
================================================ FILE: 53-GKE-Ingress-HealthCheck-with-backendConfig/kube-manifests/02-kubernetes-NodePort-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service labels: app: app3-nginx annotations: #cloud.google.com/backend-config: '{"ports": {"80":"my-backendconfig"}}' cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 53-GKE-Ingress-HealthCheck-with-backendConfig/kube-manifests/03-ingress.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-custom-healthcheck annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 ================================================ FILE: 53-GKE-Ingress-HealthCheck-with-backendConfig/kube-manifests/04-backendconfig.yaml ================================================ apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 healthCheck: checkIntervalSec: 5 # Default is 5 seconds timeoutSec: 5 # The value of timeoutSec must be less than or equal to the checkIntervalSec healthyThreshold: 2 # Default value 2 unhealthyThreshold: 2 # Default value 2 type: HTTP # The BackendConfig only supports creating health checks using the HTTP, HTTPS, or HTTP2 
protocols requestPath: /index.html port: 80 # sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged # and 1.0 means 100% of packets are logged. This field is only relevant if enable is set # to true. sampleRate is an optional field, but if it's configured then enable: true must # also be set or else it is interpreted as enable: false. ================================================ FILE: 54-GKE-Ingress-InternalLB/01-kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app1-nginx-deployment labels: app: app1-nginx spec: replicas: 1 selector: matchLabels: app: app1-nginx template: metadata: labels: app: app1-nginx spec: containers: - name: app1-nginx image: stacksimplify/kube-nginxapp1:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app1/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app1-nginx-nodeport-service labels: app: app1-nginx annotations: spec: type: NodePort selector: app: app1-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 54-GKE-Ingress-InternalLB/01-kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app2-nginx-deployment labels: app: app2-nginx spec: replicas: 1 selector: matchLabels: app: app2-nginx template: metadata: labels: app: app2-nginx spec: containers: - name: app2-nginx image: stacksimplify/kube-nginxapp2:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app2/index.html port: 80 initialDelaySeconds: 10 
periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app2-nginx-nodeport-service labels: app: app2-nginx annotations: spec: type: NodePort selector: app: app2-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 54-GKE-Ingress-InternalLB/01-kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service labels: app: app3-nginx annotations: spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ================================================ FILE: 54-GKE-Ingress-InternalLB/01-kube-manifests/04-Ingress-internal-lb.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-internal-lb annotations: # If the class annotation is not specified it defaults to "gce". 
# gce: external load balancer # gce-internal: internal load balancer # Internal Load Balancer kubernetes.io/ingress.class: "gce-internal" spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - http: paths: - path: /app1 pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - path: /app2 pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ================================================ FILE: 54-GKE-Ingress-InternalLB/02-kube-manifests-curl/01-curl-pod.yml ================================================ apiVersion: v1 kind: Pod metadata: name: curl-pod spec: containers: - name: curl image: curlimages/curl command: [ "sleep", "600" ] ================================================ FILE: 54-GKE-Ingress-InternalLB/README.md ================================================ --- title: GCP Google Kubernetes Engine Ingress Internal Load Balancer description: Implement GCP Google Kubernetes Engine GKE Internal Load Balancer with Ingress --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` ## Step-01: Introduction - Ingress Internal Load Balancer ## Step-02: Review Kubernetes Deployment manifests - 01-Nginx-App1-Deployment-and-NodePortService.yaml - 02-Nginx-App2-Deployment-and-NodePortService.yaml - 03-Nginx-App3-Deployment-and-NodePortService.yaml ## Step-03: 04-Ingress-internal-lb.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-internal-lb annotations: # If the class annotation is not specified it defaults to "gce". 
    # gce: external load balancer
    # gce-internal: internal load balancer
    # Internal Load Balancer
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-nginx-nodeport-service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-nginx-nodeport-service
            port:
              number: 80
```

## Step-04: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 01-kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get po

# List Services
kubectl get svc

# List Backend Configs
kubectl get backendconfig

# List Ingress Service
kubectl get ingress

# Describe Ingress Service
kubectl describe ingress ingress-internal-lb

# Verify Load Balancer
Go to Network Services -> Load Balancing -> Load Balancer
```

## Step-05: Review Curl Kubernetes Manifests
- **Project Folder:** 02-kube-manifests-curl
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
spec:
  containers:
  - name: curl
    image: curlimages/curl
    command: [ "sleep", "600" ]
```

## Step-06: Deploy curl-pod and Verify Internal LB
```t
# Deploy curl-pod
kubectl apply -f 02-kube-manifests-curl

# Open a terminal session into the container
kubectl exec -it curl-pod -- sh

# App1 Curl Test
curl http:///app1/index.html

# App2 Curl Test
curl http:///app2/index.html

# App3 Curl Test
curl http://
```

## Step-07: Clean-Up
```t
# Delete Kubernetes Manifests
kubectl delete -f 01-kube-manifests
kubectl delete -f 02-kube-manifests-curl
```

## References
- [Ingress for Internal HTTP(S) Load Balancing](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-ilb)

================================================ FILE: 55-GKE-Ingress-Cloud-Armor/README.md ================================================
---
title: GCP Google Kubernetes Engine GKE Ingress with Cloud Armor
description: Implement GCP Google Kubernetes Engine GKE
Ingress with Cloud Armor --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` 3. Registered Domain using Google Cloud Domains 4. External DNS Controller installed and ready to use ```t # List External DNS Pods kubectl -n external-dns-ns get pods ``` 5. Verify if External IP Address is created ```t # List External IP Address gcloud compute addresses list # Describe External IP Address gcloud compute addresses describe gke-ingress-extip1 --global ``` ## Step-01: Introduction - Ingress Service with Cloud Armor ## Step-02: Create Cloud Armor Policy - Go to Network Security -> Cloud Armor -> CREATE POLICY ### Configure Policy - **Name:** cloud-armor-policy-1 - **Description:** Cloud Armor Demo with GKE Ingress - **Policy type:** Backend security policy - **Default rule action:** Deny - **Deny Status:** 403(Forbidden) - Click on **NEXT STEP** ### Add More Rules (Optional) - Leave to default - NO NEW RULES OTHER THAN EXISTING DEFAULT RULE - ALL IP ADDRESS -> DENY -> With 403 ERROR -> Priority 2,147,483,647 - Click on **NEXT STEP** ### Add Policy to Targets (Optional) - Leave to default - Click on **NEXT STEP** ### Advanced configurations (Adaptive Protection) (optional) - Click on **Enable** checkbox - Click on **DONE** - Click on **CREATE POLICY** ## Step-03: 01-kubernetes-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: cloud-armor-demo-deployment spec: replicas: 2 selector: matchLabels: app: cloud-armor-demo template: metadata: labels: app: cloud-armor-demo spec: containers: - name: cloud-armor-demo image: stacksimplify/kubenginx:1.0.0 ports: - 
containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 ``` ## Step-04: 02-kubernetes-NodePort-service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: cloud-armor-demo-nodeport-service annotations: cloud.google.com/backend-config: '{"ports": {"80":"my-backendconfig"}}' spec: type: NodePort selector: app: cloud-armor-demo ports: - port: 80 targetPort: 80 ``` ## Step-05: 03-ingress.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-cloud-armor-demo annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: cloudarmor-ingress.kalyanreddydaida.com spec: defaultBackend: service: name: cloud-armor-demo-nodeport-service port: number: 80 ``` ## Step-06: 04-backendconfig.yaml ```yaml apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 securityPolicy: name: "cloud-armor-policy-1" ``` ## Step-07: Deploy Kubernetes Manifests and Verify ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get po # List Services kubectl get svc # List Ingress Services kubectl get ingress # List Backendconfig kubectl get 
backendconfig

# Access Application
http://
http://cloudarmor-ingress.kalyanreddydaida.com
Observation:
1. We should get a 403 Forbidden error.
2. This is expected, because we configured a Cloud Armor policy that blocks all IP addresses with a 403 error
```

## Step-08: Make a note of the Public IP of your Internet Connection
- Go to [www.whatismyip.com](https://www.whatismyip.com/) and make a note of your local desktop's public IP
- If you are behind a company or organization proxy, this may not reflect the correct egress IP.
- I am using my home Internet connection

## Step-09: Add new rule in Cloud Armor Policy
- Go to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> RULES -> ADD RULE
- **Description:** Allow-from-my-desktop
- **Mode:** Basic mode (IP addresses / ranges only)
- **Match:** 49.206.52.84 (my Internet connection's public IP)
- **Action:** Allow
- **Priority:** 1
- Click on **ADD**
- WAIT FOR 5 MINUTES for the new policy to go live

## Step-10: Access Application
```t
# Access Application from local desktop
http://
http://cloudarmor-ingress.kalyanreddydaida.com
curl http://cloudarmor-ingress.kalyanreddydaida.com
Observation:
1.
Application access should be successful ``` ## Step-11: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests # Delete Cloud Armor Policy Go to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> DELETE ``` ## References - https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#cloud_armor - https://cloud.google.com/armor/docs/security-policy-overview - https://cloud.google.com/armor/docs/integrating-cloud-armor - https://cloud.google.com/armor/docs/configure-security-policies ================================================ FILE: 55-GKE-Ingress-Cloud-Armor/kube-manifests/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: cloud-armor-demo-deployment spec: replicas: 2 selector: matchLabels: app: cloud-armor-demo template: metadata: labels: app: cloud-armor-demo spec: containers: - name: cloud-armor-demo image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 ================================================ FILE: 55-GKE-Ingress-Cloud-Armor/kube-manifests/02-kubernetes-NodePort-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: cloud-armor-demo-nodeport-service annotations: #cloud.google.com/backend-config: '{"ports": {"80":"my-backendconfig"}}' cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort selector: app: cloud-armor-demo ports: - port: 80 targetPort: 80 ================================================ FILE: 55-GKE-Ingress-Cloud-Armor/kube-manifests/03-ingress.yaml ================================================ apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-cloud-armor-demo 
annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: cloudarmor-ingress.kalyanreddydaida.com spec: defaultBackend: service: name: cloud-armor-demo-nodeport-service port: number: 80 ================================================ FILE: 55-GKE-Ingress-Cloud-Armor/kube-manifests/04-backendconfig.yaml ================================================ apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 securityPolicy: name: "cloud-armor-policy-1" # sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged # and 1.0 means 100% of packets are logged. This field is only relevant if enable is set # to true. sampleRate is an optional field, but if it's configured then enable: true must # also be set or else it is interpreted as enable: false. ================================================ FILE: 56-GKE-Artifact-Registry/01-Docker-Image/Dockerfile ================================================ FROM nginx COPY index.html /usr/share/nginx/html ================================================ FILE: 56-GKE-Artifact-Registry/01-Docker-Image/index.html ================================================

Welcome to StackSimplify

Google Kubernetes Engine

Application Version: V1

================================================ FILE: 56-GKE-Artifact-Registry/02-kube-manifests/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: myapp1-deployment spec: replicas: 2 selector: matchLabels: app: myapp1 template: metadata: name: myapp1-pod labels: app: myapp1 spec: containers: - name: myapp1-container #image: us-central1-docker.pkg.dev///myapp1:v1 image: us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1 ports: - containerPort: 80 ================================================ FILE: 56-GKE-Artifact-Registry/02-kube-manifests/02-kubernetes-loadBalancer-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 56-GKE-Artifact-Registry/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE Artifact Registry description: Implement GCP Google Kubernetes Engine GKE Artifact Registry --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal. ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` ## Step-01: Introduction - Build a Docker Image - Create a Docker repository in Google Artifact Registry. - Set up authentication. - Push an image to the repository. 
- Pull the image from the repository and create a Deployment in the GKE Cluster
- Access the sample application in a browser and verify

## Step-02: Create Dockerfile
- **Dockerfile**
```t
FROM nginx
COPY index.html /usr/share/nginx/html
```

## Step-03: Build Docker Image
```t
# Change Directory
cd google-kubernetes-engine/56-GKE-Artifact-Registry/
cd 01-Docker-Image

# Build Docker Image
docker build -t myapp1:v1 .

# List Docker Image
docker images myapp1
```

## Step-04: Run Docker Image
```t
# Run Docker Image
docker run --name myapp1 -p 80:80 -d myapp1:v1

# Access in browser
http://localhost

# List Running Docker Containers
docker ps

# Stop Docker Container
docker stop myapp1

# List All Docker Containers (including stopped containers)
docker ps -a

# Delete Stopped Container
docker rm myapp1

# List All Docker Containers (including stopped containers)
docker ps -a
```

## Step-05: Create Google Artifact Registry
- Go to Artifact Registry -> Repositories -> Create
```t
# Create Google Artifact Registry
Name: gke-artifact-repo1
Format: Docker
Region: us-central1
Encryption: Google-managed encryption key
Click on Create
```

## Step-06: Configure Google Artifact Registry authentication
```t
# Google Artifact Registry authentication
## To set up authentication to Docker repositories in the region us-central1
gcloud auth configure-docker LOCATION-docker.pkg.dev
gcloud auth configure-docker us-central1-docker.pkg.dev
```

## Step-07: Tag & push the Docker image to Google Artifact Registry
```t
# Tag the Docker Image
docker tag myapp1:v1 LOCATION-docker.pkg.dev/GOOGLE-PROJECT-ID/GOOGLE-ARTIFACT-REGISTRY-NAME/IMAGE-NAME:IMAGE-TAG

# Replace Values for docker tag command
# - LOCATION
# - GOOGLE-PROJECT-ID
# - GOOGLE-ARTIFACT-REGISTRY-NAME
# - IMAGE-NAME
# - IMAGE-TAG
docker tag myapp1:v1 us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1

# Push the Docker Image to Google Artifact Registry
docker push us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1
```

## Step-08: Verify the Docker Image on Google Artifact Registry
- Go to Google Artifact Registry -> Repositories -> gke-artifact-repo1
- Review the **myapp1** Docker Image

## Step-09: Update Docker Image and Review kube-manifests
- **Project-Folder:** 02-kube-manifests
```yaml
# Docker Image
image: us-central1-docker.pkg.dev///myapp1:v1

# Update Docker Image in 01-kubernetes-deployment.yaml
image: us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1
```

## Step-10: Deploy kube-manifests
```t
# Deploy kube-manifests
kubectl apply -f 02-kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# Describe Pod
kubectl describe pod

## Observation - Verify Events of "kubectl describe pod"
### We should see the image pulled from "us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1"
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  86s   default-scheduler  Successfully assigned default/myapp1-deployment-5f8d5c6f48-pb686 to gke-standard-cluster-1-default-pool-2c852f67-46hv
  Normal  Pulling    85s   kubelet            Pulling image "us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1"
  Normal  Pulled     81s   kubelet            Successfully pulled image "us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1" in 4.285567138s
  Normal  Created    81s   kubelet            Created container myapp1-container
  Normal  Started    80s   kubelet            Started container myapp1-container

# List Services
kubectl get svc

# Access Application
http://
```

## Step-11: Clean-Up
```t
# Undeploy sample App
kubectl delete -f 02-kube-manifests
```

## References
- [Google Artifact Registry](https://cloud.google.com/artifact-registry/docs/overview)

================================================ FILE: 57-GKE-Continuous-Integration/01-SSH-Keys/id_gcp_cloud_source ================================================
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACBMtrPvYdsBE/05pRtCNK6HfcySVB3HsupIh6b1TwgctgAAAKAPWMfAD1jH
wAAAAAtzc2gtZWQyNTUxOQAAACBMtrPvYdsBE/05pRtCNK6HfcySVB3HsupIh6b1Twgctg AAAEAbFRNQOxoqetg+QLLX4IAWQEMlwFjk0Al4395DbuHdQky2s+9h2wET/TmlG0I0rod9 zJJUHcey6kiHpvVPCBy2AAAAFmRrYWx5YW5yZWRkeUBnbWFpbC5jb20BAgMEBQYH -----END OPENSSH PRIVATE KEY----- ================================================ FILE: 57-GKE-Continuous-Integration/01-SSH-Keys/id_gcp_cloud_source.pub ================================================ ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEy2s+9h2wET/TmlG0I0rod9zJJUHcey6kiHpvVPCBy2 dkalyanreddy@gmail.com ================================================ FILE: 57-GKE-Continuous-Integration/02-Docker-Image/Dockerfile ================================================ FROM nginx COPY index.html /usr/share/nginx/html ================================================ FILE: 57-GKE-Continuous-Integration/02-Docker-Image/index.html ================================================

Welcome to StackSimplify

Google Kubernetes Engine

Application Version: V1

================================================ FILE: 57-GKE-Continuous-Integration/03-cloudbuild-yaml/cloudbuild.yaml ================================================ steps: # This step builds the container image. - name: 'gcr.io/cloud-builders/docker' id: Build args: - 'build' - '-t' - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA' - '.' # This step pushes the image to Artifact Registry # The PROJECT_ID and SHORT_SHA variables are automatically # replaced by Cloud Build. - name: 'gcr.io/cloud-builders/docker' id: Push args: - 'push' - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA' ================================================ FILE: 57-GKE-Continuous-Integration/04-kube-manifests/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 1 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container #image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:d1c3b88 image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:3d5c45b ports: - containerPort: 80 ================================================ FILE: 57-GKE-Continuous-Integration/04-kube-manifests/02-kubernetes-loadBalancer-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 57-GKE-Continuous-Integration/README.md ================================================ --- title: GCP Google Kubernetes Engine GKE CI description: Implement GCP Google Kubernetes Engine GKE Continuous Integration --- ## 
## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials CLUSTER-NAME --region REGION --project PROJECT

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Implement Continuous Integration for GKE Workloads using
  - Google Cloud Source
  - Google Cloud Build
  - Google Artifact Registry

## Step-02: Enable APIs in Google Cloud
```t
# Enable APIs in Google Cloud
gcloud services enable container.googleapis.com \
  cloudbuild.googleapis.com \
  sourcerepo.googleapis.com \
  artifactregistry.googleapis.com

# Google Cloud Services
GKE: container.googleapis.com
Cloud Build: cloudbuild.googleapis.com
Cloud Source: sourcerepo.googleapis.com
Artifact Registry: artifactregistry.googleapis.com
```

## Step-03: Create Artifact Repository
```t
# List Artifact Repositories
gcloud artifacts repositories list

# Create Artifact Repository
gcloud artifacts repositories create myapps-repository \
  --repository-format=docker \
  --location=us-central1

# List Artifact Repositories
gcloud artifacts repositories list

# Describe Artifact Repository
gcloud artifacts repositories describe myapps-repository --location=us-central1
```

## Step-04: Install Git client on local desktop (if not present)
```t
# Download and Install Git Client
https://git-scm.com/downloads
```

## Step-05: Create SSH Keys for Git Repo Access
- [Generating SSH Key Pair](https://cloud.google.com/source-repositories/docs/authentication#generate_a_key_pair)
```t
# Change Directory
cd 01-SSH-Keys

# Create SSH Keys
ssh-keygen -t KEY_TYPE -C "USER_EMAIL"
KEY_TYPE: rsa, ecdsa, ed25519
USER_EMAIL: dkalyanreddy@gmail.com

# Replace Values KEY_TYPE, USER_EMAIL
ssh-keygen -t ed25519 -C "dkalyanreddy@gmail.com"
Provide the File Name as "id_gcp_cloud_source"

## Sample Output
Kalyans-Mac-mini:01-SSH-Keys kalyanreddy$ ssh-keygen -t ed25519 -C "dkalyanreddy@gmail.com"
Generating public/private ed25519 key pair.
Enter file in which to save the key (/Users/kalyanreddy/.ssh/id_ed25519): id_gcp_cloud_source
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in id_gcp_cloud_source
Your public key has been saved in id_gcp_cloud_source.pub
The key fingerprint is:
SHA256:YialyCj3XaSa4b8ewk4bcK1hXxO7DDM5uiCP1J2TOZ0 dkalyanreddy@gmail.com
The key's randomart image is:
+--[ED25519 256]--+
|                 |
|                 |
| . o             |
| o . + + o       |
|o = B % S        |
|...B.&=X.o       |
|....%B+Eo        |
|.+ + *o.         |
|. . +.+.         |
+----[SHA256]-----+
Kalyans-Mac-mini:01-SSH-Keys kalyanreddy$ ls -lrta
total 16
drwxr-xr-x  6 kalyanreddy  staff  192 Jun 29 09:45 ..
-rw-------  1 kalyanreddy  staff  419 Jun 29 09:46 id_gcp_cloud_source
drwxr-xr-x  4 kalyanreddy  staff  128 Jun 29 09:46 .
-rw-r--r--  1 kalyanreddy  staff  104 Jun 29 09:46 id_gcp_cloud_source.pub
Kalyans-Mac-mini:01-SSH-Keys kalyanreddy$
```

## Step-06: Review SSH Keys (Public and Private Keys)
```t
# Change Directory
cd 01-SSH-Keys

# Review Private Key: id_gcp_cloud_source
cat id_gcp_cloud_source

# Review Public Key: id_gcp_cloud_source.pub
cat id_gcp_cloud_source.pub
```

## Step-07: Update SSH Public Key in Google Cloud Source
- Go to -> Source Repositories -> 3 Dots -> Manage SSH Keys -> Register SSH Key
- [Google Cloud Source URL](https://source.cloud.google.com/)
```t
# Key Name
Name: gke-course
Key: Output from command "cat id_gcp_cloud_source.pub" in previous step. Put content from Public Key
```
- Click on **Register**

## Step-08: Update SSH Private Key in Git Config
- Update SSH Private Key in your local desktop Git Config
```t
# Copy SSH Private Key to the ".ssh" folder in your Home Directory
cd 01-SSH-Keys
cp id_gcp_cloud_source $HOME/.ssh

# Change Directory (your local desktop home directory)
cd $HOME/.ssh

# Verify File in "$HOME/.ssh"
ls -lrta id_gcp_cloud_source

# Verify existing git "config" file
cat config

# Backup any existing "config" file
cp config config_bkup_before_cloud_source

# Update "config" file to point to "id_gcp_cloud_source" private key
vi config

## Sample Output after changes
Kalyans-Mac-mini:.ssh kalyanreddy$ cat config
Host *
  AddKeysToAgent yes
  UseKeychain yes
  IdentityFile ~/.ssh/id_gcp_cloud_source
Kalyans-Mac-mini:.ssh kalyanreddy$

# Backup config with cloud source key
cp config config_with_cloud_source_key
```

## Step-09: Update Git Global Config in your local desktop
```t
# List Global Git Config
git config --list

# Update Global Git Config
git config --global user.email "YOUR_EMAIL_ADDRESS"
git config --global user.name "YOUR_NAME"

# Replace YOUR_EMAIL_ADDRESS, YOUR_NAME
git config --global user.name "Kalyan Reddy Daida"
git config --global user.email "dkalyanreddy@gmail.com"

# List Global Git Config
git config --list
```

## Step-10: Create Git repositories in Cloud Source
```t
# List Cloud Source Repositories
gcloud source repos list

# Create Git repositories in Cloud Source
gcloud source repos create myapp1-app-repo

# List Cloud Source Repositories
gcloud source repos list

# Verify using Cloud Console
Search for -> Source Repositories
https://source.cloud.google.com/repos
```

## Step-11: Clone Cloud Source Git Repository, Commit a Change, Push to Remote Repo and Verify
```t
# Change Directory
cd course-repos

# Verify using Cloud Console
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo -> SSH Authentication

# Copy the git clone command and run it
git clone ssh://dkalyanreddy@gmail.com@source.developers.google.com:2022/p/kdaida123/r/myapp1-app-repo

# Change Directory
cd myapp1-app-repo

# Create a simple readme file
touch README.md
echo "# GKE CI Demo" > README.md
ls -lrta

# Add Files and do local commit
git add .
git commit -am "First Commit"

# Push file to Cloud Source Git Repo (Remote Repo)
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-12: Review Files in 02-Docker-Image folder
1. Dockerfile
2. index.html

## Step-13: Copy files from 02-Docker-Image folder to Git Repo
```t
# Change Directory
cd 57-GKE-Continuous-Integration/02-Docker-Image

# Copy Files to Git repo "myapp1-app-repo"
1. Dockerfile
2. index.html

# Local Git Commit and Push to Remote Repo
git add .
git commit -am "Second Commit"
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-14: Create a container image with Cloud Build and store it in Artifact Registry using the gcloud builds command
```t
# Change Directory (Git App Repo: myapp1-app-repo)
cd myapp1-app-repo

# Get latest git commit id (current branch)
git rev-parse HEAD

# Get first 7 chars of latest git commit id (current branch)
git rev-parse --short=7 HEAD

# Ensure you are in the local git repo folder where "Dockerfile, index.html" are present
cd myapp1-app-repo

# Create a Cloud Build build based on the latest commit
gcloud builds submit --tag="us-central1-docker.pkg.dev/${PROJECT_ID}/${APP_ARTIFACT_REPO}/myapp1:${COMMIT_ID}" .

# Replace Values ${PROJECT_ID}, ${APP_ARTIFACT_REPO}, ${COMMIT_ID}
gcloud builds submit --tag="us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:6f7d338" .
```

## Step-15: Review Cloud Build YAML file
```yaml
steps:
# This step builds the container image.
- name: 'gcr.io/cloud-builders/docker'
  id: Build
  args:
  - 'build'
  - '-t'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'
  - '.'
# This step pushes the image to Artifact Registry
# The PROJECT_ID and SHORT_SHA variables are automatically
# replaced by Cloud Build.
- name: 'gcr.io/cloud-builders/docker'
  id: Push
  args:
  - 'push'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'
```

## Step-16: Copy cloudbuild.yaml to Git Repo
```t
# Change Directory
cd 57-GKE-Continuous-Integration/03-cloudbuild-yaml

# Copy Files to Git repo
1. cloudbuild.yaml

# Local Git Commit and Push to Remote Repo
git add .
git commit -am "Third Commit"
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-17: Create Continuous Integration Pipeline in Cloud Build
- Go to Cloud Build -> Dashboard -> Region: us-central1 -> Click on **SET UP BUILD TRIGGERS** [OR]
- Go to Cloud Build -> TRIGGERS -> Click on **CREATE TRIGGER**
- **Name:** myapp1-ci
- **Region:** us-central1
- **Description:** myapp1 Continuous Integration Pipeline
- **Tags:** environment=dev
- **Event:** Push to a branch
- **Source:** myapp1-app-repo
- **Branch:** main (Auto-populated)
- **Configuration:** Cloud Build configuration file (yaml or json)
- **Location:** Repository
- **Cloud Build Configuration file location:** /cloudbuild.yaml
- **Approval:** leave unchecked
- **Service account:** leave to default
- Click on **CREATE**

## Step-18: Make a simple change in "index.html" and push the changes to Git Repo
```t
# Change Directory
cd myapp1-app-repo

# Update file index.html (change V1 to V2)

Application Version: V2
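# (Aside - an optional, hedged sketch, not a required tutorial step.)
# The V1 -> V2 edit can also be made non-interactively with sed; shown here
# on a throwaway one-line copy so it does not touch your repo:
workdir=$(mktemp -d)
echo "Application Version: V1" > "$workdir/index.html"
sed -i.bak 's/Application Version: V1/Application Version: V2/' "$workdir/index.html"
cat "$workdir/index.html"
# -> Application Version: V2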

# Local Git Commit and Push to Remote Repo
git status
git add .
git commit -am "V2 Commit"
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-19: Verify Cloud Build CI Pipeline
```t
# Verify Cloud Build
1. Go to Cloud Build -> Dashboard or go directly to Cloud Build -> History
2. Click on Build History -> View All
3. Verify "BUILD LOG"
4. Verify "EXECUTION DETAILS"
5. Verify "VIEW RAW"

# Verify Artifact Registry
1. Go to Artifact Registry -> myapps-repository -> myapp1
2. You should find the docker image pushed to Artifact Registry
```

## Step-20: Review Kubernetes Manifests
- **Project Folder:** 04-kube-manifests
- 01-kubernetes-deployment.yaml
- 02-kubernetes-loadBalancer-service.yaml

## Step-21: Update Container Image to the V1 Docker Image we built
```yaml
# 01-kubernetes-deployment.yaml: Update "image"
    spec:
      containers: # List
        - name: myapp1-container
          image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:d1c3b88
          ports:
            - containerPort: 80
```

## Step-22: Deploy Kubernetes Manifests and Verify
```t
# Change Directory
You should be in the Course Content folder
google-kubernetes-engine/

# Deploy Kubernetes Manifests
kubectl apply -f 04-kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# Describe Pod (Review Events to understand from where the Docker Image was downloaded)
kubectl describe pod POD-NAME

# List Services
kubectl get svc

# Access Application
http://EXTERNAL-IP
Observation:
1. You should see "Application Version: V1"
```

## Step-23: Update Container Image to the V2 Docker Image we built
```yaml
# 01-kubernetes-deployment.yaml: Update "image"
    spec:
      containers: # List
        - name: myapp1-container
          image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:3af592c
          ports:
            - containerPort: 80
```

## Step-24: Update Kubernetes Deployment and Verify
```t
# Deploy Kubernetes Manifests (Updated Image Tag)
kubectl apply -f 04-kube-manifests

# Restart Kubernetes Deployment (Optional - if it is not updated)
kubectl rollout restart deployment myapp1-deployment

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# Describe Pod (Review Events to understand from where the Docker Image was downloaded)
kubectl describe pod POD-NAME

# List Services
kubectl get svc

# Access Application
http://EXTERNAL-IP
Observation:
1. You should see "Application Version: V2"
```

## Step-25: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f 04-kube-manifests
```

## Step-26: How to add Approvals before starting the Build Process?
### Step-26-01: Enable Approval in Cloud Build
- Go to Cloud Build -> Triggers -> myapp1-ci
- Check the box **Approval: Require approval before build executes**

### Step-26-02: Add Users to Cloud Build Approver IAM Role
- Go to IAM & Admin -> GRANT ACCESS
- **Add Principal:** dkalyanreddy@gmail.com
- **Assign Roles:** Cloud Build Approver
- Click on **SAVE**

## Step-27: Update the Git Repo to test Build Approval Process
```t
# Change Directory
cd myapp1-app-repo

# Update file index.html (change V2 to V3)

Application Version: V3

# Local Git Commit and Push to Remote Repo
git status
git add .
git commit -am "V3 Commit"
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-28: Verify and Approve the Build
- Go to Cloud Build -> Triggers -> myapp1-ci -> Select and Approve
- Verify if the build is successful.

## References
- [Cloud Build for Docker Images](https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build)

================================================
FILE: 58-GKE-Continuous-Delivery-with-CloudBuild/01-myapp1-k8s-repo/cloudbuild-delivery.yaml
================================================
# [START cloudbuild-delivery]
steps:
# This step deploys the new version of our container image
# in the "standard-cluster-private-1" Google Kubernetes Engine cluster.
- name: 'gcr.io/cloud-builders/kubectl'
  id: Deploy
  args:
  - 'apply'
  - '-f'
  - 'kubernetes.yaml'
  env:
  - 'CLOUDSDK_COMPUTE_REGION=us-central1'
  #- 'CLOUDSDK_COMPUTE_ZONE=us-central1-c'
  - 'CLOUDSDK_CONTAINER_CLUSTER=standard-cluster-private-1' # Provide GKE Cluster Name
# This step copies the applied manifest to the production branch
# The COMMIT_SHA variable is automatically
# replaced by Cloud Build.
- name: 'gcr.io/cloud-builders/git' id: Copy to production branch entrypoint: /bin/sh args: - '-c' - | set -x && \ # Configure Git to create commits with Cloud Build's service account git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)') && \ # Switch to the production branch and copy the kubernetes.yaml file from the candidate branch git fetch origin production && git checkout production && \ git checkout $COMMIT_SHA kubernetes.yaml && \ # Commit the kubernetes.yaml file with a descriptive commit message git commit -m "Manifest from commit $COMMIT_SHA $(git log --format=%B -n 1 $COMMIT_SHA)" && \ # Push the changes back to Cloud Source Repository git push origin production # [END cloudbuild-delivery] ================================================ FILE: 58-GKE-Continuous-Delivery-with-CloudBuild/02-Source-Writer-IAM-Role/myapp1-k8s-repo-policy.yaml ================================================ bindings: - members: - serviceAccount:1057267725005@cloudbuild.gserviceaccount.com role: roles/source.writer ================================================ FILE: 58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/Dockerfile ================================================ FROM nginx COPY index.html /usr/share/nginx/html ================================================ FILE: 58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/README.md ================================================ # GKE CI Demo ================================================ FILE: 58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/cloudbuild-trigger-cd.yaml ================================================ # [START cloudbuild - Docker Image Build] steps: # This step builds the container image. - name: 'gcr.io/cloud-builders/docker' id: Build args: - 'build' - '-t' - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA' - '.' 
# This step pushes the image to Artifact Registry # The PROJECT_ID and SHORT_SHA variables are automatically # replaced by Cloud Build. - name: 'gcr.io/cloud-builders/docker' id: Push args: - 'push' - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA' # [END cloudbuild - Docker Image Build] # [START cloudbuild-trigger-cd] # This step clones the myapp1-k8s-repo repository - name: 'gcr.io/cloud-builders/gcloud' id: Clone myapp1-k8s-repo repository entrypoint: /bin/sh args: - '-c' - | gcloud source repos clone myapp1-k8s-repo && \ cd myapp1-k8s-repo && \ git checkout candidate && \ git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)') # This step generates the new manifest - name: 'gcr.io/cloud-builders/gcloud' id: Generate Kubernetes manifest entrypoint: /bin/sh args: - '-c' - | sed "s/GOOGLE_CLOUD_PROJECT/${PROJECT_ID}/g" kubernetes.yaml.tpl | \ sed "s/COMMIT_SHA/${SHORT_SHA}/g" > myapp1-k8s-repo/kubernetes.yaml # This step pushes the manifest back to myapp1-k8s-repo - name: 'gcr.io/cloud-builders/gcloud' id: Push manifest entrypoint: /bin/sh args: - '-c' - | set -x && \ cd myapp1-k8s-repo && \ git add kubernetes.yaml && \ git commit -m "Deploying image us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:${SHORT_SHA} Built from commit ${COMMIT_SHA} of repository myapp1-app-repo Author: $(git log --format='%an <%ae>' -n 1 HEAD)" && \ git push origin candidate # [END cloudbuild-trigger-cd] ================================================ FILE: 58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/cloudbuild.yaml ================================================ # [START cloudbuild - Docker Image Build] steps: # This step builds the container image. - name: 'gcr.io/cloud-builders/docker' id: Build args: - 'build' - '-t' - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA' - '.' 
# This step pushes the image to Artifact Registry # The PROJECT_ID and SHORT_SHA variables are automatically # replaced by Cloud Build. - name: 'gcr.io/cloud-builders/docker' id: Push args: - 'push' - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA' # [END cloudbuild - Docker Image Build] # [START cloudbuild-trigger-cd] # This step clones the myapp1-k8s-repo repository - name: 'gcr.io/cloud-builders/gcloud' id: Clone myapp1-k8s-repo repository entrypoint: /bin/sh args: - '-c' - | gcloud source repos clone myapp1-k8s-repo && \ cd myapp1-k8s-repo && \ git checkout candidate && \ git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)') # This step generates the new manifest - name: 'gcr.io/cloud-builders/gcloud' id: Generate Kubernetes manifest entrypoint: /bin/sh args: - '-c' - | sed "s/GOOGLE_CLOUD_PROJECT/${PROJECT_ID}/g" kubernetes.yaml.tpl | \ sed "s/COMMIT_SHA/${SHORT_SHA}/g" > myapp1-k8s-repo/kubernetes.yaml # This step pushes the manifest back to myapp1-k8s-repo - name: 'gcr.io/cloud-builders/gcloud' id: Push manifest entrypoint: /bin/sh args: - '-c' - | set -x && \ cd myapp1-k8s-repo && \ git add kubernetes.yaml && \ git commit -m "Deploying image us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:${SHORT_SHA} Built from commit ${COMMIT_SHA} of repository myapp1-app-repo Author: $(git log --format='%an <%ae>' -n 1 HEAD)" && \ git push origin candidate # [END cloudbuild-trigger-cd] ================================================ FILE: 58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/index.html ================================================

Welcome to StackSimplify

Google Kubernetes Engine

Application Version: V101

================================================
FILE: 58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/kubernetes.yaml.tpl
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
  labels:
    app: myapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata:
      labels:
        app: myapp1
    spec:
      containers:
      - name: myapp1
        image: us-central1-docker.pkg.dev/GOOGLE_CLOUD_PROJECT/myapps-repository/myapp1:COMMIT_SHA
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer
  selector:
    app: myapp1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

================================================
FILE: 58-GKE-Continuous-Delivery-with-CloudBuild/README.md
================================================
---
title: GCP Google Kubernetes Engine GKE CD
description: Implement GCP Google Kubernetes Engine GKE Continuous Delivery Pipeline
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials CLUSTER-NAME --region REGION --project PROJECT

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Implement Continuous Delivery Pipeline for GKE Workloads using
  - Google Cloud Source
  - Google Cloud Build
  - Google Artifact Registry

## Step-02: Assign Kubernetes Engine Developer IAM Role to Cloud Build
- To deploy the application in your Google GKE Kubernetes cluster, **Cloud Build** needs the **Kubernetes Engine Developer Identity and Access Management Role.**
```t
# Verify the current state using Google Cloud Console
1. Go to Cloud Build -> Settings -> SERVICE ACCOUNT -> Service account permissions
2. Kubernetes Engine -> Should be in "DISABLED" state

# Get current project PROJECT_ID
PROJECT_ID="$(gcloud config get-value project)"
echo ${PROJECT_ID}

# Get Google Cloud Project Number
PROJECT_NUMBER="$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')"
echo ${PROJECT_NUMBER}

# Associate Kubernetes Engine Developer IAM Role to Cloud Build
gcloud projects add-iam-policy-binding ${PROJECT_NUMBER} \
  --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role=roles/container.developer

# Verify if changes took place using Google Cloud Console
1. Go to Cloud Build -> Settings -> SERVICE ACCOUNT -> Service account permissions
2. Kubernetes Engine -> Should be in "ENABLED" state
```

## Step-03: Review File cloudbuild-delivery.yaml
- **File Location:** 01-myapp1-k8s-repo
```yaml
# [START cloudbuild-delivery]
steps:
# This step deploys the new version of our container image
# in the "standard-cluster-private-1" Google Kubernetes Engine cluster.
- name: 'gcr.io/cloud-builders/kubectl'
  id: Deploy
  args:
  - 'apply'
  - '-f'
  - 'kubernetes.yaml'
  env:
  - 'CLOUDSDK_COMPUTE_REGION=us-central1'
  #- 'CLOUDSDK_COMPUTE_ZONE=us-central1-c'
  - 'CLOUDSDK_CONTAINER_CLUSTER=standard-cluster-private-1' # Provide GKE Cluster Name
# This step copies the applied manifest to the production branch
# The COMMIT_SHA variable is automatically
# replaced by Cloud Build.
- name: 'gcr.io/cloud-builders/git'
  id: Copy to production branch
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    set -x && \
    # Configure Git to create commits with Cloud Build's service account
    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)') && \
    # Switch to the production branch and copy the kubernetes.yaml file from the candidate branch
    git fetch origin production && git checkout production && \
    git checkout $COMMIT_SHA kubernetes.yaml && \
    # Commit the kubernetes.yaml file with a descriptive commit message
    git commit -m "Manifest from commit $COMMIT_SHA $(git log --format=%B -n 1 $COMMIT_SHA)" && \
    # Push the changes back to Cloud Source Repository
    git push origin production
# [END cloudbuild-delivery]
```

## Step-04: Create and Initialize myapp1-k8s-repo Repo, Copy Files and Push to Cloud Source Repository
```t
# Change Directory
cd course-repos

# List Cloud Source Repositories
gcloud source repos list

# Create Cloud Source Git Repo: myapp1-k8s-repo
gcloud source repos create myapp1-k8s-repo

# Initialize myapp1-k8s-repo Repo
gcloud source repos clone myapp1-k8s-repo

# Copy Files to myapp1-k8s-repo
cloudbuild-delivery.yaml from "58-GKE-Continuous-Delivery-with-CloudBuild/01-myapp1-k8s-repo"

# Change Directory
cd myapp1-k8s-repo

# Commit Changes
git add .
git commit -m "Create cloudbuild-delivery.yaml for k8s deployment"

# Create a candidate branch and push it to be available in Cloud Source Repositories.
git checkout -b candidate
git push origin candidate

# Create a production branch and push it to be available in Cloud Source Repositories.
git checkout -b production
git push origin production
```

## Step-05: Grant the Cloud Source Repository Writer IAM role to the Cloud Build service account
- Grant the Cloud Source Repository Writer IAM role to the Cloud Build service account for the **myapp1-k8s-repo** repository.
```t
# Get current project PROJECT_ID
PROJECT_ID="$(gcloud config get-value project)"
echo ${PROJECT_ID}

# Get GCP PROJECT NUMBER
PROJECT_NUMBER="$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')"
echo ${PROJECT_NUMBER}

# Change Directory
cd 02-Source-Writer-IAM-Role

# Clean-Up File (make the file empty - No Content)
>myapp1-k8s-repo-policy.yaml

# Create IAM Policy YAML File
cat >myapp1-k8s-repo-policy.yaml <<EOF
bindings:
- members:
  - serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com
  role: roles/source.writer
EOF

# Set the IAM Policy on the myapp1-k8s-repo repository
gcloud source repos set-iam-policy myapp1-k8s-repo myapp1-k8s-repo-policy.yaml
```
- Go to Cloud Build -> Triggers -> Region: us-central1 -> Click on **CREATE TRIGGER**
- **Name:** myapp1-cd
- **Region:** us-central1
- **Description:** myapp1 Continuous Deployment Pipeline
- **Tags:** environment=dev
- **Event:** Push to a branch
- **Source:** myapp1-k8s-repo
- **Branch:** candidate
- **Configuration:** Cloud Build configuration file (yaml or json)
- **Location:** Repository
- **Cloud Build Configuration file location:** cloudbuild-delivery.yaml
- **Approval:** leave unchecked
- **Service account:** leave to default
- Click on **CREATE**

## Step-06: Review files in folder 03-myapp1-app-repo
1. Dockerfile
2. index.html
3. kubernetes.yaml.tpl
4. cloudbuild-trigger-cd.yaml
5. cloudbuild.yaml (just a copy of cloudbuild-trigger-cd.yaml)
```yaml
# [START cloudbuild - Docker Image Build]
steps:
# This step builds the container image.
- name: 'gcr.io/cloud-builders/docker'
  id: Build
  args:
  - 'build'
  - '-t'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'
  - '.'
# This step pushes the image to Artifact Registry
# The PROJECT_ID and SHORT_SHA variables are automatically
# replaced by Cloud Build.
- name: 'gcr.io/cloud-builders/docker'
  id: Push
  args:
  - 'push'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'
# [END cloudbuild - Docker Image Build]

# [START cloudbuild-trigger-cd]
# This step clones the myapp1-k8s-repo repository
- name: 'gcr.io/cloud-builders/gcloud'
  id: Clone myapp1-k8s-repo repository
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    gcloud source repos clone myapp1-k8s-repo && \
    cd myapp1-k8s-repo && \
    git checkout candidate && \
    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)')
# This step generates the new manifest
- name: 'gcr.io/cloud-builders/gcloud'
  id: Generate Kubernetes manifest
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    sed "s/GOOGLE_CLOUD_PROJECT/${PROJECT_ID}/g" kubernetes.yaml.tpl | \
    sed "s/COMMIT_SHA/${SHORT_SHA}/g" > myapp1-k8s-repo/kubernetes.yaml
# This step pushes the manifest back to myapp1-k8s-repo
- name: 'gcr.io/cloud-builders/gcloud'
  id: Push manifest
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    set -x && \
    cd myapp1-k8s-repo && \
    git add kubernetes.yaml && \
    git commit -m "Deploying image us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:${SHORT_SHA} Built from commit ${COMMIT_SHA} of repository myapp1-app-repo Author: $(git log --format='%an <%ae>' -n 1 HEAD)" && \
    git push origin candidate
# [END cloudbuild-trigger-cd]
```

## Step-07: Update index.html in myapp1-app-repo, Push and Verify
```t
# Change Directory (GIT REPO)
cd myapp1-app-repo

# Update index.html

Application Version: V4
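# (Aside - an optional, hedged local sketch, not a required tutorial step.)
# The CD trigger's "Generate Kubernetes manifest" step renders
# kubernetes.yaml.tpl with two sed passes. The same substitution can be
# reproduced locally on a one-line stand-in template; the project id
# "kdaida123" and tag "2a3e72a" below are example values:
workdir=$(mktemp -d)
printf 'image: us-central1-docker.pkg.dev/GOOGLE_CLOUD_PROJECT/myapps-repository/myapp1:COMMIT_SHA\n' > "$workdir/kubernetes.yaml.tpl"
sed "s/GOOGLE_CLOUD_PROJECT/kdaida123/g" "$workdir/kubernetes.yaml.tpl" | sed "s/COMMIT_SHA/2a3e72a/g"
# -> image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:2a3e72a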

# Add additional files to myapp1-app-repo
1. kubernetes.yaml.tpl
2. cloudbuild-trigger-cd.yaml
3. cloudbuild.yaml (just a copy of cloudbuild-trigger-cd.yaml)

# Git Commit and Push to Remote Repository
git status
git add .
git commit -am "V4 Commit CI CD"
git push

# Verify Cloud Source Repository: myapp1-app-repo
https://source.cloud.google.com/
myapp1-app-repo

# Verify Cloud Source Repository: myapp1-k8s-repo
https://source.cloud.google.com/
myapp1-k8s-repo
Branch: candidate
You should find the "kubernetes.yaml" file with the latest commit's image tag from "myapp1-app-repo"
```

## Step-08: Verify myapp1-ci and myapp1-cd builds
- Go to Cloud Build -> History
- Review latest **myapp1-ci** build steps
- Review latest **myapp1-cd** build steps

## Step-09: Verify Files in Cloud Source Repositories
- Go to Cloud Source
- **myapp1-app-repo:** New files should be present
- **myapp1-k8s-repo:** kubernetes.yaml file should have GOOGLE_CLOUD_PROJECT and COMMIT_SHA replaced, e.g. `image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:2a3e72a`

## Step-10: Verify Google Artifact Registry
- Go to Artifact Registry -> Repositories -> myapps-repository -> myapp1
- Should see a new docker image

## Step-11: Access Application
```t
# List Pods
kubectl get pods

# List Deployments
kubectl get deploy

# List Services
kubectl get svc

# Access Application
http://EXTERNAL-IP
Observation:
1. Should see the V4 version of the application deployed
```

## Step-12: Test CI CD one more time - Update index.html to V5
```t
# Change Directory (GIT REPO)
cd myapp1-app-repo

# Update index.html

Application Version: V5

# Git Commit and Push to Remote Repository
git status
git add .
git commit -am "V5 Commit CI CD"
git push

# Verify Build process
Go to Cloud Build -> myapp1-ci -> BUILD LOG
Go to Cloud Build -> myapp1-cd -> BUILD LOG

# Access Application
http://
Observation:
1. Should see v5 version of application deployed
```

## Step-13: Verify Application Rollback by just rebuilding CD Pipeline
- Go to ANY version of `myapp1-cd` and click on `REBUILD`
- Verify by accessing Application
```t
# List Pods
kubectl get pods

# List Deployments
kubectl get deploy

# List Services
kubectl get svc

# Access Application
http://
Observation:
1. Should see V4 version of application deployed
```

## Step-14: Clean-Up
```t
# Disable / Delete CI CD Pipelines
1. Go to Cloud Build -> myapp1-ci -> 3 dots -> Delete
2. Go to Cloud Build -> myapp1-cd -> 3 dots -> Delete

# Delete Cloud Source Repositories
Go to Cloud Source (https://source.cloud.google.com/repos)
1. myapp1-app-repo -> Settings -> Delete this repository
2. myapp1-k8s-repo -> Settings -> Delete this repository

# Delete Kubernetes Deployment
kubectl get deploy
kubectl delete deploy myapp1-deployment

# Delete Kubernetes Service
kubectl get svc
kubectl delete svc myapp1-lb-service

# Delete Artifact Registry
Go to Artifact Registry -> Repositories -> myapps-repository -> DELETE

# Delete Local Repos
cd course-repos
rm -rf myapp1-app-repo
rm -rf myapp1-k8s-repo
```

## References
- https://github.com/GoogleCloudPlatform/gke-gitops-tutorial-cloudbuild


================================================
FILE: 59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/01-persistent-volume-claim.yaml
================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 4Gi

# NEED FOR PVC
# 1. Dynamic volume provisioning allows storage volumes to be created
#    on-demand.
# 2.
#    Without dynamic provisioning, cluster administrators have to manually
#    make calls to their cloud or storage provider to create new storage
#    volumes, and then create PersistentVolume objects to represent them in k8s
# 3. The dynamic provisioning feature eliminates the need for cluster
#    administrators to pre-provision storage. Instead, it automatically
#    provisions storage when it is requested by users.
# 4. PVC: Users request dynamically provisioned storage by including
#    a storage class in their PersistentVolumeClaim

================================================
FILE: 59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/02-UserManagement-ConfigMap.yaml
================================================
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS webappdb;
    CREATE DATABASE webappdb;

# CONFIG MAP
# 1. A ConfigMap is an API object used to store non-confidential data in
#    key-value pairs.
# 2. Pods can consume ConfigMaps as
## 2.1: environment variables,
## 2.2: command-line arguments,
## 2.3: or as configuration files in a volume.
## We are going to use this in our MySQL k8s Deployment
# 3. YAML Notation
## YAML Notation: |-: "strip": remove the line feed, remove the trailing blank lines.
## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines

================================================
FILE: 59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/03-mysql-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate # terminates all the pods and replaces them with the new version.
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          #env:
          #  - name: MYSQL_ROOT_PASSWORD
          #    value: dbpassword11
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-db-password
                  key: db-password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script

# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES:
## 1. On-disk files in a container are ephemeral
## 2. One problem is the loss of files when a container crashes.
## 3. Kubernetes Volumes solve the above two problems, as these volumes are configured on the Pod and not the container;
##    from the Pod they can then be mounted into containers.
## 4. Using the Compute Engine Persistent Disk CSI Driver is a generalized approach
##    for having Persistent Volumes for workloads in Kubernetes

## ENVIRONMENT VARIABLES
# 1. When you create a Pod, you can set environment variables for the
#    containers that run in the Pod.
# 2. To set environment variables, include the env or envFrom field in
#    the configuration file.

## DEPLOYMENT STRATEGIES
# 1. Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.
# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.
# 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas.
# 4.
#    Best-effort controlled rollout: This strategy specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.
# 5. Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful.

================================================
FILE: 59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/04-mysql-clusterip-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None # This means we are going to use Pod IP

================================================
FILE: 59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/05-UserMgmtWebApp-Deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-webapp
  labels:
    app: usermgmt-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-webapp
  template:
    metadata:
      labels:
        app: usermgmt-webapp
    spec:
      initContainers:
        - name: init-db
          image: busybox:1.31
          command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while !
nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']
      containers:
        - name: usermgmt-webapp
          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOSTNAME
              value: "mysql"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "webappdb"
            - name: DB_USERNAME
              value: "root"
            #- name: DB_PASSWORD
            #  value: "dbpassword11"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-db-password
                  key: db-password
          # Liveness Probe Linux Command
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - nc -z localhost 8080
            initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe.
            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds.
            failureThreshold: 3 # Default Value
            successThreshold: 1 # Default value

# Types of Liveness Probes we can define
# 1. Linux Command
# 2. HTTP Request
# 3. TCP Ping

# What happens ??
# 1. To perform a probe, the kubelet executes the command
#    /bin/sh -c "nc -z localhost 8080" in the target container.
# 2. If the command succeeds, it returns 0, and the kubelet considers the
#    container to be alive and healthy.
# 3. If the command returns a non-zero value, the kubelet kills the
#    container and restarts it.

# More Details
# 1. failureThreshold: When a probe fails, Kubernetes will try failureThreshold
#    times before giving up. Giving up in case of liveness probe means
#    restarting the container. In case of readiness probe the Pod will be
#    marked Unready. Defaults to 3. Minimum value is 1.
# 2. successThreshold: Minimum consecutive successes for the probe to be
#    considered successful after having failed. Defaults to 1.
#    Must be 1 for liveness and startup Probes. Minimum value is 1.
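The pass/fail rule described in the comments above can be sketched in plain shell. This is a hypothetical local emulation of the kubelet's exec-probe decision (the real probe runs `nc -z localhost 8080` inside the container; here `true`/`false` stand in for a healthy/unhealthy probe command):

```shell
# Hypothetical emulation of the kubelet's exec-probe verdict:
# exit code 0 -> container considered alive; non-zero -> kill and restart.
probe_verdict() {
  if "$@" >/dev/null 2>&1; then
    echo "healthy"   # exit 0: kubelet leaves the container running
  else
    echo "restart"   # non-zero: kubelet kills and restarts the container
  fi
}

probe_verdict true    # prints: healthy
probe_verdict false   # prints: restart
```

Note that only the exit code matters to the kubelet; anything the probe command prints is ignored for the verdict.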
================================================
FILE: 59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/06-UserMgmtWebApp-LoadBalancer-Service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: usermgmt-webapp-lb-service
spec:
  type: LoadBalancer
  selector:
    app: usermgmt-webapp
  ports:
    - port: 80 # Service Port
      targetPort: 8080 # Container Port

================================================
FILE: 59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/07-kubernetes-secret.yaml
================================================
apiVersion: v1
kind: Secret
metadata:
  name: mysql-db-password
# type: Opaque means that from Kubernetes's point of view the contents of this Secret are unstructured.
# It can contain arbitrary key-value pairs.
type: Opaque
data:
  # Output of: echo -n 'dbpassword11' | base64
  db-password: ZGJwYXNzd29yZDEx

================================================
FILE: 59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/01-persistent-volume-claim.yaml
================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 4Gi

# NEED FOR PVC
# 1. Dynamic volume provisioning allows storage volumes to be created
#    on-demand.
# 2. Without dynamic provisioning, cluster administrators have to manually
#    make calls to their cloud or storage provider to create new storage
#    volumes, and then create PersistentVolume objects to represent them in k8s
# 3. The dynamic provisioning feature eliminates the need for cluster
#    administrators to pre-provision storage. Instead, it automatically
#    provisions storage when it is requested by users.
# 4.
#    PVC: Users request dynamically provisioned storage by including
#    a storage class in their PersistentVolumeClaim

================================================
FILE: 59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/02-UserManagement-ConfigMap.yaml
================================================
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS webappdb;
    CREATE DATABASE webappdb;

# CONFIG MAP
# 1. A ConfigMap is an API object used to store non-confidential data in
#    key-value pairs.
# 2. Pods can consume ConfigMaps as
## 2.1: environment variables,
## 2.2: command-line arguments,
## 2.3: or as configuration files in a volume.
## We are going to use this in our MySQL k8s Deployment
# 3. YAML Notation
## YAML Notation: |-: "strip": remove the line feed, remove the trailing blank lines.
## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines

================================================
FILE: 59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/03-mysql-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate # terminates all the pods and replaces them with the new version.
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          #env:
          #  - name: MYSQL_ROOT_PASSWORD
          #    value: dbpassword11
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-db-password
                  key: db-password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script

# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES:
## 1. On-disk files in a container are ephemeral
## 2. One problem is the loss of files when a container crashes.
## 3. Kubernetes Volumes solve the above two problems, as these volumes are configured on the Pod and not the container;
##    from the Pod they can then be mounted into containers.
## 4. Using the Compute Engine Persistent Disk CSI Driver is a generalized approach
##    for having Persistent Volumes for workloads in Kubernetes

## ENVIRONMENT VARIABLES
# 1. When you create a Pod, you can set environment variables for the
#    containers that run in the Pod.
# 2. To set environment variables, include the env or envFrom field in
#    the configuration file.

## DEPLOYMENT STRATEGIES
# 1. Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.
# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.
# 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas.
# 4.
#    Best-effort controlled rollout: This strategy specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.
# 5. Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful.

================================================
FILE: 59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/04-mysql-clusterip-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None # This means we are going to use Pod IP

================================================
FILE: 59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/05-UserMgmtWebApp-Deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-webapp
  labels:
    app: usermgmt-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-webapp
  template:
    metadata:
      labels:
        app: usermgmt-webapp
    spec:
      initContainers:
        - name: init-db
          image: busybox:1.31
          command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while !
nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']
      containers:
        - name: usermgmt-webapp
          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOSTNAME
              value: "mysql"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "webappdb"
            - name: DB_USERNAME
              value: "root"
            #- name: DB_PASSWORD
            #  value: "dbpassword11"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-db-password
                  key: db-password
          # Liveness Probe HTTP Request
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
              httpHeaders:
                - name: Custom-Header
                  value: Awesome
            initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe.
            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds.
            failureThreshold: 3 # Default Value
            successThreshold: 1 # Default value

# Types of Liveness Probes we can define
# 1. Linux Command
# 2. HTTP Request
# 3. TCP Ping

# What happens ??
# 1. To perform a probe, the kubelet sends an HTTP GET request to the
#    server that is running in the container and listening on port 8080.
# 2. If the handler for the server's /login path returns a success code,
#    the kubelet considers the container to be alive and healthy.
# 3. If the handler returns a failure code, the kubelet kills the
#    container and restarts it.
# 4. Any code greater than or equal to 200 and less than 400
#    indicates success. Any other code indicates failure.
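The success rule in point 4 above (status >= 200 and < 400) is easy to express as a small shell check. This is a sketch of the rule itself, not of any kubelet internals:

```shell
# HTTP-probe success rule: a status code >= 200 and < 400 is success;
# anything else (including 4xx and 5xx) is a failure.
http_probe_ok() {
  code=$1
  [ "$code" -ge 200 ] && [ "$code" -lt 400 ]
}

for code in 200 302 404 500; do
  if http_probe_ok "$code"; then
    echo "$code: success (container stays up)"
  else
    echo "$code: failure (kubelet restarts the container after failureThreshold misses)"
  fi
done
```

Note that 3xx redirects count as success under this rule, which can mask a broken page behind a redirect.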
================================================
FILE: 59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/06-UserMgmtWebApp-LoadBalancer-Service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: usermgmt-webapp-lb-service
spec:
  type: LoadBalancer
  selector:
    app: usermgmt-webapp
  ports:
    - port: 80 # Service Port
      targetPort: 8080 # Container Port

================================================
FILE: 59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/07-kubernetes-secret.yaml
================================================
apiVersion: v1
kind: Secret
metadata:
  name: mysql-db-password
# type: Opaque means that from Kubernetes's point of view the contents of this Secret are unstructured.
# It can contain arbitrary key-value pairs.
type: Opaque
data:
  # Output of: echo -n 'dbpassword11' | base64
  db-password: ZGJwYXNzd29yZDEx

================================================
FILE: 59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/01-persistent-volume-claim.yaml
================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 4Gi

# NEED FOR PVC
# 1. Dynamic volume provisioning allows storage volumes to be created
#    on-demand.
# 2. Without dynamic provisioning, cluster administrators have to manually
#    make calls to their cloud or storage provider to create new storage
#    volumes, and then create PersistentVolume objects to represent them in k8s
# 3. The dynamic provisioning feature eliminates the need for cluster
#    administrators to pre-provision storage. Instead, it automatically
#    provisions storage when it is requested by users.
# 4.
#    PVC: Users request dynamically provisioned storage by including
#    a storage class in their PersistentVolumeClaim

================================================
FILE: 59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/02-UserManagement-ConfigMap.yaml
================================================
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS webappdb;
    CREATE DATABASE webappdb;

# CONFIG MAP
# 1. A ConfigMap is an API object used to store non-confidential data in
#    key-value pairs.
# 2. Pods can consume ConfigMaps as
## 2.1: environment variables,
## 2.2: command-line arguments,
## 2.3: or as configuration files in a volume.
## We are going to use this in our MySQL k8s Deployment
# 3. YAML Notation
## YAML Notation: |-: "strip": remove the line feed, remove the trailing blank lines.
## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines

================================================
FILE: 59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/03-mysql-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate # terminates all the pods and replaces them with the new version.
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          #env:
          #  - name: MYSQL_ROOT_PASSWORD
          #    value: dbpassword11
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-db-password
                  key: db-password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql - refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script

# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES:
## 1. On-disk files in a container are ephemeral
## 2. One problem is the loss of files when a container crashes.
## 3. Kubernetes Volumes solve the above two problems, as these volumes are configured on the Pod and not the container;
##    from the Pod they can then be mounted into containers.
## 4. Using the Compute Engine Persistent Disk CSI Driver is a generalized approach
##    for having Persistent Volumes for workloads in Kubernetes

## ENVIRONMENT VARIABLES
# 1. When you create a Pod, you can set environment variables for the
#    containers that run in the Pod.
# 2. To set environment variables, include the env or envFrom field in
#    the configuration file.

## DEPLOYMENT STRATEGIES
# 1. Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.
# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.
# 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas.
# 4.
#    Best-effort controlled rollout: This strategy specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.
# 5. Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful.

================================================
FILE: 59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/04-mysql-clusterip-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None # This means we are going to use Pod IP

================================================
FILE: 59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/05-UserMgmtWebApp-Deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-webapp
  labels:
    app: usermgmt-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-webapp
  template:
    metadata:
      labels:
        app: usermgmt-webapp
    spec:
      initContainers:
        - name: init-db
          image: busybox:1.31
          command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while !
nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']
      containers:
        - name: usermgmt-webapp
          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOSTNAME
              value: "mysql"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "webappdb"
            - name: DB_USERNAME
              value: "root"
            #- name: DB_PASSWORD
            #  value: "dbpassword11"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-db-password
                  key: db-password
          # Liveness Probe TCP request
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe.
            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds.
            failureThreshold: 3 # Default Value
            successThreshold: 1 # Default value

# Types of Liveness Probes we can define
# 1. Linux Command
# 2. HTTP Request
# 3. TCP Ping

# What happens ??
# 1. The kubelet will run the first liveness TCP probe 60 seconds after the
#    container starts.
# 2. This will attempt to connect to the UMS container on port 8080.
#    If the liveness probe fails, the container will be restarted.

================================================
FILE: 59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/06-UserMgmtWebApp-LoadBalancer-Service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: usermgmt-webapp-lb-service
spec:
  type: LoadBalancer
  selector:
    app: usermgmt-webapp
  ports:
    - port: 80 # Service Port
      targetPort: 8080 # Container Port

================================================
FILE: 59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/07-kubernetes-secret.yaml
================================================
apiVersion: v1
kind: Secret
metadata:
  name: mysql-db-password
# type: Opaque means that from Kubernetes's point of view the contents of this Secret are unstructured.
# It can contain arbitrary key-value pairs.
type: Opaque
data:
  # Output of: echo -n 'dbpassword11' | base64
  db-password: ZGJwYXNzd29yZDEx

================================================
FILE: 59-Kubernetes-liveness-probe/README.md
================================================
---
title: GCP Google Kubernetes Engine Kubernetes Liveness Probes
description: Implement GCP Google Kubernetes Engine Kubernetes Liveness Probes
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials --region --project

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Implement `Liveness Probe` and Test it

## Step-02: Understand Liveness Probe
1. Liveness probes let Kubernetes know whether the application running in a container inside a pod is healthy or not.
2. If our application is healthy, Kubernetes will not interfere with the pod's functioning. If our application is unhealthy, Kubernetes will mark the pod as unhealthy.
3. In short, use liveness probes to remove unhealthy pods.

## Step-03: Liveness Probe Type: Command
### Step-03-01: Review Liveness Probe Type: Command
- **File Name:** `01-liveness-probe-linux-command/05-UserMgmtWebApp-Deployment.yaml`
```yaml
# Liveness Probe Linux Command
livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - nc -z localhost 8080
  initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe.
  periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds.
  failureThreshold: 3 # Default Value
  successThreshold: 1 # Default value
```

### Step-03-02: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 01-liveness-probe-linux-command

# List Pods
kubectl get pods
Observation:

# List Services
kubectl get svc

# Access Application
http://
Username: admin101
Password: password101
```

### Step-03-03: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f 01-liveness-probe-linux-command
```

## Step-04: Liveness Probe Type: HTTP Request
### Step-04-01: Review Liveness Probe Type: HTTP Request
- **File Name:** `02-liveness-probe-HTTP-Request/05-UserMgmtWebApp-Deployment.yaml`
```yaml
# Liveness Probe HTTP Request
livenessProbe:
  httpGet:
    path: /login
    port: 8080
    httpHeaders:
    - name: Custom-Header
      value: Awesome
  initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe.
  periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds.
  failureThreshold: 3 # Default Value
  successThreshold: 1 # Default value
```

### Step-04-02: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 02-liveness-probe-HTTP-Request

# List Pods
kubectl get pods
Observation:

# List Services
kubectl get svc

# Access Application
http://
Username: admin101
Password: password101
```

### Step-04-03: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f 02-liveness-probe-HTTP-Request
```

## Step-05: Liveness Probe Type: TCP Request
### Step-05-01: Review Liveness Probe Type: TCP Request
- **File Name:** `03-liveness-probe-TCP-Request/05-UserMgmtWebApp-Deployment.yaml`
```yaml
# Liveness Probe TCP request
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe.
  periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds.
  failureThreshold: 3 # Default Value
  successThreshold: 1 # Default value
```

### Step-05-02: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 03-liveness-probe-TCP-Request

# List Pods
kubectl get pods
Observation:

# List Services
kubectl get svc

# Access Application
http://
Username: admin101
Password: password101
```

### Step-05-03: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f 03-liveness-probe-TCP-Request
```

================================================
FILE: 60-Kubernetes-Startup-Probe/README.md
================================================
---
title: GCP Google Kubernetes Engine Kubernetes Startup Probes
description: Implement GCP Google Kubernetes Engine Kubernetes Startup Probes
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials --region --project

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Implement `Startup Probe` and Test it

## Step-02: Understand Startup Probe
1. Sometimes, you have to deal with legacy applications that might require additional startup time on their first initialization.
2. The application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup.
3. Once the startup probe has succeeded once, the liveness probe takes over to provide a fast response to container deadlocks.
4. If the startup probe never succeeds, the container is killed after 300s and subject to the pod's restartPolicy.
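The 5-minute figure quoted above is not a separate setting — it falls out of two probe fields, `failureThreshold` and `periodSeconds`. A quick arithmetic sketch:

```shell
# Startup window = failureThreshold consecutive failures,
# one probe attempt every periodSeconds seconds.
period_seconds=10
failure_threshold=30
startup_budget=$((failure_threshold * period_seconds))
echo "startup budget: ${startup_budget}s"   # 300s = 5 minutes
```

To give a slower application more time, raise `failureThreshold` (or `periodSeconds`) rather than inflating the liveness probe's `initialDelaySeconds`.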
## Step-03: Review Startup Probe YAML
- **File Name:** 05-UserMgmtWebApp-Deployment.yaml
```yaml
# Startup Probe - Wait for 5 minutes till the application starts
startupProbe:
  httpGet:
    path: /login
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 30 # The application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup.
  successThreshold: 1 # Default value
```

## Step-04: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests-startup-probe

# List Pods
kubectl get pods
Observation:

# List Services
kubectl get svc

# Access Application
http://
Username: admin101
Password: password101
```

## Step-05: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests-startup-probe
```

================================================
FILE: 60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/01-persistent-volume-claim.yaml
================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 4Gi

# NEED FOR PVC
# 1. Dynamic volume provisioning allows storage volumes to be created
#    on-demand.
# 2. Without dynamic provisioning, cluster administrators have to manually
#    make calls to their cloud or storage provider to create new storage
#    volumes, and then create PersistentVolume objects to represent them in k8s
# 3. The dynamic provisioning feature eliminates the need for cluster
#    administrators to pre-provision storage. Instead, it automatically
#    provisions storage when it is requested by users.
# 4.
PVC: Users request dynamically provisioned storage by including # a storage class in their PersistentVolumeClaim ================================================ FILE: 60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/02-UserManagement-ConfigMap.yaml ================================================ apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; # CONFIG MAP # 1. A ConfigMap is an API object used to store non-confidential data in # key-value pairs. # 2. Pods can consume ConfigMaps as ## 2.1: environment variables, ## 2.2: command-line arguments, ## 2.3: or as configuration files in a volume. ## We are going to use this in our MySQL k8s Deployment # 3. YAML Notation ## YAML Notation: |-: "strip": remove the line feed, remove the trailing blank lines. ## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines ================================================ FILE: 60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/03-mysql-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: replicas: 1 selector: matchLabels: app: mysql strategy: type: Recreate # terminates all the pods and replaces them with the new version. 
template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql:8.0 #env: # - name: MYSQL_ROOT_PASSWORD # value: dbpassword11 env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script # VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: ## 1. On-disk files in a container are ephemeral ## 2. One problem is the loss of files when a container crashes. ## 3. Kubernetes Volumes solve the above two problems, as these volumes are configured on the Pod and not the container. ## They can then be mounted into containers ## 4. Using the Compute Engine Persistent Disk CSI Driver is a very general approach ## for having Persistent Volumes for workloads in Kubernetes ## ENVIRONMENT VARIABLES # 1. When you create a Pod, you can set environment variables for the # containers that run in the Pod. # 2. To set environment variables, include the env or envFrom field in # the configuration file. ## DEPLOYMENT STRATEGIES # 1. Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster. # 2. Recreate: This strategy terminates all the pods and replaces them with the new version. # 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas. # 4.
Best-effort controlled rollout: This strategy specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly. # 5. Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful. ================================================ FILE: 60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/04-mysql-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql spec: selector: app: mysql ports: - port: 3306 clusterIP: None # This means we are going to use Pod IP ================================================ FILE: 60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/05-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! 
nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" #- name: DB_PASSWORD # value: "dbpassword11" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password # Liveness Probe HTTP Request livenessProbe: httpGet: path: /login port: 8080 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe. periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. failureThreshold: 3 # Default Value successThreshold: 1 # Default value # Startup Probe - Wait for 5 minutes till the application starts startupProbe: httpGet: path: /login port: 8080 initialDelaySeconds: 60 periodSeconds: 10 failureThreshold: 30 # The application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. successThreshold: 1 # Default value # Understand Startup Probe ? # 1. Sometimes, you have to deal with legacy applications that might require an additional startup time # on their first initialization. # 2. The application will have a maximum of 5 minutes (30 * 10 = 300s) to # finish its startup. # 3. Once the startup probe has succeeded once, the liveness probe takes # over to provide a fast response to container deadlocks. # 4. If the startup probe never succeeds, the container is killed after # 300s and subject to the pod's restartPolicy. 
================================================ FILE: 60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/06-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/07-kubernetes-secret.yaml ================================================ apiVersion: v1 kind: Secret metadata: name: mysql-db-password #type: Opaque means that from kubernetes's point of view the contents of this Secret is unstructured. #It can contain arbitrary key-value pairs. type: Opaque data: # Output of echo -n 'dbpassword11' | base64 db-password: ZGJwYXNzd29yZDEx ================================================ FILE: 61-Kubernetes-Readiness-Probe/README.md ================================================ --- title: GCP Google Kubernetes Engine Kubernetes Readiness Probes description: Implement GCP Google Kubernetes Engine Kubernetes Readiness Probes --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` ## Step-01: Introduction - Implement `Readiness Probe` and Test it ## Step-02: Understand Readiness Probe 1. Sometimes, applications are temporarily unable to serve traffic. 2. For example, an application might need to load large data or configuration files during startup, or depend on external services after startup. 3. 
In such cases, you don't want to kill the application, but you don't want to send it requests either. 4. Kubernetes provides readiness probes to detect and mitigate these situations. 5. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services. 6. Readiness probes run on the container during its whole lifecycle. 7. Liveness probes do not wait for readiness probes to succeed. 8. If you want to wait before executing a liveness probe, you should use initialDelaySeconds or a startupProbe. 9. Readiness and liveness probes can be used in parallel for the same container. 10. Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail. ## Step-03: Review Readiness Probe YAML - **File Name:** 05-UserMgmtWebApp-Deployment.yaml ```yaml # Readiness Probe HTTP Request readinessProbe: httpGet: path: /login port: 8080 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 60 periodSeconds: 10 failureThreshold: 3 # Default Value successThreshold: 1 # Default value ``` ## Step-04: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests-readiness-probe # List Pods kubectl get pods Observation: 1. You can see that the Pod is running, but it will not be ready for 60 seconds. 2. "initialDelaySeconds=60" is defined in the readiness probe, so the pod is marked as ready only after 60 seconds 3. The liveness probe starts working after "initialDelaySeconds: 120" 4. This way, the readiness probe runs first, and the liveness probe runs later.
# List Services kubectl get svc # Access Application http:// Username: admin101 Password: password101 ``` ## Step-05: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests-readiness-probe ``` ================================================ FILE: 61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/01-persistent-volume-claim.yaml ================================================ apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: standard-rwo resources: requests: storage: 4Gi # NEED FOR PVC # 1. Dynamic volume provisioning allows storage volumes to be created # on-demand. # 2. Without dynamic provisioning, cluster administrators have to manually # make calls to their cloud or storage provider to create new storage # volumes, and then create PersistentVolume objects to represent them in k8s # 3. The dynamic provisioning feature eliminates the need for cluster # administrators to pre-provision storage. Instead, it automatically # provisions storage when it is requested by users. # 4. PVC: Users request dynamically provisioned storage by including # a storage class in their PersistentVolumeClaim ================================================ FILE: 61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/02-UserManagement-ConfigMap.yaml ================================================ apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; # CONFIG MAP # 1. A ConfigMap is an API object used to store non-confidential data in # key-value pairs. # 2. Pods can consume ConfigMaps as ## 2.1: environment variables, ## 2.2: command-line arguments, ## 2.3: or as configuration files in a volume. ## We are going to use this in our MySQL k8s Deployment # 3. YAML Notation ## YAML Notation: |-: "strip": remove the line feed, remove the trailing blank lines. 
## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines ================================================ FILE: 61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/03-mysql-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: replicas: 1 selector: matchLabels: app: mysql strategy: type: Recreate # terminates all the pods and replaces them with the new version. template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql:8.0 #env: # - name: MYSQL_ROOT_PASSWORD # value: dbpassword11 env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script # VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: ## 1. On-disk files in a container are ephemeral ## 2. One problem is the loss of files when a container crashes. ## 3. Kubernetes Volumes solve the above two problems, as these volumes are configured on the Pod and not the container. ## They can then be mounted into containers ## 4. Using the Compute Engine Persistent Disk CSI Driver is a very general approach ## for having Persistent Volumes for workloads in Kubernetes ## ENVIRONMENT VARIABLES # 1. When you create a Pod, you can set environment variables for the # containers that run in the Pod. # 2. To set environment variables, include the env or envFrom field in # the configuration file. ## DEPLOYMENT STRATEGIES # 1.
Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster. # 2. Recreate: This strategy terminates all the pods and replaces them with the new version. # 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas. # 4. Best-effort controlled rollout: This strategy specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly. # 5. Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful. ================================================ FILE: 61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/04-mysql-clusterip-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: mysql spec: selector: app: mysql ports: - port: 3306 clusterIP: None # This means we are going to use Pod IP ================================================ FILE: 61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/05-UserMgmtWebApp-Deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! 
nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" #- name: DB_PASSWORD # value: "dbpassword11" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password # Liveness Probe HTTP Request livenessProbe: httpGet: path: /login port: 8080 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 # initialDelaySeconds field tells the kubelet that it should wait 120 seconds before performing the first probe. periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. failureThreshold: 3 # Default Value successThreshold: 1 # Default value # Readiness Probe HTTP Request readinessProbe: httpGet: path: /login port: 8080 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 60 periodSeconds: 10 failureThreshold: 3 # Default Value successThreshold: 1 # Default value # Understand Readiness Probe ? # 1. Sometimes, applications are temporarily unable to serve traffic. # 2. For example, an application might need to load large data or configuration # files during startup, or depend on external services after startup. # 3. In such cases, you don't want to kill the application, but you don't # want to send it requests either. # 4. Kubernetes provides readiness probes to detect and mitigate these # situations. # 5. A pod with containers reporting that they are not ready does not # receive traffic through Kubernetes Services. # 6. Readiness probes runs on the container during its whole lifecycle. # 7. Liveness probes do not wait for readiness probes to succeed. # 8. If you want to wait before executing a liveness probe you should use # initialDelaySeconds or a startupProbe. # 9. 
Readiness and liveness probes can be used in parallel for the same # container. # 10. Using both can ensure that traffic does not reach a container that # is not ready for it, and that containers are restarted when they fail. ================================================ FILE: 61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/06-UserMgmtWebApp-LoadBalancer-Service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ================================================ FILE: 61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/07-kubernetes-secret.yaml ================================================ apiVersion: v1 kind: Secret metadata: name: mysql-db-password #type: Opaque means that from kubernetes's point of view the contents of this Secret is unstructured. #It can contain arbitrary key-value pairs. type: Opaque data: # Output of echo -n 'dbpassword11' | base64 db-password: ZGJwYXNzd29yZDEx ================================================ FILE: 62-Kubernetes-Requests-and-Limits/README.md ================================================ --- title: GCP Google Kubernetes Engine Kubernetes Requests and Limits description: Implement GCP Google Kubernetes Engine Kubernetes Requests and Limits --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` ## Step-01: Introduction - We can specify how much of each resource (CPU & memory) each container in a Pod needs.
- When we provide this information in our pod, the scheduler uses this information to decide which node to place the Pod on. - When you specify a resource limit for a Container, the kubelet enforces those `limits` so that the running container is not allowed to use more of that resource than the limit you set. - The kubelet also reserves at least the `request` amount of that system resource specifically for that container to use. ## Step-02: Add Requests & Limits ```yaml resources: requests: memory: "128Mi" # 128 MebiByte is equal to 135 Megabyte (MB) cpu: "200m" # `m` means milliCPU limits: memory: "256Mi" cpu: "400m" # 1000m is equal to 1 VCPU core ``` ## Step-03: Create k8s objects & Test ```t # Create All Objects kubectl apply -f kube-manifests/ # List Pods kubectl get pods # Watch List Pods screen kubectl get pods -w # Describe Pod kubectl describe pod # Access Application http:/// # List Nodes & Describe Node kubectl get nodes kubectl describe node ``` ## Step-04: Clean-Up - No Clean-Up. 
- We are going to use this app in next demo which is Cluster Autoscaling ## References: - https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ ================================================ FILE: 62-Kubernetes-Requests-and-Limits/kube-manifests/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 5 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 resources: requests: memory: "128Mi" # 128 MebiByte is equal to 135 Megabyte (MB) cpu: "200m" # `m` means milliCPU limits: memory: "256Mi" cpu: "400m" # 1000m is equal to 1 VCPU core ================================================ FILE: 62-Kubernetes-Requests-and-Limits/kube-manifests/02-kubernetes-loadbalancer-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: LoadBalancer # ClusterIP, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 63-GKE-Cluster-Autoscaling/README.md ================================================ --- title: GCP Google Kubernetes Engine Cluster Autoscaling description: Implement GKE Cluster Autoscaler concept --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. 
Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` ## Step-01: Introduction - Test Cluster Autoscaler feature ## Step-02: Verify Cluster Autoscaler enabled for Node Pool - Go to Kubernetes Engine -> standard-cluster-private-1 -> NODES Tab -> default-pool -> Click on **Edit** - Check **Enable cluster autoscaler** - Size limits type - Check **Per zone limits** - **Minimum number of nodes (per zone):** 0 - **Maximum number of nodes (per zone):** 3 ## Step-03: Verify the 5th Pod from previous Demo is still in Pending State ```t # List Pods kubectl get pods # Describe Pod (PENDING POD) kubectl describe pod Observation: 1. Verify the pod events where we can find the autoscaling event triggered # List Kubernetes Nodes kubectl get nodes Observation: 1. Nodes in NodePools will be increased from 3 to 4 (2 per zone max we configured) # Scale-In the demo application to 1 pod kubectl get pods kubectl get nodes kubectl scale --replicas=1 deploy myapp1-deployment kubectl get pods # List Kubernetes Nodes kubectl get nodes 1. 
Nodes in NodePools will be decreased from 4 to 3 (Wait for 10 minutes for Nodes Scale-In) ``` ## Step-04: Clean-up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests ``` ================================================ FILE: 63-GKE-Cluster-Autoscaling/kube-manifests/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 5 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 resources: requests: memory: "128Mi" # 128 MebiByte is equal to 135 Megabyte (MB) cpu: "200m" # `m` means milliCPU limits: memory: "256Mi" cpu: "400m" # 1000m is equal to 1 VCPU core ================================================ FILE: 63-GKE-Cluster-Autoscaling/kube-manifests/02-kubernetes-loadbalancer-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 64-Kubernetes-Namespaces/01-kube-manifests-imperative/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 resources: requests: memory: "128Mi" # 128 MebiByte is equal to 135 Megabyte (MB) cpu: "200m" # `m` means milliCPU limits: memory: "256Mi" 
cpu: "400m" # 1000m is equal to 1 VCPU core ================================================ FILE: 64-Kubernetes-Namespaces/01-kube-manifests-imperative/02-kubernetes-loadbalancer-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 64-Kubernetes-Namespaces/02-kube-manifests-declarative/00-kubernetes-namespace.yaml ================================================ apiVersion: v1 kind: Namespace metadata: name: qa ================================================ FILE: 64-Kubernetes-Namespaces/02-kube-manifests-declarative/01-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment namespace: qa spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 resources: requests: memory: "128Mi" # 128 MebiByte is equal to 135 Megabyte (MB) cpu: "200m" # `m` means milliCPU limits: memory: "256Mi" cpu: "400m" # 1000m is equal to 1 VCPU core ================================================ FILE: 64-Kubernetes-Namespaces/02-kube-manifests-declarative/02-kubernetes-loadbalancer-service.yaml ================================================ apiVersion: v1 kind: Service metadata: name: myapp1-lb-service namespace: qa spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ================================================ FILE: 64-Kubernetes-Namespaces/README.md 
================================================ --- title: GCP Google Kubernetes Engine Kubernetes Namespaces Imperative description: Implement GCP Google Kubernetes Engine Kubernetes Namespaces Imperative --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` ## Step-01: Introduction - Namespaces allow us to split resources into different groups. - Resource names should be unique within a namespace - We can use namespaces to create multiple environments like dev, staging, production, etc. - Kubernetes always lists resources from the `default` namespace unless we explicitly specify which namespace we want information from.
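To make the "unique within a namespace" rule from the bullets above concrete, here is a toy illustration in plain Python (not actual Kubernetes code — the `create()` helper is hypothetical and only mimics the API server's uniqueness check on the (namespace, kind, name) key):

```python
# Toy model of namespace scoping: a resource is identified by the tuple
# (namespace, kind, name), so the same name may exist in two namespaces.
cluster = {}

def create(namespace, kind, name):
    key = (namespace, kind, name)
    if key in cluster:
        raise ValueError(f"{kind} {name!r} already exists in namespace {namespace!r}")
    cluster[key] = {"metadata": {"namespace": namespace, "name": name}}

create("dev", "Deployment", "myapp1-deployment")
create("qa", "Deployment", "myapp1-deployment")   # same name, different namespace: allowed
```

This is exactly why the demos below can deploy identical manifests into both `dev` and `qa` without renaming anything.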
## Step-02: Namespaces Imperative - Create dev Namespace ### Step-02-01: Create Namespace ```t # List Namespaces kubectl get ns # Create Namespace kubectl create namespace kubectl create namespace dev # List Namespaces kubectl get ns ``` ### Step-02-02: Deploy All k8s Objects ```t # Deploy All k8s Objects kubectl apply -f 01-kube-manifests-imperative/ -n dev # List Namespaces kubectl get ns # List Deployments from dev Namespace kubectl get deploy -n dev # List Pods from dev Namespace kubectl get pods -n dev # List Services from dev Namespace kubectl get svc -n dev # List all objects from dev Namespace kubectl get all -n dev # Access Application http:/// ``` ## Step-03: Namespace Declarative - Create qa Namespace ### Step-03-01: Namespace Kubernetes YAML Manifest - **File Name:** 00-kubernetes-namespace.yaml ```yaml apiVersion: v1 kind: Namespace metadata: name: qa ``` ### Step-03-02: Update Namespace in Deployment and Service YAML Manifest - We are going to add `namespace: qa` in the `metadata` section of the Deployment and Service ```yaml # Deployment YAML Manifest apiVersion: apps/v1 kind: Deployment metadata: name: myapp1-deployment namespace: qa spec: # Service YAML Manifest apiVersion: v1 kind: Service metadata: name: myapp1-lb-service namespace: qa spec: ``` ### Step-03-03: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f 02-kube-manifests-declarative # List Namespaces kubectl get ns # List Deployments from qa Namespace kubectl get deploy -n qa # List Pods from qa Namespace kubectl get pods -n qa # List Services from qa Namespace kubectl get svc -n qa # List all objects from qa Namespace kubectl get all -n qa # Access Application http:/// ``` ## Step-04: Clean-Up Resources - If we delete a Namespace, all resources in that namespace get deleted. ```t # Delete dev Namespace kubectl delete ns dev # List Namespaces kubectl get ns Observation: 1.
dev namespace should not be present # Verify Pods from dev Namespace kubectl get pods -n dev Observation: We should not find any pods because the namespace itself doesn't exist # Delete qa Namespace Resources (only) kubectl delete -f 02-kube-manifests-declarative # List Namespaces kubectl get ns # Delete qa Namespace kubectl delete ns qa # List Namespaces kubectl get ns ``` ## References: - https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/ ================================================ FILE: 65-Kubernetes-Namespaces-ResourceQuota/README.md ================================================ --- title: GCP Google Kubernetes Engine Kubernetes Resource Quota description: Implement GCP Google Kubernetes Engine Kubernetes Resource Quota --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials --region --project # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` ## Step-01: Introduction 1. Kubernetes Namespaces - ResourceQuota 2. Kubernetes Namespaces - Declarative using YAML ## Step-02: Create Namespace manifest - **Important Note:** File name starts with `01-` so that when creating the k8s objects the namespace gets created first and the other objects don't throw an error.
```yaml apiVersion: v1 kind: Namespace metadata: name: qa ``` ## Step-03: Create Kubernetes ResourceQuota manifest - **File Name:** 02-kubernetes-resourcequota.yaml ```yaml apiVersion: v1 kind: ResourceQuota metadata: name: qa-namespace-resource-quota namespace: qa spec: hard: requests.cpu: "1" requests.memory: 1Gi limits.cpu: "2" limits.memory: 2Gi pods: "3" configmaps: "3" persistentvolumeclaims: "3" secrets: "3" services: "3" ``` ## Step-04: Create Kubernetes objects & Test ```t # Create All Objects kubectl apply -f kube-manifests/ # List Pods kubectl get pods -n qa -w # View Pod Specification (CPU & Memory) kubectl describe pod -n qa # Get Resource Quota - default Namespace kubectl get resourcequota kubectl describe resourcequota gke-resource-quotas Observation: 1. gke-resource-quotas is pre-created by the GKE cluster for each namespace. 2. Any new quota we define within the GKE resource quota limits applies in place of the default GKE resource quota for that namespace. # Get Resource Quota - qa Namespace kubectl get resourcequota -n qa # Describe Resource Quota - qa Namespace kubectl describe resourcequota qa-namespace-resource-quota -n qa # Test the quota by increasing the pods to 4, whereas the resource quota allows only 3 pods kubectl get deploy -n qa kubectl get pods -n qa kubectl scale --replicas=4 deployment/myapp1-deployment -n qa kubectl get pods -n qa kubectl get deploy -n qa # Verify Deployment and ReplicaSet Events kubectl describe deploy -n qa kubectl describe rs -n qa Observation: In the ReplicaSet Events we should find the error ## WARNING MESSAGE IN REPLICASET EVENTS ABOUT RESOURCE QUOTA Warning FailedCreate 77s replicaset-controller Error creating: pods "myapp1-deployment-5b4bdfc49d-92t9z" is forbidden: exceeded quota: qa-namespace-resource-quota, requested: pods=1, used: pods=3, limited: pods=3 # List Services kubectl get svc -n qa # Access Application http:// ``` ## Step-05: Clean-Up - Delete all Kubernetes objects created as part of this section ```t # Delete
All kubectl delete -f kube-manifests/ -n qa # List Namespaces kubectl get ns ``` ## References: - https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/ - https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/ ## Additional References: - https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/ - https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/ ================================================ FILE: 65-Kubernetes-Namespaces-ResourceQuota/kube-manifests/01-kubernetes-namespace.yaml ================================================ apiVersion: v1 kind: Namespace metadata: name: qa ================================================ FILE: 65-Kubernetes-Namespaces-ResourceQuota/kube-manifests/02-kubernetes-resourcequota.yaml ================================================ apiVersion: v1 kind: ResourceQuota metadata: name: qa-namespace-resource-quota namespace: qa spec: hard: requests.cpu: "1" requests.memory: 1Gi limits.cpu: "2" limits.memory: 2Gi pods: "3" configmaps: "3" persistentvolumeclaims: "3" secrets: "3" services: "3" ================================================ FILE: 65-Kubernetes-Namespaces-ResourceQuota/kube-manifests/03-kubernetes-deployment.yaml ================================================ apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment namespace: qa spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 resources: requests: memory: "128Mi" # 128 MebiByte is equal to 135 Megabyte (MB) cpu: "200m" # `m` means milliCPU limits: memory: "256Mi" cpu: "400m" # 1000m is equal to 1 VCPU core ================================================ FILE: 
65-Kubernetes-Namespaces-ResourceQuota/kube-manifests/04-kubernetes-loadbalancer-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
  namespace: qa
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port


================================================
FILE: 66-Kubernetes-Namespaces-LimitRange/01-kube-manifests-LimitRange-defaults/01-kubernetes-namespace.yaml
================================================
apiVersion: v1
kind: Namespace
metadata:
  name: qa


================================================
FILE: 66-Kubernetes-Namespaces-LimitRange/01-kube-manifests-LimitRange-defaults/02-kubernetes-resourcequota-limitrange.yaml
================================================
apiVersion: v1
kind: ResourceQuota
metadata:
  name: qa-namespace-resource-quota
  namespace: qa
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "3"
    configmaps: "3"
    persistentvolumeclaims: "3"
    secrets: "3"
    services: "3"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-mem-limit-range
  namespace: qa
spec:
  limits:
    - default: # Default limits for containers that don't declare any
        cpu: "400m"
        memory: "256Mi"
      defaultRequest: # Default requests for containers that don't declare any
        cpu: "200m" # If omitted here, it falls back to limits.default.cpu
        memory: "128Mi" # If omitted here, it falls back to limits.default.memory
      max:
        cpu: "500m"
        memory: "500Mi"
      min:
        cpu: "100m"
        memory: "100Mi"
      type: Container


================================================
FILE: 66-Kubernetes-Namespaces-LimitRange/01-kube-manifests-LimitRange-defaults/03-kubernetes-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
  namespace: qa
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key-value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          # resources intentionally omitted so the LimitRange defaults apply
          #resources:
          #  requests:
          #    memory: "128Mi" # 128 MiB is about 134 MB
          #    cpu: "200m" # `m` means milliCPU
          #  limits:
          #    memory: "256Mi"
          #    cpu: "400m" # 1000m equals 1 vCPU core


================================================
FILE: 66-Kubernetes-Namespaces-LimitRange/01-kube-manifests-LimitRange-defaults/04-kubernetes-loadbalancer-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
  namespace: qa
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port


================================================
FILE: 66-Kubernetes-Namespaces-LimitRange/02-kube-manifests-LimitRange-MinMax/01-kubernetes-namespace.yaml
================================================
apiVersion: v1
kind: Namespace
metadata:
  name: qa


================================================
FILE: 66-Kubernetes-Namespaces-LimitRange/02-kube-manifests-LimitRange-MinMax/02-kubernetes-resourcequota-limitrange.yaml
================================================
apiVersion: v1
kind: ResourceQuota
metadata:
  name: qa-namespace-resource-quota
  namespace: qa
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "3"
    configmaps: "3"
    persistentvolumeclaims: "3"
    secrets: "3"
    services: "3"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-mem-limit-range
  namespace: qa
spec:
  limits:
    - default: # Default limits for containers that don't declare any
        cpu: "400m"
        memory: "256Mi"
      defaultRequest: # Default requests for containers that don't declare any
        cpu: "200m" # If omitted here, it falls back to limits.default.cpu
        memory: "128Mi" # If omitted here, it falls back to limits.default.memory
      max:
        cpu: "500m"
        memory: "500Mi"
      min:
        cpu: "100m"
        memory: "100Mi"
      type: Container


================================================
FILE: 66-Kubernetes-Namespaces-LimitRange/02-kube-manifests-LimitRange-MinMax/03-kubernetes-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
  namespace: qa
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key-value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "128Mi"
              cpu: "450m" # `m` means milliCPU
            limits:
              memory: "256Mi"
              #cpu: "600m" # Above the max value defined in the LimitRange; Pods will not be scheduled and the error appears in ReplicaSet events
              cpu: "500m" # Equal to the max value defined in the LimitRange; Pods will be scheduled.
================================================
FILE: 66-Kubernetes-Namespaces-LimitRange/02-kube-manifests-LimitRange-MinMax/04-kubernetes-loadbalancer-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
  namespace: qa
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port


================================================
FILE: 66-Kubernetes-Namespaces-LimitRange/README.md
================================================
---
title: GCP Google Kubernetes Engine Kubernetes Limit Range
description: Implement GCP Google Kubernetes Engine Kubernetes Limit Range
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
1. Kubernetes Namespaces - LimitRange
2. Kubernetes Namespaces - Declarative using YAML

## Step-02: Create Namespace manifest
- **Important Note:** The file name starts with `01-` so that the namespace is created first and the namespaced objects that follow don't throw an error.
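As an aside (a sketch, not one of this section's files): an alternative to relying on the `01-` filename prefix is a single multi-document manifest, since kubectl applies the YAML documents within one file from top to bottom:

```yaml
# Sketch only: one file, applied top to bottom, so the Namespace
# exists before the namespaced LimitRange that references it.
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-mem-limit-range
  namespace: qa # Created by the document above
spec:
  limits:
    - default:
        cpu: "400m"
        memory: "256Mi"
      type: Container
```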
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: qa
```

## Step-03: Create LimitRange manifest
- Instead of specifying `resources` (CPU and memory) in every container spec of a pod definition, we can provide default CPU and memory for all containers in a namespace using a `LimitRange`.
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-mem-limit-range
  namespace: qa
spec:
  limits:
    - default: # Default limits for containers that don't declare any
        cpu: "400m"
        memory: "256Mi"
      defaultRequest: # Default requests for containers that don't declare any
        cpu: "200m" # If omitted here, it falls back to limits.default.cpu
        memory: "128Mi" # If omitted here, it falls back to limits.default.memory
      max:
        cpu: "500m"
        memory: "500Mi"
      min:
        cpu: "100m"
        memory: "100Mi"
      type: Container
```

## Step-04: Demo-01: Create Kubernetes Resources & Test
```t
# Create Kubernetes Resources
kubectl apply -f 01-kube-manifests-LimitRange-defaults

# List Pods
kubectl get pods -n qa -w

# View Pod Specification (CPU & Memory)
kubectl describe pod -n qa
Observation:
1. We will find the "Limits" in the pod's container equal to "default" from the LimitRange
2. We will find the "Requests" in the pod's container equal to "defaultRequest"

# Sample from Pod description
Limits:
  cpu:     400m
  memory:  256Mi
Requests:
  cpu:     200m
  memory:  128Mi

# Get & Describe Limits
kubectl get limits -n qa
kubectl describe limits default-cpu-mem-limit-range -n qa

# List Services
kubectl get svc -n qa

# Access Application
http://
```

## Step-05: Demo-01: Clean-Up
- Delete all Kubernetes objects created as part of this section
```t
# Delete All
kubectl delete -f 01-kube-manifests-LimitRange-defaults/
```

## Step-06: Demo-02: Update Demo-02 Deployment Manifest with Requests & Limits
- Negative case testing: when deployed with the `Requests & Limits` below, where `cpu: 600m` in limits is above the `max cpu: 500m` in LimitRange `default-cpu-mem-limit-range`, the pods should not be scheduled and an error is thrown in the `ReplicaSet Events`.
- **File Name:** 03-kubernetes-deployment.yaml
```t
# Update Demo-02 Deployment Manifest with Requests & Limits
resources:
  requests:
    memory: "128Mi"
    cpu: "450m"
  limits:
    memory: "256Mi"
    cpu: "600m"
```

## Step-07: Demo-02: Create Kubernetes Resources & Test
```t
# Create Kubernetes Resources
kubectl apply -f 02-kube-manifests-LimitRange-MinMax

# List Pods
kubectl get pods -n qa
Observation:
1. No Pod should be scheduled

# List Deployments
kubectl get deploy -n qa
Observation: 0/2 ready, which means no pods were scheduled. Verify ReplicaSet Events

# List & Describe ReplicaSets
kubectl get rs -n qa
kubectl describe rs -n qa
Observation: The error below will be displayed
Warning  FailedCreate  18s (x5 over 56s)  replicaset-controller  (combined from similar events): Error creating: pods "myapp1-deployment-5dd9f78fd8-k5th6" is forbidden: maximum cpu usage per Container is 500m, but limit is 600m

# Get & Describe Limits
kubectl get limits -n qa
kubectl describe limits default-cpu-mem-limit-range -n qa

# List Services
kubectl get svc -n qa

# Access Application
http://
```

## Step-08: Demo-02: Update Deployment resources.limits.cpu=500m
- **File Name:** 03-kubernetes-deployment.yaml
```t
# Demo-02: Update Deployment resources.limits.cpu=500m
resources:
  requests:
    memory: "128Mi"
    cpu: "450m"
  limits:
    memory: "256Mi"
    cpu: "500m" # Equal to the max value defined in the LimitRange; Pods will be scheduled.
```

## Step-09: Demo-02: Deploy the updated Deployment
```t
# Deploy the Updated Deployment
kubectl apply -f 02-kube-manifests-LimitRange-MinMax/03-kubernetes-deployment.yaml

# List Pods
kubectl get pods -n qa
Observation:
1. Pods should be scheduled now.
```

## Step-10: Demo-02: Clean-Up
```t
# Delete Demo-02 Kubernetes Resources
kubectl delete -f 02-kube-manifests-LimitRange-MinMax
```

## References:
- https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/
- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/
- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/


================================================
FILE: 67-GKE-Horizontal-Pod-Autoscaler/README.md
================================================
---
title: GCP Google Kubernetes Engine Horizontal Pod Autoscaling
description: Implement GKE Cluster Horizontal Pod Autoscaling
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2.
Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- Implement a Sample Demo with Horizontal Pod Autoscaler

## Step-02: Review Kubernetes Manifests
- Primarily review the `HorizontalPodAutoscaler` resource in file `03-kubernetes-hpa.yaml`
1. 01-kubernetes-deployment.yaml
2. 02-kubernetes-cip-service.yaml
3. 03-kubernetes-hpa.yaml
```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-myapp1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp1-deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```

## Step-03: Deploy Sample App and Verify using kubectl
```t
# Deploy Sample
kubectl apply -f kube-manifests

# List Pods
kubectl get pods
Observation:
1. Currently only 1 pod is running

# List HPA
kubectl get hpa

# Run Load Test (New Terminal)
kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://myapp1-cip-service; done"

# List Pods (SCALE UP EVENT)
kubectl get pods
Observation:
1. New pods will be created to absorb the CPU spike

# List HPA (after a few minutes - approx 10 mins)
kubectl get hpa

# List Pods (SCALE IN EVENT)
kubectl get pods
Observation:
1. Only 1 pod should be running
```

## Step-04: Clean-Up
```t
# Delete Load Generator Pod which is in Error State
kubectl delete pod load-generator

# Delete Sample App
kubectl delete -f kube-manifests
```


================================================
FILE: 67-GKE-Horizontal-Pod-Autoscaler/kube-manifests/01-kubernetes-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata:
      name: myapp1-pod
      labels:
        app: myapp1
    spec:
      containers:
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          resources:
            requests: # Kept deliberately tiny so the load test can push CPU utilization past 50%
              memory: "5Mi"
              cpu: "5m" # `m` means milliCPU
            limits:
              memory: "50Mi"
              cpu: "50m" # 1000m equals 1 vCPU core


================================================
FILE: 67-GKE-Horizontal-Pod-Autoscaler/kube-manifests/02-kubernetes-cip-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp1-cip-service
spec:
  type: ClusterIP # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port


================================================
FILE: 67-GKE-Horizontal-Pod-Autoscaler/kube-manifests/03-kubernetes-hpa.yaml
================================================
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-myapp1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp1-deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50


================================================
FILE: 68-GKE-AutoPilot-Cluster/README.md
================================================
---
title: GCP Google Kubernetes Engine Autopilot Cluster
description: Implement GCP Google Kubernetes Engine GKE Autopilot Cluster
---

## Step-01: Introduction
- Create GKE Autopilot Cluster
- Understand in
detail about GKE Autopilot cluster

## Step-02: Pre-requisite: Verify if Cloud NAT Gateway created
- Verify if a Cloud NAT Gateway is created in `Region: us-central1` where you are planning to create the GKE Autopilot Private Cluster
- This is required for workloads in private subnets to connect to the Internet
- Primarily to connect to Docker Hub to pull the Docker images
- Go to Network Services -> Cloud NAT

## Step-03: Create GKE Autopilot Private Cluster
- Go to Kubernetes Engine -> Clusters -> **CREATE**
- Create Cluster -> GKE Autopilot -> **CONFIGURE**
- **Name:** autopilot-cluster-private-1
- **Region:** us-central1
- **Network access:** Private Cluster
- **Access control plane using its external IP address:** CHECK
- **Control plane ip range:** 172.18.0.0/28
- **Enable control plane authorized networks:** CHECK
- **Authorized networks:**
  - **Name:** internet-access
  - **Network:** 0.0.0.0/0
  - Click on **DONE**
- **Network:** default (LEAVE TO DEFAULTS)
- **Node subnet:** default (LEAVE TO DEFAULTS)
- **Cluster default pod address range:** /17 (LEAVE TO DEFAULTS)
- **Service Address range:** /22 (LEAVE TO DEFAULTS)
- **Release Channel:** Regular Channel (Default)
- REST ALL LEAVE TO DEFAULTS
- Click on **CREATE**

## Step-04: Configure kubectl for kubeconfig
```t
# Configure kubectl for kubeconfig
gcloud container clusters get-credentials CLUSTER-NAME --region REGION --project PROJECT-NAME

# Replace values CLUSTER-NAME, REGION, PROJECT-NAME
gcloud container clusters get-credentials autopilot-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
kubectl get nodes -o wide
```

## Step-05: Review Kubernetes Manifests
### Step-05-01: 01-kubernetes-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 5
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key-value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "128Mi" # 128 MiB is about 134 MB
              cpu: "200m" # `m` means milliCPU
            limits:
              memory: "256Mi"
              cpu: "400m" # 1000m equals 1 vCPU core
```

### Step-05-02: 02-kubernetes-loadbalancer-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port
```

## Step-06: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Access Application
http://
```

## Step-07: Scale your Application
```t
# Scale your Application
kubectl scale --replicas=15 deployment/myapp1-deployment

# List Pods
kubectl get pods

# List Nodes
kubectl get nodes
```

## Step-08: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests

# Delete GKE Autopilot Cluster
# NOTE: Don't delete this cluster yet, as we are going to use it in the next demo. To delete it later:
# Go to Kubernetes Engine > Clusters -> autopilot-cluster-private-1 -> DELETE
```

## References
- https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#default_container_resource_requests
- https://cloud.google.com/kubernetes-engine/quotas#limits_per_cluster


================================================
FILE: 68-GKE-AutoPilot-Cluster/kube-manifests/01-kubernetes-deployment.yaml
================================================
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 5
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key-value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "128Mi" # 128 MiB is about 134 MB
              cpu: "200m" # `m` means milliCPU
            limits:
              memory: "256Mi"
              cpu: "400m" # 1000m equals 1 vCPU core


================================================
FILE: 68-GKE-AutoPilot-Cluster/kube-manifests/02-kubernetes-loadbalancer-service.yaml
================================================
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port


================================================
FILE: 69-Access-To-Multiple-Clusters/README.md
================================================
---
title: GCP Google Kubernetes Engine Access to Multiple Clusters
description: Implement GCP Google Kubernetes Engine Access to Multiple Clusters
---

## Step-00: Pre-requisites
- We should have the two clusters created and ready
  - standard-cluster-private-1
  - autopilot-cluster-private-1

## Step-01: Introduction
- Configure access to multiple clusters
- Understand the kubeconfig file $HOME/.kube/config
- Understand the kubectl config commands
  - kubectl config
view
  - kubectl config current-context
  - kubectl config use-context
  - kubectl config get-contexts
  - kubectl config get-clusters

## Step-02: Pre-requisite
- Verify if you have any two GKE Clusters created and ready for use
  - standard-cluster-private-1
  - autopilot-cluster-private-1

## Step-03: Clean-Up kube config file
```t
# Clean existing kube configs (truncates the config file to empty)
cd $HOME/.kube
>config
cat config
```

## Step-04: Configure Standard Cluster Access for kubectl
- Understand commands
  - kubectl config view
  - kubectl config current-context
```t
# View kubeconfig
kubectl config view

# Configure kubeconfig for kubectl: standard-cluster-private-1
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# View kubeconfig
kubectl config view

# View Cluster Information
kubectl cluster-info

# View the current context for kubectl
kubectl config current-context
```

## Step-05: Configure Autopilot Cluster Access for kubectl
```t
# Configure kubeconfig for kubectl: autopilot-cluster-private-1
gcloud container clusters get-credentials autopilot-cluster-private-1 --region us-central1 --project kdaida123

# View the current context for kubectl
kubectl config current-context

# View Cluster Information
kubectl cluster-info

# View kubeconfig
kubectl config view
```

## Step-06: Switch Contexts between clusters
- Understand the kubectl config command **use-context**
```t
# View the current context for kubectl
kubectl config current-context

# View kubeconfig
kubectl config view
Observation: Note the contexts.context.name value of the context to which you want to switch

# Switch Context
kubectl config use-context gke_kdaida123_us-central1_standard-cluster-private-1

# View the current context for kubectl
kubectl config current-context

# View Cluster Information
kubectl cluster-info
```

## Step-07: List Contexts configured in kubeconfig
```t
# List Contexts
kubectl config get-contexts
```

## Step-08: List Clusters configured in kubeconfig
```t
# List Clusters
kubectl config get-clusters
```
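The context entries that `kubectl config get-contexts` lists are plain YAML under the `contexts:` key of `$HOME/.kube/config`. A quick way to eyeball them without kubectl (a sketch run against a tiny throwaway sample kubeconfig, so it is self-contained; the cluster/user names in it are made up):

```t
# Context names in a kubeconfig are YAML entries under `contexts:`.
# Demonstrated against a throwaway sample file:
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: v1
kind: Config
current-context: gke_kdaida123_us-central1_standard-cluster-private-1
contexts:
- name: gke_kdaida123_us-central1_standard-cluster-private-1
  context: {cluster: c1, user: u1}
- name: gke_kdaida123_us-central1_autopilot-cluster-private-1
  context: {cluster: c2, user: u2}
EOF
grep '^- name:' "$cfg" | awk '{print $3}'   # prints the two context names, one per line
rm "$cfg"
```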
================================================ FILE: README.md ================================================ # [GCP GKE Google Kubernetes Engine DevOps 75 Real-World Demos](https://stacksimplify.com/courses/gcp-gke-kubernetes/) [![Image](images/course-title.png "Google Kubernetes Engine GKE with DevOps 75 Real-World Demos")](https://stacksimplify.com/courses/gcp-gke-kubernetes/) ## Course Modules 01. Google Cloud Account Creation 02. Create GKE Standard Public Cluster 03. Install gcloud CLI on mac OS 04. Install gcloud CLI on Windows OS 05. Docker Fundamentals 06. Kubernetes Pods 07. Kubernetes ReplicaSets 08. Kubernetes Deployment - CREATE 09. Kubernetes Deployment - UPDATE 10. Kubernetes Deployment - ROLLBACK 11. Kubernetes Deployments - Pause and Resume 12. Kubernetes ClusterIP and Load Balancer Service 13. YAML Basics 14. Kubernetes Pod & Service using YAML 15. Kubernetes ReplicaSets using YAML 16. Kubernetes Deployment using YAML 17. Kubernetes Services using YAML 18. GKE Kubernetes NodePort Service 19. GKE Kubernetes Headless Service 20. GKE Private Cluster 21. How to use GCP Persistent Disks in GKE ? 22. How to use Balanced Persistent Disk in GKE ? 23. How to use Custom Storage Class in GKE for Persistent Disks ? 24. How to use Pre-existing Persistent Disks in GKE ? 25. How to use Regional Persistent Disks in GKE ? 26. How to perform Persistent Disk Volume Snapshots and Volume Restore ? 28. GKE Workloads and Cloud SQL with Public IP 29. GKE Workloads and Cloud SQL with Private IP 30. GKE Workloads and Cloud SQL with Private IP and No ExternalName Service 31. How to use Google Cloud File Store in GKE ? 32. How to use Custom Storage Class for File Store in GKE ? 33. How to perform File Store Instance Volume Snapshots and Volume Restore ? 34. Ingress Service Basics 35. Ingress Context Path based Routing 36. Ingress Custom Health Checks using Readiness Probes 37. Register a Google Cloud Domain for some advanced Ingress Service Demos 38. 
Ingress with Static External IP and Cloud DNS 39. Google Managed SSL Certificates for Ingress 40. Ingress HTTP to HTTPS Redirect 41. GKE Workload Identity 42. External DNS Controller Install 43. External DNS - Ingress Service 44. External DNS - Kubernetes Service 45. Ingress Name based Virtual Host Routing 46. Ingress SSL Policy 47. Ingress with Identity-Aware Proxy 48. Ingress with Self Signed SSL Certificates 49. Ingress with Pre-shared SSL Certificates 50. Ingress with Cloud CDN, HTTP Access Logging and Timeouts 51. Ingress with Client IP Affinity 52. Ingress with Cookie Affinity 53. Ingress with Custom Health Checks using BackendConfig CRD 54. Ingress Internal Load Balancer 55. Ingress with Google Cloud Armor 56. Google Artifact Registry 57. GKE Continuous Integration 58. GKE Continuous Delivery 59. Kubernetes Liveness Probes 60. Kubernetes Startup Probes 61. Kubernetes Readiness Probe 62. Kubernetes Requests and Limits 63. GKE Cluster Autoscaling 64. Kubernetes Namespaces 65. Kubernetes Namespaces Resource Quota 66. Kubernetes Namespaces Limit Range 67. Kubernetes Horizontal Pod Autoscaler 68. GKE Autopilot Cluster 69. How to manage Multiple Cluster access in kubeconfig ? ## Kubernetes Concepts Covered 01. Kubernetes Deployments (Create, Update, Rollback, Pause, Resume) 02. Kubernetes Pods 03. Kubernetes Service of Type LoadBalancer 04. Kubernetes Service of Type ClusterIP 05. Kubernetes Ingress Service 06. Kubernetes Storage Class 07. Kubernetes Storage Persistent Volume 08. Kubernetes Storage Persistent Volume Claim 09. Kubernetes Cluster Autoscaler 10. Kubernetes Horizontal Pod Autoscaler 11. Kubernetes Namespaces 12. Kubernetes Namespaces Resource Quota 13. Kubernetes Namespaces Limit Range 14. Kubernetes Service Accounts 15. Kubernetes ConfigMaps 16. Kubernetes Requests and Limits 17. Kubernetes Worker Nodes 18. Kubernetes Service of Type NodePort 19. Kubernetes Service of Type Headless 20. Kubernetes ReplicaSets ## Google Services Covered 01. 
Google GKE Standard Cluster 02. Google GKE Autopilot Cluster 03. Compute Engine - Virtual Machines 04. Compute Engine - Storage Disks 05. Compute Engine - Storage Snapshots 06. Compute Engine - Storage Images 07. Compute Engine - Instance Groups 08. Compute Engine - Health Checks 09. Compute Engine - Network Endpoint Groups 10. VPC Networks - VPC 11. VPC Network - External and Internal IP Addresses 12. VPC Network - Firewall 13. Network Services - Load Balancing 14. Network Services - Cloud DNS 15. Network Services - Cloud CDN 16. Network Services - Cloud NAT 17. Network Services - Cloud Domains 18. Network Services - Private Service Connection 19. Network Security - Cloud Armor 20. Network Security - SSL Policies 21. IAM & Admin - IAM 22. IAM & Admin - Service Accounts 23. IAM & Admin - Roles 24. IAM & Admin - Identity-Aware Proxy 25. DevOps - Cloud Source Repositories 26. DevOps - Cloud Build 27. DevOps - Cloud Storage 28. SQL - Cloud SQL 29. Storage - Filestore 30. Google Artifact Registry 31. Operations Logging 32. GCP Monitoring ## What will students learn in your course? - You will learn to master Kubernetes on Google GKE with 75 Real-world demo's on Google Cloud Platform with 20+ Kubernetes and 30+ Google Cloud Services - You will learn Kubernetes Basics for 4.5 hours - You will create GKE Standard and Autopilot clusters with public and private networks - You will learn to implement Kubernetes Storage with Google Persistent Disks and Google File Store - You will also use Google Cloud SQL, Cloud Load Balancing to deploy a sample application outlining LB to DB usecase in GKE Cluster - You will master Kubernetes Ingress concepts in detail on GKE with 22 Real-world Demos - You will implement Ingress Context Path Routing and Name based vhost routing - You will implement Ingress with Google Managed SSL Certificates - You will master Google GKE Workload Identity with a detailed dedicated demo. 
- You will implement External DNS Controller to automatically add and delete DNS records in the Google Cloud DNS service
- You will implement Ingress with pre-shared SSL and self-signed certificates
- You will implement Ingress with Cloud CDN, Cloud Armor, Internal Load Balancer, Cookie Affinity, IP Affinity, and HTTP Access Logging
- You will implement Ingress with Google Identity-Aware Proxy
- You will learn to use Google Artifact Registry with GKE
- You will implement DevOps Continuous Integration (CI) and Continuous Delivery (CD) with Cloud Build and Cloud Source Repositories
- You will learn to master Kubernetes Probes (Readiness, Startup, Liveness)
- You will implement Kubernetes Requests, Limits, Namespaces, Resource Quota and Limit Range
- You will implement GKE Cluster Autoscaler and Horizontal Pod Autoscaler

## What are the requirements or prerequisites for taking your course?
- You must have a Google Cloud account to follow along with the hands-on activities
- You don't need any prior knowledge of Kubernetes; the course starts from the very basics and takes you to advanced levels
- Basic familiarity with any cloud platform helps you understand the terminology

## Who is this course for?
- Infrastructure Architects, Sysadmins, or Developers who plan to master Kubernetes from a real-world perspective on Google Cloud Platform (GCP)
- Any beginner who is interested in learning Kubernetes with Google Cloud Platform (GCP)
- Any beginner who is planning their career in DevOps

## Github Repositories used for this course
- [Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos](https://github.com/stacksimplify/terraform-on-aws-eks)
  - [Course Presentation](https://github.com/stacksimplify/terraform-on-aws-eks/tree/main/course-presentation)
- [Kubernetes Fundamentals](https://github.com/stacksimplify/kubernetes-fundamentals)
- **Important Note:** Please FORK these repositories and make use of them during the course.

## Each of my courses comes with
- Amazing Hands-on, Step-by-Step Learning Experiences
- Real Implementation Experience
- Friendly Support in the Q&A Section
- 30-Day "No Questions Asked" Money-Back Guarantee from Udemy

## My Other AWS Courses
- [Udemy Enroll](https://www.stacksimplify.com/azure-aks/courses/stacksimplify-best-selling-courses-on-udemy/)

## Instructor Profile
- [Kalyan Reddy Daida - StackSimplify](https://stacksimplify.com/about/)

# HashiCorp Certified: Terraform Associate - 50 Practical Demos
[![Image](https://stacksimplify.com/course-images/hashicorp-certified-terraform-associate-highest-rated.png "HashiCorp Certified: Terraform Associate - 50 Practical Demos")](https://stacksimplify.com/courses/hashicorp-terraform-associate-aws/)

# AWS EKS - Elastic Kubernetes Service - Masterclass
[![Image](https://stacksimplify.com/course-images/AWS-EKS-Kubernetes-Masterclass-DevOps-Microservices-course.png "AWS EKS Kubernetes - Masterclass")](https://stacksimplify.com/courses/aws-eks-masterclass/)

# Azure Kubernetes Service with Azure DevOps and Terraform
[![Image](https://stacksimplify.com/course-images/azure-kubernetes-service-with-azure-devops-and-terraform.png "Azure Kubernetes Service with Azure DevOps and Terraform")](https://stacksimplify.com/courses/azure-aks-devops-terraform/)

# Terraform on AWS with SRE & IaC DevOps | Real-World 20 Demos
[![Image](https://stacksimplify.com/course-images/terraform-on-aws-best-seller.png "Terraform on AWS with SRE & IaC DevOps | Real-World 20 Demos")](https://stacksimplify.com/courses/terraform-on-aws-sre/)

# Azure - HashiCorp Certified: Terraform Associate - 70 Demos
[![Image](https://stacksimplify.com/course-images/azure-hashicorp-certified-terraform-associate-highest-rated.png "Azure - HashiCorp Certified: Terraform Associate - 70 Demos")](https://stacksimplify.com/courses/hashicorp-terraform-associate-azure/)

# Terraform on Azure with IaC DevOps and SRE | Real-World 25 Demos
[![Image](https://stacksimplify.com/course-images/terraform-on-azure-with-iac-azure-devops-sre-1.png "Terraform on Azure with IaC DevOps and SRE | Real-World 25 Demos")](https://stacksimplify.com/courses/terraform-on-azure/)

# [Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos](https://stacksimplify.com/courses/terraform-aws-eks/)
[![Image](https://stacksimplify.com/course-images/terraform-on-aws-eks-kubernetes.png "Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos")](https://stacksimplify.com/courses/terraform-aws-eks/)

---

## My Other Courses (383,000+ Students, 20 Courses)

> All courses available at [stacksimplify.com/courses](https://stacksimplify.com/courses/)

### AWS Courses

| Course | Students | Rating |
|--------|----------|--------|
| [AWS EKS Kubernetes Masterclass](https://stacksimplify.com/courses/aws-eks-masterclass/) | 70,041+ | 4.6 (5,495 ratings) |
| [AWS VPC Transit Gateway](https://stacksimplify.com/courses/aws-vpc-transit-gateway/) | 52,243+ | 4.6 (790 ratings) |
| [Terraform on AWS with SRE and IaC DevOps](https://stacksimplify.com/courses/terraform-on-aws-sre/) | 31,006+ | 4.6 (3,347 ratings) |
| [Terraform on AWS EKS Kubernetes IaC SRE](https://stacksimplify.com/courses/terraform-aws-eks/) | 26,929+ | 4.5 (2,238 ratings) |
| [HashiCorp Certified: Terraform Associate (AWS)](https://stacksimplify.com/courses/hashicorp-terraform-associate-aws/) | 16,835+ | 4.6 (1,754 ratings) |
| [AWS CloudFormation Simplified](https://stacksimplify.com/courses/aws-cloudformation/) | 16,223+ | 4.3 (1,469 ratings) |
| [AWS Fargate and ECS Masterclass](https://stacksimplify.com/courses/aws-fargate-ecs/) | 15,208+ | 4.4 (1,051 ratings) |
| [AWS CodePipeline CI/CD](https://stacksimplify.com/courses/aws-codepipeline/) | 9,832+ | 4.0 (966 ratings) |
| [AWS Elastic Beanstalk Master Class](https://stacksimplify.com/courses/aws-elastic-beanstalk/) | 7,588+ | 4.3 (373 ratings) |
| [Ultimate DevOps Real-World Project on AWS](https://stacksimplify.com/courses/ultimate-devops-real-world-project-on-aws/) | 4,772+ | 4.72 (358 ratings) |

### Azure Courses

| Course | Students | Rating |
|--------|----------|--------|
| [Azure Kubernetes Service with Azure DevOps and Terraform](https://stacksimplify.com/courses/azure-aks-devops-terraform/) | 48,551+ | 4.6 (6,196 ratings) |
| [Terraform on Azure with IaC DevOps SRE](https://stacksimplify.com/courses/terraform-on-azure/) | 17,918+ | 4.7 (1,911 ratings) |
| [Azure HashiCorp Certified: Terraform Associate](https://stacksimplify.com/courses/hashicorp-terraform-associate-azure/) | 16,938+ | 4.5 (1,985 ratings) |
| [Azure Kubernetes Service AGIC Ingress](https://stacksimplify.com/courses/azure-aks-agic/) | 2,012+ | 4.6 (112 ratings) |

### GCP Courses

| Course | Students | Rating |
|--------|----------|--------|
| [GCP Google Kubernetes Engine GKE with DevOps](https://stacksimplify.com/courses/gcp-gke-kubernetes/) | 8,769+ | 4.4 (779 ratings) |
| [GCP Associate Cloud Engineer Certification](https://stacksimplify.com/courses/gcp-associate-cloud-engineer/) | 6,007+ | 4.6 (599 ratings) |
| [GCP Terraform on Google Cloud](https://stacksimplify.com/courses/gcp-terraform/) | 2,600+ | 4.4 (213 ratings) |
| [GCP GKE Terraform on Google Kubernetes Engine](https://stacksimplify.com/courses/gcp-gke-terraform/) | 2,040+ | 4.6 (155 ratings) |

### DevOps and General

| Course | Students | Rating |
|--------|----------|--------|
| [Helm Masterclass: 50 Practical Demos](https://stacksimplify.com/courses/helm-masterclass/) | 12,069+ | 4.7 (915 ratings) |
| [Docker in a Weekend: 40 Practical Demos](https://stacksimplify.com/courses/docker-weekend/) | 3,802+ | 4.6 (361 ratings) |

---

## Instructor Profile
- [Kalyan Reddy Daida - StackSimplify](https://stacksimplify.com/about/)

---

## Connect with Me
- [YouTube - Cloud & DevOps Tutorials](https://www.youtube.com/@stacksimplify)
- [LinkedIn - Kalyan Reddy](https://www.linkedin.com/in/kalyan-reddy/)
- [GitHub - StackSimplify](https://github.com/stacksimplify)

---

================================================
FILE: course-presentation/Google-Kubernetes-Engine-GKE-GCP-v3R.pptx
================================================
[File too large to display: 15.7 MB]


================================================
FILE: git-deploy.sh
================================================
#!/bin/sh
# Stage all changes (including untracked files) and create a local commit
echo "Add files and do local commit"
git add .
git commit -am "Welcome to StackSimplify"

# Push the commit to the GitHub repository
echo "Pushing to Github Repository"
git push