[
  {
    "path": ".gitignore",
    "content": "# Local .terraform directories\n**/.terraform/*\n.DS_Store \n\n# .tfstate files\n*.tfstate\n*.tfstate.*\n\n# Crash log files\ncrash.log\n\n# Ignore any .tfvars files that are generated automatically for each Terraform run. Most\n# .tfvars files are managed as part of configuration and so should be included in\n# version control.\n#\n# example.tfvars\n\n# Ignore override files as they are usually used to override resources locally and so\n# are not checked in\noverride.tf\noverride.tf.json\n*_override.tf\n*_override.tf.json\n\n# Include override files you do wish to add to version control using negated pattern\n#\n# !example_override.tf\n\n# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan\n# example: *tfplan*\n"
  },
  {
    "path": "01-Create-GCP-Account/README.md",
    "content": "---\ntitle: Create GCP Cloud Account\ndescription: Learn to create GCP Cloud Account\n---\n\n## Step-01: Introduction\n- Create GCP Cloud Account\n\n## Step-02: Create a Google Account\n- We should have a google account (gmail account) before creating GCP cloud Account\n- Create one Google Account if not having one.\n\n## Step-03: Create GCP Account\n- Go to https://cloud.google.com\n- Follow presentation slides to create the GCP Account\n\n## Step-04: Create Budget Alerts\n- Go to Billing and Create Budget Alerts\n"
  },
  {
    "path": "02-Create-GKE-Cluster/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine - Create GKE Cluster\ndescription: Learn to create Google Kubernetes Engine GKE Cluster\n---\n\n## Step-01: Introduction\n- Create GKE Standard GKE Cluster \n- Configure Google CloudShell to access GKE Cluster\n- Deploy simple Kubernetes Deployment and Kubernetes Load Balancer Service and Test \n- Clean-Up\n\n## Step-02: Create Standard GKE Cluster \n- Go to Kubernetes Engine -> Clusters -> CREATE\n- Select **GKE Standard -> CONFIGURE**\n- **Cluster Basics**\n  - **Name:** standard-public-cluster-1\n  - **Location type:** Regional\n  - **Region:** us-central1\n  - **Specify default node locations:** us-central1-a, us-central1-b, us-central1-c\n  - **Release Channel**\n    - **Release Channel:** Rapid Channel\n    - **Version:** LATEST AVAIALABLE ON THAT DAY\n  - REST ALL LEAVE TO DEFAULTS\n- **NODE POOLS: default-pool**\n- **Node pool details**\n  - **Name:** default-pool\n  - **Number of Nodes (per zone):** 1\n  - **Node Pool Upgrade Strategy:** Surge Upgrade\n- **Nodes: Configure node settings** \n  - **Image type:** Containerized Optimized OS\n  - **Machine configuration**\n    - **GENERAL PURPOSE SERIES:** E2\n    - **Machine Type:** e2-small\n  - **Boot disk type:** Balanced persistent disk\n  - **Boot disk size(GB):** 20\n  - **Boot disk encryption:** Google-managed encryption key (default )\n  - **Enable Node on Spot VMs:** CHECKED\n- **Node Networking:** LEAVE TO DEFAULTS  \n- **Node Security:** \n  - **Access scopes:** Allow default access (LEAVE TO DEFAULT)\n  - REST ALL REVIEW AND LEAVE TO DEFAULTS\n- **Node Metadata:** REVIEW AND LEAVE TO DEFAULTS\n- **CLUSTER** \n  - **Automation:** REVIEW AND LEAVE TO DEFAULTS\n  - **Networking:** REVIEW AND LEAVE TO DEFAULTS\n    - **CHECK THIS BOX: Enable Dataplane V2** CHECK IT - IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED\n  - **Security:** REVIEW AND LEAVE TO DEFAULTS\n    - **CHECK THIS BOX: Enable Workload Identity** CHECK IT - IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED\n  - **Metadata:** REVIEW AND LEAVE TO DEFAULTS\n  - **Features:** REVIEW AND LEAVE TO DEFAULTS\n- CLICK ON **CREATE**\n\n## Step-03: Verify Cluster Details\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Review\n  - Details Tab\n  - Nodes Tab\n    - Review same nodes **Compute Engine**\n  - Storage Tab\n    - Review Storage Classes\n  - Logs Tab\n    - Review Cluster Logs\n    - Review Cluster Logs **Filter By Severity**\n\n## Step-04: Verify Additional Features in GKE on a High-Level\n### Step-04-01: Verify Workloads Tab\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Workloads -> **SHOW SYSTEM WORKLOADS**\n\n### Step-04-02: Verify Services & Ingress\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Services & Ingress -> **SHOW SYSTEM OBJECTS**\n\n### Step-04-03: Verify Applications, Secrets & ConfigMaps\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Applications\n- Secrets & ConfigMaps\n\n### Step-04-04: Verify Storage\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Storage Classes\n  - premium-rwo\n  - standard\n  - standard-rwo\n\n### Step-04-05: Verify the below\n1. Object Browser\n2. Migrate to Containers\n3. Backup for GKE\n4. Config Management\n5. 
Protect\n\n## Step-05: Google CloudShell: Connect to GKE Cluster using kubectl\n- [kubectl Authentication in GKE](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke)\n```t\n# Verify gke-gcloud-auth-plugin Installation (if not installed, install it)\ngke-gcloud-auth-plugin --version \n\n# Install Kubectl authentication plugin for GKE\nsudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin\n\n# Verify gke-gcloud-auth-plugin Installation\ngke-gcloud-auth-plugin --version \n\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT-NAME>\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# Run kubectl with the new plugin prior to the release of v1.25\nvi ~/.bashrc\nUSE_GKE_GCLOUD_AUTH_PLUGIN=True\n\n# Reload the environment value\nsource ~/.bashrc\n\n# Check if Environment variable loaded in Terminal\necho $USE_GKE_GCLOUD_AUTH_PLUGIN\n\n# Verify kubectl version\nkubectl version --short\n\n# Install kubectl (if not installed)\ngcloud components install kubectl\n\n# Configure kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --zone <ZONE> --project <PROJECT-ID>\ngcloud container clusters get-credentials standard-cluster-1 --zone us-central1-c --project kdaida123\n\n# Verify Kubernetes Worker Nodes\nkubectl get nodes\n\n# Verify System Pod in kube-system Namespace\nkubectl -n kube-system get pods\n\n# Verify kubeconfig file\ncat $HOME/.kube/config\nkubectl config view\n```\n\n## Step-06: Review Sample Application: 01-kubernetes-deployment.yaml\n- **Folder:** kube-manifests\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n    \n```\n\n## Step-07: Review Sample Application: 02-kubernetes-loadbalancer-service.yaml\n- **Folder:** kube-manifests\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-08: Upload Sample App to Google CloudShell\n```t\n# Upload Sample App to Google CloudShell\nGo to Google CloudShell -> 3 Dots -> Upload -> Folder -> google-kubernetes-engine\n\n# Change Directory\ncd google-kubernetes-engine/02-Create-GKE-Cluster\n\n# Verify folder uploaded\nls kube-manifests/\n\n# Verify Files\ncat kube-manifests/01-kubernetes-deployment.yaml\ncat kube-manifests/02-kubernetes-loadbalancer-service.yaml\n```\n\n## Step-09: Deploy Sample Application and Verify\n```t\n# Change Directory\ncd google-kubernetes-engine/02-Create-GKE-Cluster\n\n# Deploy Sample App using kubectl\nkubectl apply -f kube-manifests/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pod\n\n# List Services\nkubectl get svc\n\n# Access Sample Application\nhttp://<EXTERNAL-IP>\n```\n\n## Step-10: Verify Workloads in GKE Dashboard\n- Go to GCP Console -> Kubernetes Engine -> Workloads\n- Click on  **myapp1-deployment**\n- Review all tabs\n\n## Step-11: Verify 
Services in GKE Dashboard\n- Go to GCP Console -> Kubernetes Engine -> Services & Ingress\n- Click on **myapp1-lb-service**\n- Review all tabs\n\n## Step-13: Verify Load Balancer\n- Go to GCP Console -> Networking Services -> Load Balancing\n- Review all tabs\n\n## Step-14: Clean-Up\n- Go to Google Cloud Shell\n```t\n# Change Directory\ncd google-kubernetes-engine/02-Create-GKE-Cluster\n\n# Delete Kubernetes Deployment and Service\nkubectl delete -f kube-manifests/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pod\n\n# List Services\nkubectl get svc\n```\n\n\n\n"
  },
  {
    "path": "02-Create-GKE-Cluster/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0\n          ports: \n            - containerPort: 8080  \n    "
  },
  {
    "path": "02-Create-GKE-Cluster/kube-manifests/02-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 8080 # Container Port\n"
  },
  {
    "path": "03-gcloud-cli-install-macos/README.md",
    "content": "---\ntitle: gcloud cli install on macOS\ndescription: Learn to install gcloud cli on MacOS\n---\n\n## Step-01: Introduction\n- Install gcloud CLI on MacOS\n- Configure kubeconfig for kubectl on your local terminal\n- Verify if you are able to reach GKE Cluster using kubectl from your local terminal\n\n## Step-02: Install gcloud cli on MacOS\n- [Install gcloud cli](https://cloud.google.com/sdk/docs/install-sdk#mac)\n```t\n# Verify Python Version (Supported versions are Python 3 (3.5 to 3.8, 3.7 recommended)\npython3 -V\n\n# Determine your machine hardware \nuname -m\n\n# Create Folder\nmkdir gcloud-cli-software\n\n# Download gcloud cli based on machine hardware \n## Important Note: Download the latest version available on that respective day\nDowload Link: https://cloud.google.com/sdk/docs/install-sdk#mac\n\n## As on today the below is the latest version (x86_64 bit)\ncurl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-418.0.0-darwin-x86_64.tar.gz\n\n# Unzip binary\nls -lrta\ntar -zxf google-cloud-cli-418.0.0-darwin-x86_64.tar.gz\n\n# Run the install script with screen reader mode on:\n./google-cloud-sdk/install.sh --screen-reader=true\n```\n\n## Step-03: Verify gcloud cli version\n```t\n# Open new terminal\nAS PATH is updated, open new terminal\n\n# gcloud cli version\ngcloud version\n\n## Sample Output\nKalyans-Mac-mini:gcloud-cli-software kalyanreddy$ gcloud version\nGoogle Cloud SDK 418.0.0\nbq 2.0.85\ncore 2023.02.13\ngcloud-crc32c 1.0.0\ngsutil 5.20\nKalyans-Mac-mini:gcloud-cli-software kalyanreddy$\n```\n\n## Step-04: Intialize gcloud CLI in local Terminal \n```t\n# Initialize gcloud CLI\n./google-cloud-sdk/bin/gcloud init\n\n# gcloud config Configurations Commands (For Reference)\ngcloud config list\ngcloud config configurations list\ngcloud config configurations activate\ngcloud config configurations create\ngcloud config configurations delete\ngcloud config configurations describe\ngcloud config configurations rename\n```\n\n## Step-05: Verify gke-gcloud-auth-plugin \n```t\n# Change Directroy\ngcloud-cli-software\n\n## Important Note about gke-gcloud-auth-plugin: \n1. Kubernetes clients require an authentication plugin, gke- gcloud-auth-plugin, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters\n\n# Verify if gke-gcloud-auth-plugin installed\ngke-gcloud-auth-plugin --version\n\n# Install gke-gcloud-auth-plugin\ngcloud components install gke-gcloud-auth-plugin\n\n# Verify if gke-gcloud-auth-plugin installed\ngke-gcloud-auth-plugin --version\n```\n\n## Step-06: Remove any existing kubectl clients\n```t\n# Verify kubectl version\nkubectl version --short\nwhich kubectl \nObservation: \n1. We are not using kubectl from gcloud CLI and we need to fix that. \n\n# Removing existing kubectl\nwhich kubectl\nrm /usr/local/bin/kubectl\n```\n\n## Step-07: Install kubectl client from gcloud CLI\n```t\n# List gcloud components\ngcloud components list\n\n## SAMPLE OUTPUT\nStatus: Not Installed\nName: kubectl\nID: kubectl\nSize: < 1 MiB\n\n# Install kubectl client\ngcloud components install kubectl\n\n# Verify kubectl version\nOPEN NEW TERMINAL AS PATH IS UPDATED\nkubectl version --short\nwhich kubectl\n```\n\n\n## Step-08: Fix kubectl client version equal to GKE Cluster version\n- **Important Note:** You must use a kubectl version that is within one minor version difference of your Kubernetescluster control plane. 
\n- For example, a 1.24 kubectl client works with Kubernetes Cluster 1.23, 1.24 and 1.25 clusters.\n- As our GKE cluster version is 1.26, we will also upgrade our kubectl to 1.26\n```t\n# Verify kubectl version\nOPEN NEW TERMINAL AS PATH IS UPDATED\nkubectl version --short\nwhich kubectl\n\n# Change Directroy \ncd /Users/kalyanreddy/Documents/course-repos/gcloud-cli-software/google-cloud-sdk/bin/\n\n# List files\nls -lrta\n\n# Backup existing kubectl\ncp kubectl kubectl_bkup_1.24\n\n# Copy latest kubectl\ncp kubectl.1.26 kubectl\n\n# Verify kubectl version\nkubectl version --short\nwhich kubectl\n```\n\n## Step-09: Configure kubeconfig for kubectl in local desktop terminal\n```t\n# Clean-Up kubeconfig file (if any older configs exists)\nrm $HOME/.kube/config\n\n# Configure kubeconfig for kubectl \ngcloud container clusters get-credentials <GKE-CLUSTER-NAME> --region <REGION> --project <PROJECT>\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# Verify Kubernetes Worker Nodes\nkubectl get nodes\n\n\n# Verify System Pod in kube-system Namespace\nkubectl -n kube-system get pods\n\n# Verify kubeconfig file\ncat $HOME/.kube/config\nkubectl config view\n```\n\n\n\n## References\n- [gcloud CLI](https://cloud.google.com/sdk/gcloud)\n- [Install the Google Cloud CLI](https://cloud.google.com/sdk/docs/install-sdk#mac)"
  },
  {
    "path": "04-gcloud-cli-install-windowsos/README.md",
    "content": "---\ntitle: gcloud cli install on macOS\ndescription: Learn to install gcloud cli on WindowsOS\n---\n\n## Step-01: Introduction\n- Install gcloud CLI on WindowsOS\n- Configure kubeconfig for kubectl on your local terminal\n- Verify if you are able to reach GKE Cluster using kubectl from your local terminal\n- Fix kubectl version to match with GKE Cluster Server Version. \n\n## Step-02: Install gcloud cli on WindowsOS\n- [Install gcloud cli on WindowsOS](https://cloud.google.com/sdk/docs/install-sdk#windows)\n```t\n## Important Note: Download the latest version available on that respective day\nDowload Link: https://cloud.google.com/sdk/docs/install-sdk#windows\n\n## Run the Installer\nGoogleCloudSDKInstaller.exe\n```\n\n## Step-03: Verify gcloud cli version\n```t\n# gcloud cli version\ngcloud version\n```\n\n## Step-04: Intialize gcloud CLI in local Terminal \n```t\n# Initialize gcloud CLI\ngcloud init\n\n# List accounts whose credentials are stored on the local system:\ngcloud auth list\n\n# List the properties in your active gcloud CLI configuration\ngcloud config list\n\n# View information about your gcloud CLI installation and the active configuration\ngcloud info\n\n# gcloud config Configurations Commands (For Reference)\ngcloud config list\ngcloud config configurations list\ngcloud config configurations activate\ngcloud config configurations create\ngcloud config configurations delete\ngcloud config configurations describe\ngcloud config configurations rename\n```\n\n## Step-05: Verify gke-gcloud-auth-plugin \n```t\n## Important Note about gke-gcloud-auth-plugin: \n1. Kubernetes clients require an authentication plugin, gke- gcloud-auth-plugin, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters\n\n# Verify if gke-gcloud-auth-plugin installed\ngke-gcloud-auth-plugin --version\n\n# Install gke-gcloud-auth-plugin\ngcloud components install gke-gcloud-auth-plugin\n\n# Verify if gke-gcloud-auth-plugin installed\ngke-gcloud-auth-plugin --version\n```\n\n## Step-06: Remove any existing kubectl clients\n```t\n# Verify kubectl version\nkubectl version --output=yaml\nObservation: \n1. If any kubectl exists before installing it from gcloud then uninstall it.\n2. Usually if docker is installed on our desktop, its equivalent kubectl package mostly will be installed and set on PATH. If exists please remove it.  \n\n```\n\n## Step-07: Install kubectl client from gcloud CLI\n```t\n# List gcloud components\ngcloud components list\n\n## SAMPLE OUTPUT\nStatus: Not Installed\nName: kubectl\nID: kubectl\nSize: < 1 MiB\n\n# Install kubectl client\ngcloud components install kubectl\n\n# Verify kubectl version\nkubectl version --output=yaml\n```\n\n\n## Step-08: Configure kubeconfig for kubectl in local desktop terminal\n```t\n# Verify kubeconfig file\nkubectl config view\n\n# Configure kubeconfig for kubectl \ngcloud container clusters get-credentials <GKE-CLUSTER-NAME> --region <REGION> --project <PROJECT>\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# Verify kubeconfig file\nkubectl config view\n\n# Verify Kubernetes Worker Nodes\nkubectl get nodes\nObservation: \n1. It should throw warning at the end about huge difference in kubectl client version and GKE Cluster Server Version\n2. Lets fix that in next step. 
\n\n```\n## Step-09: Fix kubectl client version equal to GKE Cluster version\n- **Important Note:** You must use a kubectl version that is within one minor version difference of your Kubernetescluster control plane. \n- For example, a 1.24 kubectl client works with Kubernetes Cluster 1.23, 1.24 and 1.25 clusters.\n- As our GKE cluster version is 1.26, we will also upgrade our kubectl to 1.26\n```t\n# Verify kubectl version\nkubectl version --output=yaml\n\n# Change Directroy \nGo to Google Cloud SDK \"bin\" directory\n\n# Backup existing kubectl\nBackup \"kubectl\" to \"kubectl_bkup_1.24\"\n\n# Copy latest kubectl\nCOPY  \"kubectl.1.26\" as \"kubectl\"\n\n# Verify kubectl version\nkubectl version --output=yaml\n```\n\n## References\n- [gcloud CLI](https://cloud.google.com/sdk/gcloud)\n- [Install the Google Cloud CLI](https://cloud.google.com/sdk/docs/install-sdk#mac)"
  },
  {
    "path": "05-Docker-For-Beginners/README.md",
    "content": "---\ntitle: Docker Fundamentals\ndescription: Learn Docker Fundamentals\n---\n\n## Docker Fundamentals\n- For Docker Fundamentals github repository, please click on below link\n- https://github.com/stacksimplify/docker-fundamentals\n\n"
  },
  {
    "path": "06-kubectl-imperative-k8s-pods/README.md",
    "content": "---\ntitle: Kubernetes PODs\ndescription: Learn about Kubernetes Pods\n---\n\n## Step-01: PODs Introduction\n- What is a POD ?\n- What is a Multi-Container POD?\n\n## Step-02: PODs Demo\n### Step-02-01: Get Worker Nodes Status\n- Verify if kubernetes worker nodes are ready. \n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT-NAME>\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# Get Worker Node Status\nkubectl get nodes\n\n# Get Worker Node Status with wide option\nkubectl get nodes -o wide\n```\n\n### Step-02-02:  Create a Pod\n- Create a Pod\n```t\n# Template\nkubectl run <desired-pod-name> --image <Container-Image> \n\n# Replace Pod Name, Container Image\nkubectl run my-first-pod --image stacksimplify/kubenginx:1.0.0\n```  \n\n### Step-02-03: List Pods\n- Get the list of pods\n```t\n# List Pods\nkubectl get pods\n\n# Alias name for pods is po\nkubectl get po\n```\n\n### Step-02-04: List Pods with wide option\n- List pods with wide option which also provide Node information on which Pod is running\n```t\n# List Pods with Wide Option\nkubectl get pods -o wide\n```\n\n### Step-02-05: What happened in the backgroup when above command is run?\n1. Kubernetes created a pod\n2. Pulled the docker image from docker hub\n3. Created the container in the pod\n4. Started the container present in the pod\n\n### Step-02-06: Describe Pod\n- Describe the POD, primarily required during troubleshooting. \n- Events shown will be of a great help during troubleshooting. \n```t\n# To get list of pod names\nkubectl get pods\n\n# Describe the Pod\nkubectl describe pod <Pod-Name>\nkubectl describe pod my-first-pod \nObservation:\n1. Review Events - thats the key for troubleshooting, understanding what happened\n```\n\n### Step-02-07: Access Application\n- Currently we can access this application only inside worker nodes. \n- To access it externally, we need to create a **NodePort or Load Balancer Service**. \n- **Services** is one very very important concept in Kubernetes. \n\n### Step-02-08: Delete Pod\n```t\n# To get list of pod names\nkubectl get pods\n\n# Delete Pod\nkubectl delete pod <Pod-Name>\nkubectl delete pod my-first-pod\n```\n\n## Step-03: Load Balancer Service Introduction\n- What are Services in k8s?\n- What is a Load Balancer Service?\n- How it works?\n\n## Step-04: Demo - Expose Pod with a Service\n- Expose pod with a service (Load Balancer Service) to access the application externally (from internet)\n- **Ports**\n  - **port:** Port on which node port service listens in Kubernetes cluster internally\n  - **targetPort:** We define container port here on which our application is running.\n- Verify the following before LB Service creation\n  - Azure Standard Load Balancer created for Azure AKS Cluster\n    - Frontend IP Configuration\n    - Load Balancing Rules\n  - Azure Public IP \n```t\n# Create  a Pod\nkubectl run <desired-pod-name> --image <Container-Image> \nkubectl run my-first-pod --image stacksimplify/kubenginx:1.0.0 \n\n# Expose Pod as a Service\nkubectl expose pod <Pod-Name>  --type=LoadBalancer --port=80 --name=<Service-Name>\nkubectl expose pod my-first-pod  --type=LoadBalancer --port=80 --name=my-first-service\n\n# Get Service Info\nkubectl get service\nkubectl get svc\nObservation:\n1. Initially External-IP will show as pending and slowly it will get the external-ip assigned and displayed.\n2. 
It will take 2 to 3 minutes to get the external-ip listed\n\n# Describe Service\nkubectl describe service my-first-service\n\n# Access Application\nhttp://<External-IP-from-get-service-output>\ncurl http://<External-IP-from-get-service-output>\n```\n- Verify the following after LB Service creation\n- Google Load Balancer created, verify it. \n  - Verify Backends \n  - Verify Frontends\n- Verify **Workloads and Services** on Google GKE Dashboard GCP Console\n\n\n## Step-05: Interact with a Pod\n### Step-05-01: Verify Pod Logs\n```t\n# Get Pod Name\nkubectl get po\n\n# Dump Pod logs\nkubectl logs <pod-name>\nkubectl logs my-first-pod\n\n# Stream pod logs with -f option and access application to see logs\nkubectl logs <pod-name>\nkubectl logs -f my-first-pod\n```\n- **Important Notes**\n- Refer below link and search for **Interacting with running Pods** for additional log options\n- Troubleshooting skills are very important. So please go through all logging options available and master them.\n- **Reference:** https://kubernetes.io/docs/reference/kubectl/cheatsheet/\n\n### Step-05-02: Connect to a Container in POD and execute command\n```t\n# Connect to Nginx Container in a POD\nkubectl exec -it <pod-name> -- /bin/bash\nkubectl exec -it my-first-pod -- /bin/bash\n\n# Execute some commands in Nginx container\nls\ncd /usr/share/nginx/html\ncat index.html\nexit\n```\n### Step-05-03: Running individual commands in a Container\n```t\n# Template\nkubectl exec -it <pod-name> -- <COMMAND>\n\n# Sample Commands\nkubectl exec -it my-first-pod -- env\nkubectl exec -it my-first-pod -- ls\nkubectl exec -it my-first-pod -- cat /usr/share/nginx/html/index.html\n```\n\n## Step-06: Get YAML Output of Pod & Service\n### Get YAML Output\n```t\n# Get pod definition YAML output\nkubectl get pod my-first-pod -o yaml   \n\n# Get service definition YAML output\nkubectl get service my-first-service -o yaml   \n```\n\n## Step-07: Clean-Up\n```t\n# Get all Objects in default namespace\nkubectl get all\n\n# Delete Services\nkubectl delete svc my-first-service\n\n# Delete Pod\nkubectl delete pod my-first-pod\n\n# Get all Objects in default namespace\nkubectl get all\n```\n\n\n## LOGS - More Options\n\n```t\n# Return snapshot logs from pod nginx with only one container\nkubectl logs nginx\n\n# Return snapshot of previous terminated ruby container logs from pod web-1\nkubectl logs -p -c ruby web-1\n\n# Begin streaming the logs of the ruby container in pod web-1\nkubectl logs -f -c ruby web-1\n\n# Display only the most recent 20 lines of output in pod nginx\nkubectl logs --tail=20 nginx\n\n# Show all logs from pod nginx written in the last hour\nkubectl logs --since=1h nginx\n```\n"
  },
  {
    "path": "07-kubectl-declarative-k8s-ReplicaSets/README.md",
    "content": "---\ntitle: Kubernetes ReplicaSets\ndescription: Learn about Kubernetes ReplicaSets\n---\n\n## Step-01: Introduction to ReplicaSets\n- What are ReplicaSets?\n- What is the advantage of using ReplicaSets?\n\n## Step-02: Create ReplicaSet\n\n### Step-02-01: Create ReplicaSet\n- Create ReplicaSet\n```t\n# Kubernetes ReplicaSet\nkubectl create -f replicaset-demo.yml\n```\n- **replicaset-demo.yml**\n```yaml\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n  name: my-helloworld-rs\n  labels:\n    app: my-helloworld\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: my-helloworld\n  template:\n    metadata:\n      labels:\n        app: my-helloworld\n    spec:\n      containers:\n      - name: my-helloworld-app\n        image: stacksimplify/kube-helloworld:1.0.0\n```\n\n### Step-02-02: List ReplicaSets\n- Get list of ReplicaSets\n```t\n# List ReplicaSets\nkubectl get replicaset\nkubectl get rs\n```\n\n### Step-02-03: Describe ReplicaSet\n- Describe the newly created ReplicaSet\n```t\n# Describe ReplicaSet\nkubectl describe rs/<replicaset-name>\n\nkubectl describe rs/my-helloworld-rs\n[or]\nkubectl describe rs my-helloworld-rs\n```\n\n### Step-02-04: List of Pods\n- Get list of Pods\n```t\n# Get list of Pods\nkubectl get pods\nkubectl describe pod <pod-name>\n\n# Get list of Pods with Pod IP and Node in which it is running\nkubectl get pods -o wide\n```\n\n### Step-02-05: Verify the Owner of the Pod\n- Verify the owner reference of the pod.\n- Verify under **\"name\"** tag under **\"ownerReferences\"**. We will find the replicaset name to which this pod belongs to. \n```t\n# List Pod with Output as YAML\nkubectl get pods <pod-name> -o yaml\nkubectl get pods my-helloworld-rs-c8rrj -o yaml \n```\n\n## Step-03: Expose ReplicaSet as a Service\n- Expose ReplicaSet with a service (Load Balancer Service) to access the application externally (from internet)\n```t\n# Expose ReplicaSet as a Service\nkubectl expose rs <ReplicaSet-Name>  --type=LoadBalancer --port=80 --target-port=8080 --name=<Service-Name-To-Be-Created>\nkubectl expose rs my-helloworld-rs  --type=LoadBalancer --port=80 --target-port=8080 --name=my-helloworld-rs-service\n\n# List Services\nkubectl get service\nkubectl get svc\n```\n- **Access the Application using External or Public IP**\n```t\n# Access Application\nhttp://<External-IP-from-get-service-output>/hello\ncurl http://<External-IP-from-get-service-output>/hello\n\n# Observation\n1. Each time we access the application, request will be sent to different pod and pods id will be displayed for us. 
\n```\n\n## Step-04: Test Replicaset Reliability or High Availability \n- Test how the high availability or reliability concept is achieved automatically in Kubernetes\n- Whenever a POD is accidentally terminated due to some application issue, ReplicaSet should auto-create that Pod to maintain desired number of Replicas configured to achive High Availability.\n```t\n# To get Pod Name\nkubectl get pods\n\n# Delete the Pod\nkubectl delete pod <Pod-Name>\n\n# Verify the new pod got created automatically\nkubectl get pods   (Verify Age and name of new pod)\n``` \n\n## Step-05: Test ReplicaSet Scalability feature \n- Test how scalability is going to seamless & quick\n- Update the **replicas** field in **replicaset-demo.yml** from 3 to 6.\n```yaml\n# Before change\nspec:\n  replicas: 3\n\n# After change\nspec:\n  replicas: 6\n```\n- Update the ReplicaSet\n```t\n# Apply latest changes to ReplicaSet\nkubectl replace -f replicaset-demo.yml\n\n# Verify if new pods got created\nkubectl get pods -o wide\n```\n\n## Step-06: Delete ReplicaSet & Service\n### Step-06-01: Delete ReplicaSet\n```t\n# Delete ReplicaSet\nkubectl delete rs <ReplicaSet-Name>\n\n# Sample Commands\nkubectl delete rs/my-helloworld-rs\n[or]\nkubectl delete rs my-helloworld-rs\n\n# Verify if ReplicaSet got deleted\nkubectl get rs\n```\n\n### Step-06-02: Delete Service created for ReplicaSet\n```t\n# Delete Service\nkubectl delete svc <service-name>\n\n# Sample Commands\nkubectl delete svc my-helloworld-rs-service\n[or]\nkubectl delete svc/my-helloworld-rs-service\n\n# Verify if Service got deleted\nkubectl get svc\n```\n"
  },
  {
    "path": "07-kubectl-declarative-k8s-ReplicaSets/replicaset-demo.yml",
    "content": "apiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n  name: my-helloworld-rs\n  labels:\n    app: my-helloworld\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: my-helloworld\n  template:\n    metadata:\n      labels:\n        app: my-helloworld\n    spec:\n      containers:\n      - name: my-helloworld-app\n        image: stacksimplify/kube-helloworld:1.0.0\n"
  },
  {
    "path": "08-kubectl-imperative-k8s-deployment-CREATE/README.md",
    "content": "---\ntitle: Kubernetes - Deployment\ndescription: Learn and Implement Kubernetes Deployment\n---\n\n## Kubernetes Deployment - Topics\n1. Create Deployment\n2. Scale the Deployment\n3. Expose Deployment as a Service\n4. Update Deployment\n5. Rollback Deployment\n6. Rolling Restarts\n7. Pause & Resume Deployments\n8. Canary Deployments (Will be covered at Declarative section of Deployments)\n\n## Step-01: Introduction to Deployments\n- What is a Deployment?\n- What all we can do using Deployment?\n- Create a Deployment\n- Scale the Deployment\n- Expose the Deployment as a Service\n\n## Step-02: Create Deployment\n- Create Deployment to rollout a ReplicaSet\n- Verify Deployment, ReplicaSet & Pods\n- **Docker Image Location:** https://hub.docker.com/repository/docker/stacksimplify/kubenginx\n```t\n# Create Deployment\nkubectl create deployment <Deplyment-Name> --image=<Container-Image>\nkubectl create deployment my-first-deployment --image=stacksimplify/kubenginx:1.0.0 \n\n# Verify Deployment\nkubectl get deployments\nkubectl get deploy \n\n# Describe Deployment\nkubectl describe deployment <deployment-name>\nkubectl describe deployment my-first-deployment\n\n# Verify ReplicaSet\nkubectl get rs\n\n# Verify Pod\nkubectl get po\n```\n### Update Change-Cause for the Kubernetes Deployment - Rollout History\n- **Observation:** We have the rollout history, so we can switch back to older revisions using revision history available to us\n```t\n# Verify Rollout History\nkubectl rollout history deployment/my-first-deployment\n\n# Update REVISION CHANGE-CAUSE for Kubernetes Deployment\nkubectl annotate deployment/my-first-deployment kubernetes.io/change-cause=\"Deployment CREATE - App Version 1.0.0\"\n\n# Verify Rollout History\nkubectl rollout history deployment/my-first-deployment\n```\n## Step-03: Scaling a Deployment\n- Scale the deployment to increase the number of replicas (pods)\n```t\n# Scale Up the Deployment\nkubectl scale --replicas=10 deployment/<Deployment-Name>\nkubectl scale --replicas=10 deployment/my-first-deployment \n\n# Verify Deployment\nkubectl get deploy\n\n# Verify ReplicaSet\nkubectl get rs\n\n# Verify Pods\nkubectl get po\n\n# Scale Down the Deployment\nkubectl scale --replicas=2 deployment/my-first-deployment \nkubectl get deploy\n```\n\n## Step-04: Expose Deployment as a Service\n- Expose **Deployment** with a service (LoadBalancer Service) to access the application externally (from internet)\n```t\n# Expose Deployment as a Service\nkubectl expose deployment <Deployment-Name>  --type=LoadBalancer --port=80 --target-port=80 --name=<Service-Name-To-Be-Created>\nkubectl expose deployment my-first-deployment --type=LoadBalancer --port=80 --target-port=80 --name=my-first-deployment-service\n\n# Get Service Info\nkubectl get svc\n```\n- **Access the Application using Public IP**\n```t\n# Access Application\nhttp://<External-IP-from-get-service-output>\ncurl http://<External-IP-from-get-service-output>\n```"
  },
  {
    "path": "09-kubectl-imperative-k8s-deployment-UPDATE/README.md",
    "content": "---\ntitle: Kubernetes - Update Deployment\ndescription: Learn and Implement Kubernetes Update Deployment\n---\n## Step-00: Introduction\n- We can update deployments using two options\n  - Set Image\n  - Edit Deployment\n\n## Step-01: Updating Application version V1 to V2 using \"Set Image\" Option\n### Update Deployment\n- **Observation:** Please Check the container name in `spec.container.name` yaml output and make a note of it and \nreplace in `kubectl set image` command <Container-Name>\n```t\n# Get Container Name from current deployment\nkubectl get deployment my-first-deployment -o yaml\n\n# Update Deployment - SHOULD WORK NOW\nkubectl set image deployment/<Deployment-Name> <Container-Name>=<Container-Image> \nkubectl set image deployment/my-first-deployment kubenginx=stacksimplify/kubenginx:2.0.0 \n```\n\n### Verify Rollout Status (Deployment Status)\n- **Observation:** By default, rollout happens in a rolling update model, so no downtime.\n```t\n# Verify Rollout Status \nkubectl rollout status deployment/my-first-deployment\n\n# Verify Deployment\nkubectl get deploy\n```\n### Describe Deployment\n- **Observation:**\n  - Verify the Events and understand that Kubernetes by default do  \"Rolling Update\"  for new application releases. \n  - With that said, we will not have downtime for our application.\n```t\n# Descibe Deployment\nkubectl describe deployment my-first-deployment\n```\n### Verify ReplicaSet\n- **Observation:** New ReplicaSet will be created for new version\n```t\n# Verify ReplicaSet\nkubectl get rs\n```\n\n### Verify Pods\n- **Observation:** Pod template hash label of new replicaset should be present for PODs letting us \nknow these pods belong to new ReplicaSet.\n```t\n# List Pods\nkubectl get po\n```\n### Access the Application using Public IP\n- We should see `Application Version:V2` whenever we access the application in browser\n```t\n# Get Load Balancer IP\nkubectl get svc\n\n# Application URL\nhttp://<External-IP-from-get-service-output>\n```\n\n### Update Change-Cause for the Kubernetes Deployment - Rollout History\n- **Observation:** We have the rollout history, so we can switch back to older revisions using revision history available to us.  
\n```t\n# Verify Rollout History\nkubectl rollout history deployment/my-first-deployment\n\n# Update REVISION CHANGE-CAUSE\nkubectl annotate deployment/my-first-deployment kubernetes.io/change-cause=\"Deployment UPDATE - App Version 2.0.0 - SET IMAGE OPTION\"\n\n# Verify Rollout History\nkubectl rollout history deployment/my-first-deployment\n```\n\n\n## Step-02: Update the Application from V2 to V3 using \"Edit Deployment\" Option\n### Edit Deployment\n```t\n# Edit Deployment\nkubectl edit deployment/<Deployment-Name> \nkubectl edit deployment/my-first-deployment \n```\n\n```yaml\n# Change From 2.0.0\n    spec:\n      containers:\n      - image: stacksimplify/kubenginx:2.0.0\n\n# Change To 3.0.0\n    spec:\n      containers:\n      - image: stacksimplify/kubenginx:3.0.0\n```\n\n\n### Verify Rollout Status\n- **Observation:** Rollout happens in a rolling update model, so no downtime.\n```t\n# Verify Rollout Status \nkubectl rollout status deployment/my-first-deployment\n\n# Describe Deployment\nkubectl describe deployment/my-first-deployment\n```\n### Verify Replicasets\n- **Observation:**  We should see 3 ReplicaSets now, as we have updated our application to 3rd version 3.0.0\n```t\n# Verify ReplicaSet and Pods\nkubectl get rs\nkubectl get po\n```\n\n### Access the Application using Public IP\n- We should see `Application Version:V3` whenever we access the application in browser\n```t\n# Get Load Balancer IP\nkubectl get svc\n\n# Application URL\nhttp://<External-IP-from-get-service-output>\n```\n\n### Update Change-Cause for the Kubernetes Deployment - Rollout History\n- **Observation:** We have the rollout history, so we can switch back to older revisions using revision history available to us. \n```t\n# Verify Rollout History\nkubectl rollout history deployment/my-first-deployment\n\n# Update REVISION CHANGE-CAUSE\nkubectl annotate deployment/my-first-deployment kubernetes.io/change-cause=\"Deployment UPDATE - App Version 3.0.0 - EDIT DEPLOYMENT OPTION\"\n\n# Verify Rollout History\nkubectl rollout history deployment/my-first-deployment\n```"
  },
  {
    "path": "10-kubectl-imperative-k8s-deployment-ROLLBACK/README.md",
    "content": "---\ntitle: Kubernetes - Rollback Deployment\ndescription: Learn and Implement Kubernetes Rollback Deployment\n---\n\n## Step-00: Introduction\n- We can rollback a deployment in two ways.\n  - Previous Version\n  - Specific Version\n\n## Step-01: Rollback a Deployment to previous version\n\n### Check the Rollout History of a Deployment\n```t\n# List Deployment Rollout History\nkubectl rollout history deployment/<Deployment-Name>\nkubectl rollout history deployment/my-first-deployment  \n```\n\n### Verify changes in each revision\n- **Observation:** Review the \"Annotations\" and \"Image\" tags for clear understanding about changes.\n```t\n# List Deployment History with revision information\nkubectl rollout history deployment/my-first-deployment --revision=1\nkubectl rollout history deployment/my-first-deployment --revision=2\nkubectl rollout history deployment/my-first-deployment --revision=3\n```\n\n\n### Rollback to previous version\n- **Observation:** If we rollback, it will go back to revision-2 and its number increases to revision-4\n```t\n# Undo Deployment\nkubectl rollout undo deployment/my-first-deployment\n\n# List Deployment Rollout History\nkubectl rollout history deployment/my-first-deployment  \n```\n\n### Verify Deployment, Pods, ReplicaSets\n```t\n# Verify Deployment, Pods, ReplicaSets\nkubectl get deploy\nkubectl get rs\nkubectl get po\nkubectl describe deploy my-first-deployment\n```\n\n### Access the Application using Public IP\n- We should see `Application Version:V2` whenever we access the application in browser\n```t\n# Get Load Balancer IP\nkubectl get svc\n\n# Application URL\nhttp://<External-IP-from-get-service-output>\n```\n\n\n## Step-02: Rollback to specific revision\n### Check the Rollout History of a Deployment\n```t\n# List Deployment Rollout History\nkubectl rollout history deployment/<Deployment-Name>\nkubectl rollout history deployment/my-first-deployment \n```\n### Rollback to specific revision\n```t\n# Rollback Deployment to Specific Revision\nkubectl rollout undo deployment/my-first-deployment --to-revision=3\n```\n\n### List Deployment History\n- **Observation:** If we rollback to revision 3, it will go back to revision-3 and its number increases to revision-5 in rollout history\n```t\n# List Deployment Rollout History\nkubectl rollout history deployment/my-first-deployment\n```\n\n\n### Access the Application using Public IP\n- We should see `Application Version:V3` whenever we access the application in browser\n```t\n# Get Load Balancer IP\nkubectl get svc\n\n# Application URL\nhttp://<Load-Balancer-IP>\n```\n\n## Step-03: Rolling Restarts of Application\n- Rolling restarts will kill the existing pods and recreate new pods in a rolling fashion. \n```t\n# Rolling Restarts\nkubectl rollout restart deployment/<Deployment-Name>\nkubectl rollout restart deployment/my-first-deployment\n\n# Get list of Pods\nkubectl get po\n```"
  },
  {
    "path": "11-kubectl-imperative-k8s-deployment-PAUSE-RESUME/README.md",
    "content": "---\ntitle: Kubernetes - Pause & Resume Deployments\ndescription: Implement Kubernetes - Pause & Resume Deployments\n---\n## Step-00: Introduction\n- Why do we need Pausing & Resuming Deployments?\n  - If we want to make multiple changes to our Deployment, we can pause the deployment make all changes and resume it. \n- We are going to update our Application Version from **V3 to V4** as part of learning \"Pause and Resume Deployments\"  \n\n## Step-01: Pausing & Resuming Deployments\n### Check current State of Deployment & Application\n ```t\n# Check the Rollout History of a Deployment\nkubectl rollout history deployment/my-first-deployment  \nObservation: Make a note of last version number\n\n# Get list of ReplicaSets\nkubectl get rs\nObservation: Make a note of number of replicaSets present.\n\n# Access the Application \nhttp://<External-IP-from-get-service-output>\nObservation: Make a note of application version\n```\n\n### Pause Deployment and Two Changes\n```t\n# Pause the Deployment\nkubectl rollout pause deployment/<Deployment-Name>\nkubectl rollout pause deployment/my-first-deployment\n\n# Update Deployment - Application Version from V3 to V4\nkubectl set image deployment/my-first-deployment kubenginx=stacksimplify/kubenginx:4.0.0 \n\n# Check the Rollout History of a Deployment\nkubectl rollout history deployment/my-first-deployment  \nObservation: No new rollout should start, we should see same number of versions as we check earlier with last version number matches which we have noted earlier.\n\n# Get list of ReplicaSets\nkubectl get rs\nObservation: No new replicaSet created. We should have same number of replicaSets as earlier when we took note. \n\n# Make one more change: set limits to our container\nkubectl set resources deployment/my-first-deployment -c=kubenginx --limits=cpu=20m,memory=30Mi\n```\n### Resume Deployment \n```t\n# Resume the Deployment\nkubectl rollout resume deployment/my-first-deployment\n\n# Check the Rollout History of a Deployment\nkubectl rollout history deployment/my-first-deployment  \nObservation: You should see a new version got created\n\n# Update REVISION CHANGE-CAUSE\nkubectl annotate deployment/my-first-deployment kubernetes.io/change-cause=\"Deployment PAUSE RESUME Demo - App Version 4.0.0 \"\n\n# Check the Rollout History of a Deployment\nkubectl rollout history deployment/my-first-deployment\n\n# Get list of ReplicaSets\nkubectl get rs\nObservation: You should see new ReplicaSet.\n\n# Get Load Balancer IP\nkubectl get svc\n```\n### Access Application\n```t\n# Access the Application \nhttp://<External-IP-from-get-service-output>\nObservation: You should see Application V4 version\n```\n\n\n## Step-02: Clean-Up\n```t\n# Delete Deployment\nkubectl delete deployment my-first-deployment\n\n# Delete Service\nkubectl delete svc my-first-deployment-service\n\n# Get all Objects from Kubernetes default namespace\nkubectl get all\n```"
  },
  {
    "path": "12-kubectl-imperative-k8s-services/README.md",
    "content": "---\ntitle: Kubernetes Services\ndescription: Learn about Kubernetes ClusterIP and Load Balancer Services\n---\n## Step-01: Introduction to Services\n- **Service Types**\n  1. ClusterIp\n  2. NodePort\n  3. LoadBalancer\n  4. ExternalName\n  5. Ingress\n- We are going to look in to ClusterIP and LoadBalancer Service in this section with a detailed example. \n- LoadBalancer Type is primarily for cloud providers and it will differ cloud to cloud, so we will do it accordingly (per cloud basis)\n- ExternalName doesn't have Imperative commands and we need to write YAML definition for the same, so we will look in to it as and when it is required in our course. \n\n## Step-02: ClusterIP Service - Backend Application Setup\n- Create a deployment for Backend Application (Spring Boot REST Application)\n- Create a ClusterIP service for load balancing backend application. \n```t\n# Create Deployment for Backend Rest App\nkubectl create deployment my-backend-rest-app --image=stacksimplify/kube-helloworld:1.0.0 \nkubectl get deploy\n\n# Create ClusterIp Service for Backend Rest App\nkubectl expose deployment my-backend-rest-app --port=8080 --target-port=8080 --name=my-backend-service\nkubectl get svc\nObservation: We don't need to specify \"--type=ClusterIp\" because default setting is to create ClusterIp Service. \n```\n- **Important Note:** If backend application port (Container Port: 8080) and Service Port (8080) are same we don't need to use **--target-port=8080** but for avoiding the confusion i have added it. Same case applies to frontend application and service. \n\n- **Backend HelloWorld Application Source** [kube-helloworld](https://github.com/stacksimplify/kubernetes-fundamentals/tree/master/00-Docker-Images/02-kube-backend-helloworld-springboot/kube-helloworld)\n\n\n## Step-03: LoadBalancer Service - Frontend Application Setup\n- We have implemented **LoadBalancer Service** multiple times so far (in pods, replicasets and deployments), even then we are going to implement one more time to get a full architectural view in relation with ClusterIp service. \n- Create a deployment for Frontend Application (Nginx acting as Reverse Proxy)\n- Create a LoadBalancer service for load balancing frontend application. \n- **Important Note:** In Nginx reverse proxy, ensure backend service name `my-backend-service` is updated when you are building the frontend container. 
We already built it and put ready for this demo (stacksimplify/kube-frontend-nginx:1.0.0)\n- **Nginx Conf File**\n```conf\nserver {\n    listen       80;\n    server_name  localhost;\n    location / {\n    # Update your backend application Kubernetes Cluster-IP Service name  and port below      \n    # proxy_pass http://<Backend-ClusterIp-Service-Name>:<Port>;      \n    proxy_pass http://my-backend-service:8080;\n    }\n    error_page   500 502 503 504  /50x.html;\n    location = /50x.html {\n        root   /usr/share/nginx/html;\n    }\n}\n```\n- **Docker Image Location:** https://hub.docker.com/repository/docker/stacksimplify/kube-frontend-nginx\n- **Frontend Nginx Reverse Proxy Application Source** [kube-frontend-nginx](https://github.com/stacksimplify/kubernetes-fundamentals/tree/master/00-Docker-Images/03-kube-frontend-nginx)\n```t\n# Create Deployment for Frontend Nginx Proxy\nkubectl create deployment my-frontend-nginx-app --image=stacksimplify/kube-frontend-nginx:1.0.0 \nkubectl get deploy\n\n# Create LoadBalancer Service for Frontend Nginx Proxy\nkubectl expose deployment my-frontend-nginx-app  --type=LoadBalancer --port=80 --target-port=80 --name=my-frontend-service\nkubectl get svc\n\n# Get Load Balancer IP\nkubectl get svc\nhttp://<External-IP-from-get-service-output>/hello\ncurl http://<External-IP-from-get-service-output>/hello\n\n# Scale backend with 10 replicas\nkubectl scale --replicas=10 deployment/my-backend-rest-app\n\n# Test again to view the backend service Load Balancing\nhttp://<External-IP-from-get-service-output>/hello\ncurl http://<External-IP-from-get-service-output>/hello\n```\n\n## Step-04: Clean-Up Kubernetes Deployment and Services\n```t\n# List Services\nkubectl get svc \n\n# Delete Services\nkubectl delete service my-backend-service \nkubectl delete service my-frontend-service \n\n# List Deployments\nkubectl get deploy\n\n# Delete Deployments\nkubectl delete deployment my-backend-rest-app   \nkubectl delete deployment my-frontend-nginx-app\n```\n"
  },
  {
    "path": "13-YAML-Basics/README.md",
    "content": "---\ntitle: YAML Basics for Kubernetes\ndescription: Learn YAML Basics\n---\n\n## Step-01: Comments & Key Value Pairs\n- Space after colon is mandatory to differentiate key and value\n```yml\n# Defining simple key value pairs\nname: kalyan\nage: 23\ncity: Hyderabad\n```\n\n## Step-02: Dictionary / Map\n- Set of properties grouped together after an item\n- Equal amount of blank space required for all the items under a dictionary\n```yml\nperson:\n  name: kalyan\n  age: 23\n  city: Hyderabad\n```\n\n## Step-03: Array / Lists\n- Dash indicates an element of an array\n```yml\nperson: # Dictionary\n  name: kalyan\n  age: 23\n  city: Hyderabad\n  hobbies: # List  \n    - cycling\n    - cookines\n  hobbies: [cycling, cooking]   # List with a differnt notation  \n```  \n\n## Step-04: Multiple Lists\n- Dash indicates an element of an array\n```yml\nperson: # Dictionary\n  name: kalyan\n  age: 23\n  city: Hyderabad\n  hobbies: # List  \n    - cycling\n    - cooking\n  hobbies: [cycling, cooking]   # List with a differnt notation  \n  friends: # Multiple Lists\n    - name: friend1\n      age: 22\n    - name: friend2\n      age: 25            \n```  \n\n\n## Step-05: Sample Pod Tempalte for Reference\n```yml\napiVersion: v1 # String\nkind: Pod  # String\nmetadata: # Dictionary\n  name: myapp-pod\n  labels: # Dictionary \n    app: myapp         \nspec:\n  containers: # List\n    - name: myapp\n      image: stacksimplify/kubenginx:1.0.0\n      ports: # Multiple Lists\n        - containerPort: 80\n          protocol: \"TCP\"\n        - containerPort: 81\n          protocol: \"TCP\"\n```\n\n\n\n\n"
  },
  {
    "path": "13-YAML-Basics/sample-file.yml",
    "content": "# Simple Key value Pairs\nperson: # Dictionary\n  name: kalyan\n  age: 23\n  city: Hyderabd\n  hobbies: # List\n    - cooking\n    - cycling\n  friends: # Multiple lists\n    - name: friend1\n      age: 23\n    - name: friend2\n      age: 22\n--- # YAML Document Separator\napiVersion: v1 # String\nkind: Pod  # String\nmetadata: # Dictionary\n  name: myapp-pod\n  labels: # Dictionary \n    app: myapp    \n    tier: frontend     \nspec:\n  containers: # List\n    - name: myapp\n      image: stacksimplify/kubenginx:1.0.0\n      ports: # Multiple Lists\n        - containerPort: 80\n          protocol: \"TCP\"\n        - containerPort: 81\n          protocol: \"TCP\"  \n\n\n                     \n\n  "
  },
  {
    "path": "13-YAML-Basics/yaml-demo.yaml",
    "content": "# Simple Key Value Pairs\nperson: # Dictionary\n  name: kalyan\n  age: 23\n  city: Hyderabad\n  hobbies: # List \n    - cooking\n    - cycling \n  hobbies: [cooking, cycling]   # Another Notation for Lists\n  friends: # Multiple Lists\n    - name: friend1\n      age: 23\n    - name: friend2\n      age: 22   \n--- # YAML Document Separator         \napiVersion: v1 # String\nkind: Pod  # String\nmetadata: # Dictionary\n  name: myapp-pod\n  labels: # Dictionary \n    app: myapp         \nspec:\n  containers: # List\n    - name: myapp\n      image: stacksimplify/kubenginx:1.0.0\n      ports: # Multiple Lists\n        - containerPort: 80\n          protocol: \"TCP\"\n        - containerPort: 81\n          protocol: \"TCP\""
  },
  {
    "path": "14-yaml-declarative-k8s-pods/README.md",
    "content": "---\ntitle: Kubernetes Pods with YAML\ndescription: Learn to write and test Kubernetes Pods with YAML\n---\n\n## Step-01: Kubernetes YAML Top level Objects\n- Discuss about the k8s YAML top level objects\n- **kube-base-definition.yml**\n```yml\napiVersion:\nkind:\nmetadata:\n  \nspec:\n```\n- [Kubernetes Reference](https://kubernetes.io/docs/reference/)\n- [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/)\n-  [Pod API Objects Reference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core)\n\n## Step-02: Create Simple Pod Definition using YAML \n- We are going to create a very basic pod definition\n- **01-pod-definition.yml**\n```yaml\napiVersion: v1 # String\nkind: Pod  # String\nmetadata: # Dictionary\n  name: myapp-pod\n  labels: # Dictionary \n    app: myapp         \nspec:\n  containers: # List\n    - name: myapp\n      image: stacksimplify/kubenginx:1.0.0\n      ports:\n        - containerPort: 80\n```\n- **Create Pod**\n```t\n# Change Directory\ncd kube-manifests\n\n# Create Pod\nkubectl create -f 01-pod-definition.yml\n[or]\nkubectl apply -f 01-pod-definition.yml\n\n# List Pods\nkubectl get pods\n```\n\n## Step-03: Create a LoadBalancer Service\n- **02-pod-LoadBalancer-service.yml**\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: myapp-pod-loadbalancer-service  # Name of the Service\nspec:\n  type: LoadBalancer\n  selector:\n  # Loadbalance traffic across Pods matching this label selector\n    app: myapp\n  # Accept traffic sent to port 80    \n  ports: \n    - name: http\n      port: 80    # Service Port\n      targetPort: 80 # Container Port\n```\n- **Create LoadBalancer Service for Pod**\n```t\n# Create Service\nkubectl apply -f 02-pod-LoadBalancer-service.yml\n\n# List Service\nkubectl get svc\n\n# Access Application\nhttp://<Load-Balancer-Service-IP>\ncurl http://<Load-Balancer-Service-IP>\n```\n\n## Step-04: Clean-Up Kubernetes Pod and Service\n```t\n# Change Directory\ncd kube-manifests\n\n# Delete Pod\nkubectl delete -f 01-pod-definition.yml\n\n# Delete Service\nkubectl delete -f  02-pod-LoadBalancer-service.yml\n```\n\n\n## API Object References\n- [Kubernetes API Spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/)\n- [Pod Spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core)\n- [Service Spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#service-v1-core)\n- [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/)\n\n\n"
  },
  {
    "path": "14-yaml-declarative-k8s-pods/kube-base-definition.yml",
    "content": "apiVersion: \nkind: \nmetadata:\n\nspec:\n    \n# Types of Kubernetes Objects\n# Pod, ReplicaSet, Deployment, Service and many more\n\n# apiVersion: version of k8s objects\n# kind: k8s objects \n# metadata: define name and labels for k8s objects\n# spec: specification or real definition for k8s objects\n"
  },
  {
    "path": "14-yaml-declarative-k8s-pods/kube-manifests/01-pod-definition.yml",
    "content": "apiVersion: v1 # String\nkind: Pod # String\nmetadata: # Dictionary\n  name: myapp-pod\n  labels: # Dictionary\n    app: myapp # Key Value Pairs\nspec:\n  containers: # List\n    - name: myapp\n      image: stacksimplify/kubenginx:1.0.0\n      ports: # List\n        - containerPort: 80\n\n\n    "
  },
  {
    "path": "14-yaml-declarative-k8s-pods/kube-manifests/02-pod-LoadBalancer-service.yml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: myapp-pod-loadbalancer-service\nspec:\n  type: LoadBalancer\n  # Loadbalance traffic across Pods matching this label selector\n  selector: \n    app: myapp \n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port"
  },
  {
    "path": "15-yaml-declarative-k8s-replicasets/README.md",
    "content": "---\ntitle: Kubernetes ReplicaSets with YAML\ndescription: Learn to write and test Kubernetes ReplicaSets with YAML\n---\n\n## Step-01: Create ReplicaSet Definition\n- **01-replicaset-definition.yml**\n```yaml\napiVersion: apps/v1\nkind: ReplicaSet\nmetadata:\n  name: myapp2-rs\nspec:\n  replicas: 3 # 3 Pods should exist at all times.\n  selector:  # Pods label should be defined in ReplicaSet label selector\n    matchLabels:\n      app: myapp2\n  template:\n    metadata:\n      name: myapp2-pod\n      labels:\n        app: myapp2 # Atleast 1 Pod label should match with ReplicaSet Label Selector\n    spec:\n      containers:\n      - name: myapp2\n        image: stacksimplify/kubenginx:2.0.0\n        ports:\n          - containerPort: 80\n```\n## Step-02: Create ReplicaSet\n- Create ReplicaSet with 3 Replicas\n```t\n# Create ReplicaSet\nkubectl apply -f 01-replicaset-definition.yml\n\n# List Replicasets\nkubectl get rs\n```\n- Delete a pod\n- ReplicaSet immediately creates the pod. \n```t\n# List Pods\nkubectl get pods\n\n# Delete Pod\nkubectl delete pod <Pod-Name>\n```\n\n## Step-03: Create LoadBalancer Service for ReplicaSet\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: replicaset-loadbalancer-service\nspec:\n  type: LoadBalancer \n  selector: \n    app: myapp2 \n  ports: \n    - name: http\n      port: 80\n      targetPort: 80\n     \n```\n- **Create LoadBalancer Service for ReplicaSet & Test**\n```t\n# Create LoadBalancer Service\nkubectl apply -f 02-replicaset-LoadBalancer-servie.yml\n\n# List LoadBalancer Service\nkubectl get svc\n\n# Access Application\nhttp://<Load-Balancer-Service-IP>\n```\n\n\n## Step-04: Clean-Up Kubernetes ReplicaSet and Service\n```t\n# Change Directory\ncd kube-manifests\n\n# Delete Pod\nkubectl delete -f 01-replicaset-definition.yml\n\n# Delete Service\nkubectl delete -f  02-replicaset-LoadBalancer-servie.yml\n```\n\n\n## API References\n- [ReplicaSet](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#replicaset-v1-apps)"
  },
  {
    "path": "15-yaml-declarative-k8s-replicasets/kube-base-definition.yml",
    "content": "apiVersion:\nkind:\nmetadata:\n  \nspec:\n    "
  },
  {
    "path": "15-yaml-declarative-k8s-replicasets/kube-manifests/01-replicaset-definition.yml",
    "content": "apiVersion: apps/v1\nkind: ReplicaSet  \nmetadata: # Dictionary\n  name: myapp2-rs\nspec: # Dictionary\n  replicas: 3\n  selector: \n    matchLabels: \n      app: myapp2\n  template:\n    metadata: # Dictionary\n      name: myapp2-pod\n      labels:\n        app: myapp2 # Key Value Pairs   \n    spec:\n      containers: # List\n        - name: myapp2-container\n          image: stacksimplify/kubenginx:2.0.0\n          ports: \n            - containerPort: 80          "
  },
  {
    "path": "15-yaml-declarative-k8s-replicasets/kube-manifests/02-replicaset-LoadBalancer-servie.yml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: replicaset-loadbalancer-service\nspec:\n  type: LoadBalancer\n  # Loadbalance traffic across Pods matching this label selector\n  selector: \n    app: myapp2 \n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port"
  },
  {
    "path": "16-yaml-declarative-k8s-deployments/README.md",
    "content": "---\ntitle: Kubernetes Deployments with YAML\ndescription: Learn to write and test Kubernetes Deployments with YAML\n---\n\n## Step-01: Copy templates from ReplicaSet\n- Copy templates from ReplicaSet and change the `kind: Deployment` \n- Update Container Image version to `3.0.0`\n- Change all names to Deployment\n- Change all labels and selectors to `myapp3`\n\n```t\n# Change Directory\ncd kube-manifests\n\n# Create Deployment\nkubectl apply -f 01-deployment-definition.yml\nkubectl get deploy\nkubectl get rs\nkubectl get po\n\n# Create LoadBalancer Service\nkubectl apply -f 02-deployment-LoadBalancer-service.yml\n\n# List Service\nkubectl get svc\n\n# Get Public IP\nkubectl get nodes -o wide\n\n# Access Application\nhttp://<Load-Balancer-Service-IP>\n```\n\n## Step-02: Clean-Up Kubernetes Deployment and Service\n```t\n# Change Directory\ncd kube-manifests\n\n# Delete Deployment\nkubectl delete -f 01-deployment-definition.yml\n\n# Delete LoadBalancer Service\nkubectl delete -f 02-deployment-LoadBalancer-service.yml\n```\n\n\n## API References\n- [Deployment](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#deployment-v1-apps)\n"
  },
  {
    "path": "16-yaml-declarative-k8s-deployments/kube-base-definition.yml",
    "content": "apiVersion:\nkind:\nmetadata:\n  \nspec:\n    "
  },
  {
    "path": "16-yaml-declarative-k8s-deployments/kube-manifests/01-deployment-definition.yml",
    "content": "apiVersion: apps/v1\nkind: Deployment  \nmetadata: # Dictionary\n  name: myapp3-deployment\nspec: # Dictionary\n  replicas: 3\n  selector: \n    matchLabels: \n      app: myapp3\n  template:\n    metadata: # Dictionary\n      name: myapp3-pod\n      labels:\n        app: myapp3 # Key Value Pairs   \n    spec:\n      containers: # List\n        - name: myapp3-container\n          image: stacksimplify/kubenginx:3.0.0\n          ports: \n            - containerPort: 80          "
  },
  {
    "path": "16-yaml-declarative-k8s-deployments/kube-manifests/02-deployment-LoadBalancer-servie.yml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: deployment-loadbalancer-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp3\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port"
  },
  {
    "path": "17-yaml-declarative-k8s-services/README.md",
    "content": "---\ntitle: Kubernetes Services with YAML\ndescription: Learn to write and test Kubernetes Services with YAML\n---\n\n## Step-01: Introduction to Services\n- We are going to look in to below two services in detail with a frotnend and backend example\n  - LoadBalancer Service\n  - ClusterIP Service\n\n## Step-02: Create Backend Deployment & Cluster IP Service\n- Write the Deployment template for backend REST application.\n- Write the Cluster IP service template for backend REST application.\n- **Important Notes:** \n  - Name of Cluster IP service should be `name: my-backend-service` because  same is configured in frontend nginx reverse proxy `default.conf`. \n  - Test with different name and understand the issue we face\n  - We have also discussed about in our  [Section-12](https://github.com/stacksimplify/google-kubernetes-engine/tree/main/12-kubectl-imperative-k8s-services)\n```t\n# Change Directory\ncd kube-manifests\n\n# Deploy Backend Kubernetes Deployment and ClusterIP Service \nkubectl get all\nkubectl apply -f 01-backend-deployment.yml -f 02-backend-clusterip-service.yml\nkubectl get all\n```\n\n\n## Step-03: Create Frontend Deployment & LoadBalancer Service\n- Write the Deployment template for frontend Nginx Application\n- Write the LoadBalancer service template for frontend Nginx Application\n```t\n# Change Directory\ncd kube-manifests\n\n# Deploy Frontend Kubernetes Deployment and LoadBalancer Service \nkubectl get all\nkubectl apply -f 03-frontend-deployment.yml -f 04-frontend-LoadBalancer-service.yml\nkubectl get all\n```\n- **Access REST Application**\n```t\n# Get Service IP\nkubectl get svc\n\n# Access REST Application \nhttp://<Load-Balancer-Service-IP>/hello\ncurl http://<Load-Balancer-Service-IP>/hello\n```\n\n## Step-04: Delete & Recreate Objects using kubectl apply\n### Delete Objects (file by file)\n```t\n# Change Directory \ncd kube-manifests/\n\n# Delete Objects File by file\nkubectl delete -f 01-backend-deployment.yml -f 02-backend-clusterip-service.yml -f 03-frontend-deployment.yml -f 04-frontend-LoadBalancer-service.yml\nkubectl get all\n```\n### Recreate Objects using YAML files in a folder\n```t\n# Change Directory \ncd 17-yaml-declarative-k8s-services/\n\n# Recreate Objects by referencing a folder\nkubectl apply -f kube-manifests/\nkubectl get all\n```\n\n### Delete Objects using YAML files in folder\n```t\n# Change Directory \ncd 17-yaml-declarative-k8s-services/\n\n# Delete Objects by just referencing a folder\nkubectl delete -f kube-manifests/\nkubectl get all\n```\n\n\n## Additional References - Use Label Selectors for get and delete\n- [Labels](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively)\n- [Labels-Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors)"
  },
  {
    "path": "17-yaml-declarative-k8s-services/kube-base-definition.yml",
    "content": "apiVersion: \nkind: \nmetadata:\n\nspec:\n"
  },
  {
    "path": "17-yaml-declarative-k8s-services/kube-manifests/01-backend-deployment.yml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: backend-restapp\n  labels:\n    app: backend-restapp\n    tier: backend \nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: backend-restapp\n  template:\n    metadata:\n      labels:\n        app: backend-restapp\n        tier: backend \n    spec: \n      containers:\n        - name: backend-restapp\n          image: stacksimplify/kube-helloworld:1.0.0\n          ports:\n            - containerPort: 8080        "
  },
  {
    "path": "17-yaml-declarative-k8s-services/kube-manifests/02-backend-clusterip-service.yml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: my-backend-service ## VERY VERY IMPORTANT - NGINX PROXYPASS needs this name\n  labels:\n    app: backend-restapp\n    tier: backend   \nspec:\n  #type: ClusterIP is a default service in k8s\n  selector:\n    app: backend-restapp\n  ports:\n    - name: http\n      port: 8080 # ClusterIP Service Port\n      targetPort: 8080 # Container Port\n"
  },
  {
    "path": "17-yaml-declarative-k8s-services/kube-manifests/03-frontend-deployment.yml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: frontend-nginxapp\n  labels:\n    app: frontend-nginxapp\n    tier: frontend\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: frontend-nginxapp\n  template: \n    metadata:\n      labels: \n        app: frontend-nginxapp\n        tier: frontend\n    spec: \n      containers: \n        - name: frontend-nginxapp\n          image: stacksimplify/kube-frontend-nginx:1.0.0\n          ports:\n            - containerPort: 80"
  },
  {
    "path": "17-yaml-declarative-k8s-services/kube-manifests/04-frontend-LoadBalancer-service.yml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: frontend-nginxapp-loadbalancer-service\n  labels:\n    app: frontend-nginxapp\n    tier: frontend  \nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: frontend-nginxapp\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port"
  },
  {
    "path": "18-GKE-NodePort-Service/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE NodePort Service\ndescription: Implement GCP Google Kubernetes Engine GKE NodePort Service\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# List GKE Kubernetes Worker Nodes\nkubectl get nodes\n\n# List GKE Kubernetes Worker Nodes with -o wide option\nkubectl get nodes -o wide\nObservation: \n1. You should see External-IP Address (Public IP accesible via internet)\n2. That is the key thing for testing the Kubernetes NodePort Service on GKE Cluster\n```\n## Step-01: Introduction\n- Implement Kubernetes NodePort Service \n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80      \n```\n\n## Step-03: 02-kubernetes-nodeport-service.yaml\n- If you don't speciy `nodePort: 30080` it will dynamically assign one port from range `30000-32768`\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-nodeport-service\nspec:\n  type: NodePort # clusterIP, # NodePort, # LoadBalancer, # ExternalName\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n      nodePort: 30080 # NodePort (Optional)(Node Port Range: 30000-32768)\n```\n\n\n## Step-04: Deply Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get po\n\n# List Services\nkubectl get svc\n```\n\n## Step-05: Access Application\n```t\n# List Kubernetes Worker Node with -0 wide\nkubectl get nodes -o wide\nObservation: \n1. Make a note of any one Node External-IP (Public IP Address)\n\n# Access Application\nhttp://<NODE-EXTERNAL-IP>:<NodePort>\nhttp://104.154.52.12:30080\nObservation:\n1. This should fail\n```\n\n## Step-06: Create Firewall Rule\n```t\n# Create Firewall Rule\ngcloud compute firewall-rules create fw-rule-gke-node-port \\\n    --allow tcp:NODE_PORT\n\n# Replace NODE_PORT\ngcloud compute firewall-rules create fw-rule-gke-node-port \\\n    --allow tcp:30080   \n\n# List Firewall Rules\ngcloud compute firewall-rules list    \n```\n\n## Step-07:Access Application\n```t\n# List Kubernetes Worker Node with -0 wide\nkubectl get nodes -o wide\nObservation: \n1. Make a note of any one Node External-IP (Public IP Address)\n\n# Access Application\nhttp://<NODE-EXTERNAL-IP>:<NodePort>\nhttp://104.154.52.12:30080\nObservation:\n1. This should Pass\n```\n\n\n\n## Step-08: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Delete NodePort Service Firewall Rule\ngcloud compute firewall-rules delete fw-rule-gke-node-port\n\n# List Firewall Rules\ngcloud compute firewall-rules list \n```\n\n\n"
  },
  {
    "path": "18-GKE-NodePort-Service/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n    "
  },
  {
    "path": "18-GKE-NodePort-Service/kube-manifests/02-kubernetes-nodeport-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-nodeport-service\nspec:\n  type: NodePort # ClusterIP, # NodePort, # LoadBalancer, # ExternalName\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n      nodePort: 30080 # NodePort (Optional)(Node Port Range: 30000-32768)\n"
  },
  {
    "path": "19-GKE-Headless-Service/01-kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 4\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          #image: stacksimplify/kubenginx:1.0.0\n          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0\n          ports: \n            - containerPort: 8080  \n    "
  },
  {
    "path": "19-GKE-Headless-Service/01-kube-manifests/02-kubernetes-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-cip-service\nspec:\n  type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 8080 # Container Port\n\n"
  },
  {
    "path": "19-GKE-Headless-Service/01-kube-manifests/03-kubernetes-headless-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-headless-service\nspec:\n  #type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName\n  clusterIP: None\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 8080 # Service Port\n      targetPort: 8080 # Container Port\n\n## VERY IMPORTANT NODE\n# 1. When using Headless Service, we should use both the  \"Service Port and Target Port\" same. \n# 2. Headless Service directly sends traffic to Pod with Pod IP and Container Port. \n# 3. DNS resolution directly happens from headless service to Pod IP.\n\n\n\n"
  },
  {
    "path": "19-GKE-Headless-Service/02-kube-manifests-curl/01-curl-pod.yml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: curl-pod\nspec:\n  containers:\n  - name: curl\n    image: curlimages/curl \n    command: [ \"sleep\", \"600\" ]"
  },
  {
    "path": "19-GKE-Headless-Service/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Headless Service\ndescription: Implement GCP Google Kubernetes Engine GKE Headless Service\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# List GKE Kubernetes Worker Nodes\nkubectl get nodes\n```\n## Step-01: Introduction\n- Implement Kubernetes ClusterIP and Headless Service\n- Understand Headless Service in detail\n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 4\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          #image: stacksimplify/kubenginx:1.0.0\n          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0\n          ports: \n            - containerPort: 8080          \n```\n\n## Step-03: 02-kubernetes-clusterip-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-cip-service\nspec:\n  type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 8080 # Container Port\n```\n\n## Step-04: 03-kubernetes-headless-service.yaml\n- Add `spec.clusterIP: None`\n###  VERY IMPORTANT NODE\n1. When using Headless Service, we should use both the  \"Service Port and Target Port\" same. \n2. Headless Service directly sends traffic to Pod with Pod IP and Container Port. \n3. DNS resolution directly happens from headless service to Pod IP.\n\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-headless-service\nspec:\n  #type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName\n  clusterIP: None\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 8080 # Service Port\n      targetPort: 8080 # Container Port\n\n```\n\n## Step-05: Deply Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\nkubectl get pods -o wide\nObservation: make a note of Pod IP\n\n# List Services\nkubectl get svc\nObservation: \n1. 
\"CLUSTER-IP\" will be \"NONE\" for Headless Service\n\n## Sample \nKalyans-Mac-mini:19-GKE-Headless-Service kalyanreddy$ kubectl get svc\nNAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE\nkubernetes                ClusterIP   10.24.0.1    <none>        443/TCP   135m\nmyapp1-cip-service        ClusterIP   10.24.2.34   <none>        80/TCP    4m9s\nmyapp1-headless-service   ClusterIP   None         <none>        80/TCP    4m9s\nKalyans-Mac-mini:19-GKE-Headless-Service kalyanreddy$ \n\n```\n\n\n## Step-06: Review Curl Kubernetes Manifests\n- **Project Folder:** 02-kube-manifests-curl\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: curl-pod\nspec:\n  containers:\n  - name: curl\n    image: curlimages/curl \n    command: [ \"sleep\", \"600\" ]\n```\n\n## Step-07: Deply Curl-pod and Verify ClusterIP and Headless Services\n```t\n# Deploy curl-pod\nkubectl apply -f 02-kube-manifests-curl\n\n# List Services\nkubectl get svc\n\n# GKE Cluster Kubernetes Service Full DNS Name format\n<svc>.<ns>.svc.cluster.local\n\n# Will open up a terminal session into the container\nkubectl exec -it curl-pod -- sh\n\n# ClusterIP Service: nslookup and curl Test\nnslookup myapp1-cip-service.default.svc.cluster.local\ncurl myapp1-cip-service.default.svc.cluster.local\n\n### ClusterIP Service nslookup Outptu\n $ nslookup myapp1-cip-service.default.svc.cluster.local\nServer:\t\t10.24.0.10\nAddress:\t10.24.0.10:53\n\nName:\tmyapp1-cip-service.default.svc.cluster.local\nAddress: 10.24.2.34\n\n# Headless Service: nslookup and curl Test\nnslookup myapp1-headless-service.default.svc.cluster.local\ncurl myapp1-headless-service.default.svc.cluster.local:8080\nObservation:\n1. There is no specific IP for Headless Service\n2. It will be directly dns resolved to Pod IP\n3. That said we should use the same port as Container Port for Headless Service (VERY VERY IMPORTANT)\n\n### Headless Service nslookup Output\n$ nslookup myapp1-headless-service.default.svc.cluster.local\nServer:\t\t10.24.0.10\nAddress:\t10.24.0.10:53\n\nName:\tmyapp1-headless-service.default.svc.cluster.local\nAddress: 10.20.0.25\nName:\tmyapp1-headless-service.default.svc.cluster.local\nAddress: 10.20.0.26\nName:\tmyapp1-headless-service.default.svc.cluster.local\nAddress: 10.20.1.28\nName:\tmyapp1-headless-service.default.svc.cluster.local\nAddress: 10.20.1.29\n```\n\n## Step-08: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 01-kube-manifests\n\n# Delete Kubernetes Resources - Curl Pod\nkubectl delete -f 02-kube-manifests-curl\n```\n\n\n"
  },
  {
    "path": "20-GKE-Private-Cluster/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Private Cluster\ndescription: Implement GCP Google Kubernetes Engine GKE Private Cluster\n---\n\n## Step-01: Introduction\n- Create GKE Private Cluster\n- Create Cloud NAT\n- Deploy Sample App and Test\n- Perform Authorized Network Tests\n \n## Step-02: Create Standard GKE Cluster \n- Go to Kubernetes Engine -> Clusters -> CREATE\n- Select **GKE Standard -> CONFIGURE**\n- **Cluster Basics**\n  - **Name:** standard-cluster-private-1\n  - **Location type:** Regional\n  - **Zone:** us-central1-a, us-central1-b, us-central1-c\n  - **Release Channel**\n    - **Release Channel:** Rapid Channel\n    - **Version:** LATEST AVAIALABLE ON THAT DAY\n  - REST ALL LEAVE TO DEFAULTS\n- **NODE POOLS: default-pool**\n- **Node pool details**\n  - **Name:** default-pool\n  - **Number of Nodes (per Zone):** 1\n- **Nodes: Configure node settings** \n  - **Image type:** Containerized Optimized OS\n  - **Machine configuration**\n    - **GENERAL PURPOSE SERIES:** e2\n    - **Machine Type:** e2-small\n  - **Boot disk type:** standard persistent disk\n  - **Boot disk size(GB):** 20\n  - **Enable Nodes on Spot VMs:** CHECKED\n- **Node Networking:** REVIEW AND LEAVE TO DEFAULTS    \n- **Node Security:** \n  - **Access scopes:** Allow full access to all Cloud APIs\n  - REST ALL REVIEW AND LEAVE TO DEFAULTS\n- **Node Metadata:** REVIEW AND LEAVE TO DEFAULTS\n- **CLUSTER** \n  - **Automation:** REVIEW AND LEAVE TO DEFAULTS\n  - **Networking:** \n    - **Network Access:** Private Cluster\n    - **Access control plane using its external IP address:** BY DEFAULT CHECKED\n      - **Important Note:** Disabling this option locks down external access to the cluster control plane. There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone. 
This setting is permanent.\n    - **Enable Control Plane Global Access:** CHECKED\n    - **Control Plane IP Range:** 172.16.0.0/28\n    - **CHECK THIS BOX: Enable Dataplane V2** CHECK IT - IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED\n  - **Security:** REVIEW AND LEAVE TO DEFAULTS\n    - **CHECK THIS BOX: Enable Workload Identity** IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED\n  - **Metadata:** REVIEW AND LEAVE TO DEFAULTS\n  - **Features:** REVIEW AND LEAVE TO DEFAULTS\n    - **Enable Compute Engine Persistent Disk CSI Driver:** SHOULD BE CHECKED BY DEFAULT - VERIFY\n    - **Enable Filestore CSI Driver:** CHECKED \n- CLICK ON **CREATE**\n\n## Step-03: Review kube-manifests: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80      \n          imagePullPolicy: Always            \n```\n\n## Step-04: Review kube-manifest: 02-kubernetes-loadbalancer-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIP, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port      \n```\n\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# Change Directory\ncd 20-GKE-Private-Cluster\n\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# Verify Pods \nkubectl get pods \nObservation: SHOULD FAIL - UNABLE TO DOWNLOAD DOCKER IMAGE FROM DOCKER HUB\n\n# Describe Pod\nkubectl describe pod <POD-NAME>\n\n# Clean-Up\nkubectl delete -f kube-manifests/\n```\n\n## Step-06: Create Cloud NAT\n- Go to Network Services -> CREATE CLOUD NAT GATEWAY\n- **Gateway Name:** gke-us-central1-default-cloudnat-gw\n- **Select Cloud Router:** \n  - **Network:** default\n  - **Region:** us-central1\n  - **Cloud Router:** CREATE NEW ROUTER\n    - **Name:** gke-us-central1-cloud-router\n    - **Description:** GKE Cloud Router Region us-central1\n    - **Network:** default (POPULATED by default)\n    - **Region:** us-central1 (POPULATED by default)\n    - **BGP Peer keepalive interval:** 20 seconds (LEAVE TO DEFAULT)\n    - Click on **CREATE**\n- **Cloud NAT Mapping:** LEAVE TO DEFAULTS\n- **Destination (external):** LEAVE TO DEFAULTS\n- **Stackdriver logging:** LEAVE TO DEFAULTS\n- **Port allocation:** \n  - CHECK **Enable Dynamic Port Allocation**\n- **Timeouts for protocol connections:** LEAVE TO DEFAULTS\n- CLICK on **CREATE**  \n\n## Step-07: Deploy Kubernetes Manifests\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# Verify Pods \nkubectl get pods \nObservation: SHOULD BE ABLE TO DOWNLOAD 
THE DOCKER IMAGE\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<External-IP>\n\n# Clean-Up\nkubectl delete -f kube-manifests\n```\n\n## Step-08: Authorized Network Test1: My Network\n- Go to -> standard-cluster-private-1 -> DETAILS -> NETWORKING\n- Control plane authorized networks\t-> EDIT\n- **Enable control plane authorized networks:** CHECKED\n- CLICK ON **ADD AUTHORIZED NETWORK**\n- **NAME:** MY-NETWORK-1\n- **NETWORK:** 10.10.10.0/24 \n- Click on **DONE**\n- Click on **SAVE CHANGES**\n```t\n# List Kubernetes Nodes\nkubectl get nodes\nObservation:\n1. Access to the GKE API Service from our local desktop kubectl cli is lost\n2. Access to the GKE API Service is now allowed only from the \"10.10.10.0/24\" network\n3. In short, even though our GKE API Server has an internet-enabled endpoint, its access is restricted to a specific network of IPs\n\n## Sample Output\nKalyan-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes\nUnable to connect to the server: dial tcp 34.70.169.161:443: i/o timeout\nKalyan-Mac-mini:google-kubernetes-engine kalyan$ \n```\n\n## Step-09: Authorized Network Test2: My Desktop\n- Go to link [whatismyip](https://www.whatismyip.com/) and get your desktop public IP \n- Go to -> standard-cluster-private-1 -> DETAILS -> NETWORKING\n- Control plane authorized networks\t-> EDIT\n- **Enable control plane authorized networks:** CHECKED\n- CLICK ON **ADD AUTHORIZED NETWORK**\n- **NAME:** MY-DESKTOP-1\n- **NETWORK:** <YOUR-DESKTOP-PUBLIC-IP>/32 \n- Click on **DONE**\n- Click on **SAVE CHANGES**\n```t\n# List Kubernetes Nodes\nkubectl get nodes\nObservation:\n1. Access to the GKE API Service from our local desktop kubectl cli should be a success\n\n## Sample Output\nKalyans-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes\nNAME                                                  STATUS   ROLES    AGE   VERSION\ngke-standard-cluster-pri-default-pool-90b1f67b-4z71   Ready    <none>   55m   v1.24.3-gke.900\ngke-standard-cluster-pri-default-pool-90b1f67b-6xn6   Ready    <none>   55m   v1.24.3-gke.900\ngke-standard-cluster-pri-default-pool-90b1f67b-dggg   Ready    <none>   55m   v1.24.3-gke.900\nKalyans-Mac-mini:google-kubernetes-engine kalyan$ \n```\n\n## Step-10: Authorized Network Test3: Delete both network rules (Roll back to old state)\n- Go to -> standard-cluster-private-1 -> DETAILS -> NETWORKING\n- Control plane authorized networks\t-> EDIT\n- **Enable control plane authorized networks:** UN-CHECKED\n- AUTHORIZED NETWORKS -> DELETE -> MY-NETWORK-1, MY-DESKTOP-1\n- Click on **SAVE CHANGES**\n```t\n# List Kubernetes Nodes\nkubectl get nodes\nObservation:\n1. Access to the GKE API Service from our local desktop kubectl cli should be a success\n\n## Sample Output\nKalyans-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes\nNAME                                                  STATUS   ROLES    AGE   VERSION\ngke-standard-cluster-pri-default-pool-90b1f67b-4z71   Ready    <none>   55m   v1.24.3-gke.900\ngke-standard-cluster-pri-default-pool-90b1f67b-6xn6   Ready    <none>   55m   v1.24.3-gke.900\ngke-standard-cluster-pri-default-pool-90b1f67b-dggg   Ready    <none>   55m   v1.24.3-gke.900\nKalyans-Mac-mini:google-kubernetes-engine kalyan$ \n```\n\n## Additional Reference\n- [GKE Private Cluster with Terraform](https://github.com/GoogleCloudPlatform/gke-private-cluster-demo)
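\n\n## Additional Example: Manage Authorized Networks via gcloud (Optional)\n- The same authorized-network changes from Step-08 and Step-09 can also be made from the CLI; the CIDR below is an example value to replace.\n```t\n# Enable control plane authorized networks with an allowed CIDR (example value)\ngcloud container clusters update standard-cluster-private-1 \\\n    --region us-central1 \\\n    --enable-master-authorized-networks \\\n    --master-authorized-networks <YOUR-DESKTOP-PUBLIC-IP>/32\n\n# Disable control plane authorized networks (roll back to the old state)\ngcloud container clusters update standard-cluster-private-1 \\\n    --region us-central1 \\\n    --no-enable-master-authorized-networks\n```"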
  },
  {
    "path": "20-GKE-Private-Cluster/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          imagePullPolicy: Always            \n    "
  },
  {
    "path": "20-GKE-Private-Cluster/kube-manifests/02-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "21-GKE-PD-existing-SC-standard-rwo/README.md",
    "content": "---\ntitle: GKE Persistent Disks Existing StorageClass standard-rwo\ndescription: Use existing storageclass standard-rwo in Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. \n\n\n## Step-01: Introduction\n- Understand Kubernetes Objects\n01. Kubernetes PersistentVolumeClaim\n02. Kubernetes ConfigMap\n03. Kubernetes Deployment\n04. Kubernetes Volumes\n05. Kubernetes Volume Mounts\n06. Kubernetes Environment Variables\n07. Kubernetes ClusterIP Service\n08. Kubernetes Init Containers\n09. Kubernetes Service of Type LoadBalancer\n10. Kubernetes StorageClass \n\n- Use predefined Storage Class `standard-rwo`\n- `standard-rwo` uses balanced persistent disk\n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. 
PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n```\n\n## Step-04: 02-UserManagement-ConfigMap.yaml\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n```\n\n## Step-05: 03-mysql-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate # terminates all the pods and replaces them with the new version.\n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n\n# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: \n## 1. On-disk files in a container are ephemeral\n## 2. One problem is the loss of files when a container crashes. \n## 3. Kubernetes Volumes solves the above two as these volumes are configured on the Pod and not the container. \n## Only then can they be mounted in a Container\n## 4. Using the Compute Engine Persistent Disk CSI Driver is a super generalized approach \n## for having Persistent Volumes for workloads in Kubernetes\n\n\n## ENVIRONMENT VARIABLES\n# 1. When you create a Pod, you can set environment variables for the \n# containers that run in the Pod. \n# 2. To set environment variables, include the env or envFrom field in \n# the configuration file.\n\n\n## DEPLOYMENT STRATEGIES\n# 1. Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.\n# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.\n# 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas. \n# 4. Best-effort controlled rollout: This strategy specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.\n# 5. Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. 
The test deployment is rolled out to more users if it is successful.\n```\n\n## Step-06: 04-mysql-clusterip-service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    \n```\n## Step-07: 05-UserMgmtWebApp-Deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            \n```\n## Step-08: 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port\n```\n## Step-09: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# List Storage Classes\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n\n# Sample Message for Successful Start of JVM\n2022-06-20 09:34:32.519  INFO 1 --- [ost-startStop-1] .r.SpringbootSecurityInternalApplication : Started SpringbootSecurityInternalApplication in 14.891 seconds (JVM running for 23.283)\n20-Jun-2022 09:34:32.593 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive /usr/local/tomcat/webapps/ROOT.war has finished in 21,016 ms\n20-Jun-2022 09:34:32.623 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler [\"http-apr-8080\"]\n20-Jun-2022 09:34:32.688 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler [\"ajp-apr-8009\"]\n20-Jun-2022 09:34:32.713 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 21275 ms\n```\n\n## Step-10: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB` Persistent Disk\n\n## Step-11: Verify Kubernetes Workloads, Services ConfigMaps on Kubernetes Engine Dashboard\n```t\n# Verify Workloads\nGo to Kubernetes Engine -> Workloads\nObservation:\n1. 
You should see \"mysql\" and \"usermgmt-webapp\" deployments\n\n# Verify Services\nGo to Kubernetes Engine -> Services & Ingress\nObservation:\n1. You should \"mysql ClusterIP Service\" and \"usermgmt-webapp-lb-service\"\n\n# Verify ConfigMaps\nGo to Kubernetes Engine -> Secrets & ConfigMaps\nObservation: \n1. You should find the ConfigMap \"usermanagement-dbcreation-script\"\n\n# Verify Persistent Volume Claim\nGo to Kubernetes Engine -> Storage -> PERSISTENT VOLUME CLAIMS TAB\nObservation: \n1. You should see PVC \"mysql-pv-claim\"\n\n# Verify StorageClass\nGo to Kubernetes Engine -> Storage -> STORAGE CLASSES TAB\nObservation: \n1. You should see 3 Storage Classes out of which \"standard-rwo\" and \"premium-rwo\" are part of Compute Engine Persistent Disks (latest and greatest - Recommended for use)\n2. Not recommended to use Storage Class with name \"standard\" (Older version)\n```\n## Step-13: Connect to MySQL Database\n```t\n# Template: Connect to MySQL Database using kubectl\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ClusterIP-Service> -u <USER_NAME> -p<PASSWORD>\n\n# MySQL Client 8.0: Replace ClusterIP Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u root -pdbpassword11\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n\n## Step-12: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n\n# Verify this user in MySQL DB\n# Template: Connect to MySQL Database using kubectl\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ClusterIP-Service> -u <USER_NAME> -p<PASSWORD>\n\n# MySQL Client 8.0: Replace ClusterIP Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u root -pdbpassword11\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> select * from user;\nObservation:\n1. You should find the newly created user from browser successfully created in MySQL DB.\n2. In simple terms, we have done the following\na. Created MySQL k8s Deployment in GKE CLuster\nb. Created Java WebApplication  k8s Deployment in GKE Cluster\nc. Accessed Application using GKE Load Balancer IP using browser\nd. Created a new user in this application and that user successfully stored in MySQL DB.\ne. 
We have seen the END-TO-END FLOW from Browser to DB using the GKE Cluster.\n```\n\n## Step-14: Verify GCE PD CSI Driver Logging\n- https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver\n```t\n# Cloud Logging Query \nresource.type=\"k8s_container\"\nresource.labels.project_id=\"PROJECT_ID\"\nresource.labels.cluster_name=\"CLUSTER_NAME\"\nresource.labels.namespace_name=\"kube-system\"\nresource.labels.container_name=\"gce-pd-driver\"\n\n# Cloud Logging Query (Replace Values)\nresource.type=\"k8s_container\"\nresource.labels.project_id=\"kdaida123\"\nresource.labels.cluster_name=\"standard-cluster-private-1\"\nresource.labels.namespace_name=\"kube-system\"\nresource.labels.container_name=\"gce-pd-driver\"\n```\n\n## Step-15: Clean-Up\n```t\n# Delete kube-manifests\nkubectl delete -f kube-manifests/\n```\n\n## Reference\n- [Using the Compute Engine persistent disk CSI Driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver)\n\n\n## Additional-Data-01\n1. It enables the automatic deployment and management of the persistent disk driver without having to manually set it up.\n2. You can use customer-managed encryption keys (CMEKs). These keys are used to encrypt the data encryption keys that encrypt your data. \n3. You can use volume snapshots with the Compute Engine persistent disk CSI Driver. Volume snapshots let you create a copy of your volume at a specific point in time. You can use this copy to bring a volume back to a prior state or to provision a new volume. See the optional snapshot sketch at the end of this page.\n4. Bug fixes and feature updates are rolled out independently from minor Kubernetes releases. This release schedule typically results in a faster release cadence.\n\n## Additional-Data-02\n- For Standard Clusters: The Compute Engine persistent disk CSI Driver is enabled by default on newly created clusters \n  - Linux clusters: GKE version 1.18.10-gke.2100 or later, or 1.19.3-gke.2100 or later.\n  - Windows clusters: GKE version 1.22.6-gke.300 or later, or 1.23.2-gke.300 or later.\n- For Autopilot clusters: The Compute Engine persistent disk CSI Driver is enabled by default and cannot be disabled or edited.\n\n## Additional-Data-03\n- GKE automatically installs the following StorageClasses:\n  - standard-rwo: using balanced persistent disk\n  - premium-rwo: using SSD persistent disk\n- For Autopilot clusters: The default StorageClass is standard-rwo, which uses the Compute Engine persistent disk CSI Driver. \n- For Standard clusters: The default StorageClass uses the Kubernetes in-tree gcePersistentDisk volume plugin.\n```t\n# You can find the name of your installed StorageClasses by running the following command:\nkubectl get sc\nor\nkubectl get storageclass\n```\n
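\n## Additional Example: Volume Snapshot with the PD CSI Driver (Optional)\n- Additional-Data-01 above mentions volume snapshots. A minimal sketch (assuming the snapshot CRDs and controller are present, which GKE installs alongside the PD CSI driver; the class and snapshot names are example values) that snapshots the PVC used in this section:\n```yaml\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n  name: pd-snapshot-class # example name\ndriver: pd.csi.storage.gke.io\ndeletionPolicy: Delete\n---\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n  name: mysql-pv-snapshot # example name\nspec:\n  volumeSnapshotClassName: pd-snapshot-class\n  source:\n    persistentVolumeClaimName: mysql-pv-claim\n```\n"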
  },
  {
    "path": "21-GKE-PD-existing-SC-standard-rwo/kube-manifests/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "21-GKE-PD-existing-SC-standard-rwo/kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. \n## We are going to use this in our MySQL k8s Deployment  \n\n# 3. YAML Notation\n## YAML Notation: |-: \"strip\": remove the line feed, remove the trailing blank lines.\n## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines"
  },
  {
    "path": "21-GKE-PD-existing-SC-standard-rwo/kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate # terminates all the pods and replaces them with the new version.\n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n\n# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: \n## 1. On-disk files in a container are ephemeral\n## 2. One problem is the loss of files when a container crashes. \n## 3. Kubernetes Volumes solves above two as these volumes are configured to POD and not container. \n## Only they can be mounted in Container\n## 4. Using Compute Enginer Persistent Disk CSI Driver is a super generalized approach \n## for having Persistent Volumes for workloads in Kubernetes\n\n\n## ENVIRONMENT VARIABLES\n# 1. When you create a Pod, you can set environment variables for the \n# containers that run in the Pod. \n# 2. To set environment variables, include the env or envFrom field in \n# the configuration file.\n\n\n## DEPLOYMENT STRATEGIES\n# 1. Rolling deployment: This strategy  replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.\n# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.\n# 3. Ramped slow rollout: This strategy  rolls out replicas of the new version, while in parallel, shutting down old replicas. \n# 4. Best-effort controlled rollout: This strategy  specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.\n# 5. Canary Deployment: This strategy  uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful."
  },
  {
    "path": "21-GKE-PD-existing-SC-standard-rwo/kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "21-GKE-PD-existing-SC-standard-rwo/kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "21-GKE-PD-existing-SC-standard-rwo/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "22-GKE-PD-existing-SC-premium-rwo/README.md",
    "content": "---\ntitle: GKE Persistent Disks Existing StorageClass premium-rwo\ndescription: Use existing storageclass premium-rwo in Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster.\n\n## Step-01: Introduction\n- Understand Kubernetes Objects\n01. Kubernetes PersistentVolumeClaim\n02. Kubernetes ConfigMap\n03. Kubernetes Deployment\n04. Kubernetes Volumes\n05. Kubernetes Volume Mounts\n06. Kubernetes Environment Variables\n07. Kubernetes ClusterIP Service\n08. Kubernetes Init Containers\n09. Kubernetes Service of Type LoadBalancer\n10. Kubernetes StorageClass \n\n- Use the predefined Storage class `premium-rwo`\n- By default, dynamically provisioned PersistentVolumes use the default StorageClass and are backed by `standard hard disks`. \n- If you need faster SSDs, you can use the `premium-rwo` storage class from the Compute Engine persistent disk CSI Driver to provision your volumes. \n- This can be done by setting the storageClassName field to `premium-rwo` in your PersistentVolumeClaim \n- `premium-rwo Storage Class` will provision `SSD Persistent Disk`\n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: premium-rwo \n  resources: \n    requests:\n      storage: 4Gi\n```\n\n## Step-04: Other Kubernetes YAML Manifests\n- No changes to other Kubernetes YAML Manifests\n- They are same as previous section\n1. 01-persistent-volume-claim.yaml\n2. 02-UserManagement-ConfigMap.yaml\n3. 03-mysql-deployment.yaml\n4. 04-mysql-clusterip-service.yaml\n5. 05-UserMgmtWebApp-Deployment.yaml\n6. 
06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-05: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# List Storage Classes\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-06: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB` Persistent Disk\n- **Observation:** You should see the disk type as **SSD persistent disk**\n\n\n## Step-07: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-08: Clean-Up\n```t\n# Delete kube-manifests\nkubectl delete -f kube-manifests/\n```\n\n## Reference\n- [Using the Compute Engine persistent disk CSI Driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver)
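\n\n## Additional Example: Inspect the StorageClass parameters (Optional)\n- To confirm that `premium-rwo` provisions SSD persistent disks, inspect the StorageClass object; its parameters should show the `pd.csi.storage.gke.io` provisioner with `type: pd-ssd`.\n```t\n# Show the full StorageClass definition\nkubectl get sc premium-rwo -o yaml\n\n# Or describe it\nkubectl describe sc premium-rwo\n```"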
  },
  {
    "path": "22-GKE-PD-existing-SC-premium-rwo/kube-manifests/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: premium-rwo \n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "22-GKE-PD-existing-SC-premium-rwo/kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment)  \n"
  },
  {
    "path": "22-GKE-PD-existing-SC-premium-rwo/kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n"
  },
  {
    "path": "22-GKE-PD-existing-SC-premium-rwo/kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "22-GKE-PD-existing-SC-premium-rwo/kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "22-GKE-PD-existing-SC-premium-rwo/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "23-GKE-PD-Custom-StorageClass/README.md",
    "content": "---\ntitle: GKE Persistent Disks Custom StorageClass \ndescription: Use Custom storageclass to provision Google Disks for Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster.\n\n\n## Step-01: Introduction\n- **Feaute-1:** Create custom Kubernetes StorageClass instead of using predefined one in GKE Cluster. custom storage class `gke-pd-standard-rwo-sc`\n- **Feature-2:** Test `allowVolumeExpansion: true` in Storage Class\n- **Feature-3:** Use `reclaimPolicy: Retain` in Storage Class and Test it \n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 00-storage-class.yaml\n```yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata: \n  name: gke-pd-standard-rwo-sc\nprovisioner: pd.csi.storage.gke.io\nvolumeBindingMode: WaitForFirstConsumer \nallowVolumeExpansion: true\nreclaimPolicy: Retain \nparameters:\n  type: pd-balanced\n\n# STORAGE CLASS \n# 1. A StorageClass provides a way for administrators \n# to describe the \"classes\" of storage they offer.\n# 2. Here we are offering GCP PD Storage for GKE Cluster\n```\n\n## Step-04: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: gke-pd-standard-rwo-sc\n  resources: \n    requests:\n      storage: 4Gi\n```\n\n\n## Step-05: Other Kubernetes YAML Manifests\n- No changes to other Kubernetes YAML Manifests\n- They are same as previous section\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-06: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# List Storage Classes\nkubectl get sc\nObservation: \n1. 
You should find the new custom storage class object created with the name \"gke-pd-standard-rwo-sc\"\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-07: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB` Persistent Disk\n- **Observation:** You should see the disk type as **Balanced persistent disk**\n\n\n\n## Step-08: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User (used for testing the `allowVolumeExpansion: true` option)\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n```\n\n## Step-09: Update 01-persistent-volume-claim.yaml from 4Gi to 8Gi\n- Alternatively, the same expansion can be applied with `kubectl patch` (see the optional sketch at the end of this page)\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: gke-pd-standard-rwo-sc\n  resources: \n    requests:\n      #storage: 4Gi # Comment at Step-09\n      storage: 8Gi # Uncomment at Step-09\n```\n\n## Step-10: Deploy updated kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# List PVC\nkubectl get pvc\nObservation:\n1. Wait 2 to 3 minutes; the CAPACITY value automatically changes from 4Gi to 8Gi\n\n# List PV\nkubectl get pv\nObservation:\n1. Wait 2 to 3 minutes; the CAPACITY value automatically changes from 4Gi to 8Gi\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\nObservation:\n1. No impact to the underlying MySQL Database data.\n2. VolumeExpansion is seamless, without impacting the real data. \n3. We should find the two users that were present before VolumeExpansion as-is.\n```\n## Step-11: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for the `8GB` Persistent Disk, as the 4GB disk has now expanded to 8GB.\n- **Observation:** You should see the disk type as **Balanced persistent disk**\n\n\n## Step-12: Verify reclaimPolicy: Retain\n```t\n# Delete kube-manifests\nkubectl delete -f kube-manifests/\n\n# List Storage Class\nkubectl get sc\nObservation:\n1. Custom storage class deleted\n\n# List PVC\nkubectl get pvc\nObservation:\n1. PVC deleted\n\n# List PV\nkubectl get pv\nObservation:\n1. PV still present\n2. PV STATUS will be \"Released\", not used by anyone.\n```\n\n## Step-13: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for the `8GB` Persistent Disk.\n- **Observation:** You should see the disk is still present even after all kube-manifests (StorageClass, PVC) are deleted.\n- This is because we used **reclaimPolicy: Retain** in the Custom Storage Class\n\n\n## Step-14: Clone Persistent Disk\n- **Question:** Why are we cloning the disk?\n- **Answer:** In the next demo, we are going to use a **pre-existing persistent disk**. For that purpose, we are cloning it. 
\n- Go to Compute Engine -> Storage -> Disks\n- Search for the `8GB` Persistent Disk.\n- Click on **Clone Disk**\n- **Name:** preexisting-pd\n- **Description:** preexisting-pd Demo with GKE\n- **Location:** Single\n- **Snapshot Schedule:** UNCHECK\n- Click on **CREATE**\n\n## Step-15: Delete Retained Persistent Disk from this Demo\n- Go to Compute Engine -> Storage -> Disks\n- Search for the `8GB` Persistent Disk.\n- **Disk Name:**  pvc-3f2c1daa-122d-4bdb-a7b6-b9943631cc14\n- Click on **DELETE DISK**\n```t\n# List PV\nkubectl get pv\n\n# Delete PV \nkubectl delete pv pvc-3f2c1daa-122d-4bdb-a7b6-b9943631cc14 \n\n# List PV\nkubectl get pv\n```\n\n## Step-16: Change PVC 8Gi to 4Gi: 01-persistent-volume-claim.yaml\n- Change the PVC from 8Gi back to 4Gi so that `kube-manifests` are demo-ready for students. \n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: gke-pd-standard-rwo-sc\n  resources: \n    requests:\n      storage: 4Gi # Comment at Step-09\n      #storage: 8Gi # Uncomment at Step-09\n```\n\n\n## Reference\n- [Using the Compute Engine persistent disk CSI Driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver)\n
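\n## Optional: Expand the PVC with kubectl patch\n- As an alternative to editing the manifest in Step-09, the same expansion can be requested in place (a minimal sketch, using the object names from this demo):\n```t\n# Confirm the Storage Class allows expansion (should print: true)\nkubectl get sc gke-pd-standard-rwo-sc -o jsonpath='{.allowVolumeExpansion}'\n\n# Patch the PVC request from 4Gi to 8Gi\nkubectl patch pvc mysql-pv-claim -p '{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"8Gi\"}}}}'\n\n# Watch the capacity update\nkubectl get pvc -w\n```\n"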
  },
  {
    "path": "23-GKE-PD-Custom-StorageClass/kube-manifests/00-storage-class.yaml",
    "content": "apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata: \n  name: gke-pd-standard-rwo-sc\nprovisioner: pd.csi.storage.gke.io\nvolumeBindingMode: WaitForFirstConsumer \nallowVolumeExpansion: true\nreclaimPolicy: Retain \nparameters:\n  type: pd-balanced # Other Options supported are pd-ssd, pd-standard\n\n# STORAGE CLASS \n# 1. A StorageClass provides a way for administrators \n# to describe the \"classes\" of storage they offer.\n# 2. Here we are offering GCP PD Storage for GKE Cluster"
  },
  {
    "path": "23-GKE-PD-Custom-StorageClass/kube-manifests/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: gke-pd-standard-rwo-sc\n  resources: \n    requests:\n      storage: 4Gi # Commment at Step-09\n      #storage: 8Gi # UnCommment at Step-09\n      \n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "23-GKE-PD-Custom-StorageClass/kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment)  \n"
  },
  {
    "path": "23-GKE-PD-Custom-StorageClass/kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n"
  },
  {
    "path": "23-GKE-PD-Custom-StorageClass/kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "23-GKE-PD-Custom-StorageClass/kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "23-GKE-PD-Custom-StorageClass/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "24-GKE-PD-preexisting-PD/README.md",
    "content": "---\ntitle: GKE Persistent Disks Preexisting PD\ndescription: Use Google Disks Preexisting PD for Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. \n\n\n## Step-01: Introduction\n- Use the **pre-existing Persistent Disk** created in previous demo.\n- As part of this demo, we are going to provision the **Persistent Volume (PV)** manually. We call this as Static Provisioning. \n\n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 00-persistent-volume.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: preexisting-pd\nspec:\n  storageClassName: standard-rwo\n  capacity:\n    storage: 8Gi\n  accessModes:\n    - ReadWriteOnce\n  claimRef:\n    namespace: default\n    name: mysql-pv-claim\n  gcePersistentDisk:\n    pdName: preexisting-pd\n    fsType: ext4\n```\n\n## Step-04: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 8Gi\n```\n\n## Step-05: Other Kubernetes YAML Manifests\n- No changes to other Kubernetes YAML Manifests\n- They are same as previous section\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-06: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-07: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `8GB` Persistent Disk\n- **Observation:** You should see the disk type **In Use By** updated and bound to **gke-standard-cluster-1-default-pool-db7b638f-j5lk**\n\n\n\n## Step-08: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\nObservation:\n1. You should see admin102 already present.\n2. This is because in previous demo, we already created admin102 and that data disk we have mounted here using \"Static Provisioning PV\" concept.\n```\n\n## Step-09: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests/\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# Delete Persistent Disk: preexisting-pd\n1. 
\"preexisting-pd\" will not get deleted automatically\n2. We should manually delete it \n3. We should observe that its \"In Use By\" field is empty (Not associated to anything)\n4. Go to Compute Engine -> Disks -> preexisting-pd -> DELETE \n```\n\n"
  },
  {
    "path": "24-GKE-PD-preexisting-PD/kube-manifests/00-persistent-volume.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: preexisting-pd\nspec:\n  storageClassName: standard-rwo\n  capacity:\n    storage: 8Gi\n  accessModes:\n    - ReadWriteOnce\n  claimRef:\n    namespace: default\n    name: mysql-pv-claim\n  gcePersistentDisk:\n    pdName: preexisting-pd\n    fsType: ext4"
  },
  {
    "path": "24-GKE-PD-preexisting-PD/kube-manifests/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 8Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "24-GKE-PD-preexisting-PD/kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment)  \n"
  },
  {
    "path": "24-GKE-PD-preexisting-PD/kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n"
  },
  {
    "path": "24-GKE-PD-preexisting-PD/kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "24-GKE-PD-preexisting-PD/kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "24-GKE-PD-preexisting-PD/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "25-GKE-PD-Regional-PD/README.md",
    "content": "---\ntitle: GKE Persistent Disks - Use Regional PD\ndescription: Use Google Disks Regional PD for Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster.\n\n\n## Step-01: Introduction\n- Use Regional Persistent Disks\n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 00-storage-class.yaml\n```yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: regionalpd-storageclass\nprovisioner: pd.csi.storage.gke.io\nparameters:\n  #type: pd-standard # Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.\n  type: pd-ssd \n  replication-type: regional-pd\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n  - key: topology.gke.io/zone\n    values:\n    - us-central1-c\n    - us-central1-b\n\n## Important Note - Regional PD \n# If using a regional cluster, you can leave allowedTopologies unspecified. If you do this, when you create a Pod that consumes a PersistentVolumeClaim which uses this StorageClass a regional persistent disk is provisioned with two zones. One zone is the same as the zone that the Pod is scheduled in. The other zone is randomly picked from the zones available to the cluster.\n# When using a zonal cluster, allowedTopologies must be set.    \n\n# STORAGE CLASS \n# 1. A StorageClass provides a way for administrators \n# to describe the \"classes\" of storage they offer.\n# 2. 
Here we are offering GCP PD Storage for GKE Cluster\n```\n\n## Step-04: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: regionalpd-storageclass\n  resources: \n    requests:\n      storage: 4Gi\n```\n\n## Step-05: Other Kubernetes YAML Manifests\n- No changes to the other Kubernetes YAML Manifests\n- They are the same as in the previous section\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n\n## Step-06: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-07: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for the `4GB` Persistent Disk\n- **Observation:** Review the below items\n  - **Zones:** us-central1-b, us-central1-c\n  - **Type:** Regional SSD persistent disk\n  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk\n- See also the optional replica-zone check at the end of this page\n\n\n\n## Step-08: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-09: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests/\n\n# Verify if PD is deleted\nGo to Compute Engine -> Disks -> Search for the 4GB Regional SSD persistent disk.\nIt should be deleted. \n```\n\n\n\n## References \n- [Regional PD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd)
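\n\n## Optional: Verify Replica Zones from CLI\n- A regional PD keeps replicas in two zones; this optional check reads them directly (a sketch; take the disk name from the volume handle shown in `kubectl describe pv`):\n```t\n# Find the PV and the underlying disk name\nkubectl get pv\nkubectl describe pv <PV-NAME>\n\n# Show the two replica zones of the regional disk\ngcloud compute disks describe <DISK-NAME> --region us-central1 --format=\"value(replicaZones)\"\n```"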
  },
  {
    "path": "25-GKE-PD-Regional-PD/kube-manifests/00-storage-class.yaml",
    "content": "apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: regionalpd-storageclass\nprovisioner: pd.csi.storage.gke.io\nparameters:\n  #type: pd-standard # Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.\n  type: pd-ssd \n  replication-type: regional-pd\nvolumeBindingMode: WaitForFirstConsumer\n#allowedTopologies:  ##-->COMMENTED BECAUSE WE ARE USING REGIONAL GKE CLUSTER\n#- matchLabelExpressions:\n#  - key: topology.gke.io/zone\n#    values:\n#    - us-central1-c\n#    - us-central1-b\n\n## Important Note - Regional PD \n# 1. If using a regional GKE cluster, you can leave allowedTopologies unspecified. \n# 2. If you do this, when you create a Pod that consumes a \n#PersistentVolumeClaim which uses this StorageClass a regional persistent \n#disk is provisioned with two zones. One zone is the same as the zone \n#that the Pod is scheduled in. The other zone is randomly picked from \n#the zones available to the cluster.\n# 3. When using a zonal cluster, allowedTopologies must be set.    \n\n# STORAGE CLASS \n# 1. A StorageClass provides a way for administrators \n# to describe the \"classes\" of storage they offer.\n# 2. Here we are offering GCP PD Storage for GKE Cluster"
  },
  {
    "path": "25-GKE-PD-Regional-PD/kube-manifests/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: regionalpd-storageclass\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "25-GKE-PD-Regional-PD/kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment)  \n"
  },
  {
    "path": "25-GKE-PD-Regional-PD/kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n"
  },
  {
    "path": "25-GKE-PD-Regional-PD/kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "25-GKE-PD-Regional-PD/kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "25-GKE-PD-Regional-PD/kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment)  \n"
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n"
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/01-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/02-Volume-Snapshot/01-VolumeSnapshotClass.yaml",
    "content": "apiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n  name: my-snapshotclass\ndriver: pd.csi.storage.gke.io\ndeletionPolicy: Delete\n#parameters: \n#  storage-locations: us-east2\n\n# Optional Note: \n# To use a custom storage location, add a storage-locations parameter to the snapshot class. \n# To use this parameter, your clusters must use version 1.21 or later.\n\n\n"
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/02-Volume-Snapshot/02-VolumeSnapshot.yaml",
    "content": "apiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n  name: my-snapshot1\nspec:\n  volumeSnapshotClassName: my-snapshotclass\n  source:\n    persistentVolumeClaimName: mysql-pv-claim"
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/03-Volume-Restore/01-restore-pvc.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: pvc-restore\nspec:\n  dataSource:\n    name: my-snapshot1\n    kind: VolumeSnapshot\n    apiGroup: snapshot.storage.k8s.io\n  storageClassName: standard-rwo\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 4Gi"
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/03-Volume-Restore/02-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            #claimName: mysql-pv-claim\n            claimName: pvc-restore\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n"
  },
  {
    "path": "26-GKE-PD-Volume-Snapshots-and-Restore/README.md",
    "content": "---\ntitle: GKE Persistent Disks - Volume Snapshots and Restore\ndescription: Use Google Disks Volume Snapshots and Restore Concepts applied for Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster.\n\n## Step-01: Introduction\n1. Deploy UMS WebApp with `01-kube-manifests`\n2. Create new User (admin102, admin103)\n3. Create Volume Snapshot Kubernetes Objects and Deploy them\n4. Delete User (admin102, admin103)\n5. Deploy PVC Restore `03-Volume-Restore`\n6. Verify if after restore 2 more users what we deleted got restored in our UMS App\n7. Clean Up (kubectl delete -R -f <Folder>)\n\n## Step-02:  Kubernetes YAML Manifests\n- **Project Folder:** 01-kube-manifests\n- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`\n- 01-persistent-volume-claim.yaml\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-03: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-04: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB` Persistent Disk\n- **Observation:** Review the below items\n  - **Zones:** us-central1-c\n  - **Type:** Balanced persistent disk\n  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk\n\n## Step-05: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User admin102\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n\n# Create New User admin103\nUsername: admin103\nPassword: password103\nFirst Name: fname103\nLast Name: lname103\nEmail Address: admin103@stacksimplify.com\nSocial Security Address: ssn103\n```\n\n## Step-06: 02-Volume-Snapshot: Create Volume Snapshots\n- **Project Folder:** 02-Volume-Snapshot\n### Step-06-01: 01-VolumeSnapshotClass.yaml\n```yaml\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n  name: my-snapshotclass\ndriver: pd.csi.storage.gke.io\ndeletionPolicy: Delete\n#parameters: \n#  storage-locations: us-east2\n\n# Optional Note: \n# To use a custom storage location, add a storage-locations parameter to the snapshot class. 
\n# To use this parameter, your clusters must use version 1.21 or later.\n```\n### Step-06-02: 02-VolumeSnapshot.yaml\n```yaml\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n  name: my-snapshot1\nspec:\n  volumeSnapshotClassName: my-snapshotclass\n  source:\n    persistentVolumeClaimName: mysql-pv-claim\n```\n### Step-06-03: Deploy Volume Snapshot Kubernetes Manifests\n```t\n# Deploy Volume Snapshot Kubernetes Manifests\nkubectl apply -f 02-Volume-Snapshot/\n\n# List VolumeSnapshotClass\nkubectl get volumesnapshotclass\n\n# Describe VolumeSnapshotClass\nkubectl describe volumesnapshotclass my-snapshotclass\n\n# List VolumeSnapshot\nkubectl get volumesnapshot\n\n# Describe VolumeSnapshot\nkubectl describe volumesnapshot my-snapshot1\n\n# Verify the Snapshots\nGo to Compute Engine -> Storage -> Snapshots\nObservation:\n1. You should find the new snapshot created\n2. Review the \"Creation Time\"\n3. Review the \"Disk Size: 4GB\"\n```\n\n## Step-07: Delete users admin102, admin103\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Delete Users\nadmin102\nadmin103\n```\n\n\n## Step-08: 03-Volume-Restore: Create Volume Restore\n- **Project Folder:** 03-Volume-Restore\n- Before restoring, optionally confirm the snapshot is ready (see the CLI check at the end of this page)\n### Step-08-01: 01-restore-pvc.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: pvc-restore\nspec:\n  dataSource:\n    name: my-snapshot1\n    kind: VolumeSnapshot\n    apiGroup: snapshot.storage.k8s.io\n  storageClassName: standard-rwo\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 4Gi\n```\n### Step-08-02: 02-mysql-deployment.yaml\n- Update Claim Name from `claimName: mysql-pv-claim` to `claimName: pvc-restore` \n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                        \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            #claimName: mysql-pv-claim\n            claimName: pvc-restore\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n```\n### Step-08-03: Deploy Volume Restore Kubernetes Manifests\n```t\n# Deploy Volume Restore Kubernetes Manifests\nkubectl apply -f 03-Volume-Restore/\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n\n# Restart Deployments (Optional - If ERRORS)\nkubectl rollout restart deployment mysql\nkubectl rollout restart deployment usermgmt-webapp\n\n# Review Persistent Disk\n1. Go to Compute Engine -> Storage -> Disks\n2. You should find a new \"Balanced persistent disk\" created as part of the new PVC \"pvc-restore\"\n3. 
To get the exact Disk name for the \"pvc-restore\" PVC, run the command \"kubectl get pvc\"\n\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\nObservation:\n1. You should find admin102, admin103 present\n2. That proves we have restored the MySQL Data using VolumeSnapshots and PVC\n```\n\n## Step-09: Clean-Up\n```t\n# Delete All (Disks, Snapshots)\nkubectl delete -f 01-kube-manifests -f 02-Volume-Snapshot -f 03-Volume-Restore\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List VolumeSnapshotClass\nkubectl get volumesnapshotclass\n\n# List VolumeSnapshot\nkubectl get volumesnapshot\n\n# Verify Persistent Disks\n1. Go to Compute Engine -> Storage -> Disks -> REFRESH\n2. The two disks created as part of this demo are deleted\n\n# Verify Disk Snapshots\n1. Go to Compute Engine -> Storage -> Snapshots -> REFRESH\n2. There should not be any snapshots left that we created as part of this demo. \n```\n
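\n## Optional: Check Snapshot Readiness from CLI\n- Before the restore in Step-08, the snapshot must be ready to use; this optional check reads the status directly (a sketch using the object names from this demo):\n```t\n# Should print: true\nkubectl get volumesnapshot my-snapshot1 -o jsonpath='{.status.readyToUse}'\n\n# The bound VolumeSnapshotContent maps to the underlying Compute Engine snapshot\nkubectl get volumesnapshotcontent\n```\n"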
  },
  {
    "path": "27-GKE-PD-Volume-Clone/01-kube-manifests/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/01-kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment)  \n"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/01-kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/01-kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "27-GKE-PD-Volume-Clone/01-kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "27-GKE-PD-Volume-Clone/01-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/01-podpvc-clone.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: podpvc-clone\nspec:\n  dataSource:\n    name: mysql-pv-claim # the name of the source PersistentVolumeClaim that you created as part of UMS Web App\n    kind: PersistentVolumeClaim\n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo  # same as the StorageClass of the source PersistentVolumeClaim.   \n  resources:\n    requests:\n      storage: 4Gi # the amount of storage to request, which must be at least the size of the source PersistentVolumeClaim"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script2\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment)  \n"
  },
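  {
    "path": "27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/EXAMPLE-configmap-as-env.yaml",
    "content": "# Illustrative sketch only -- not referenced by the course steps; the file and\n# Pod names are assumptions. The ConfigMap comments above note that Pods can\n# also consume ConfigMaps as environment variables; this hypothetical Pod\n# shows that form using the usermanagement-dbcreation-script2 ConfigMap\n# defined in this folder.\napiVersion: v1\nkind: Pod\nmetadata:\n  name: configmap-env-example\nspec:\n  restartPolicy: Never\n  containers:\n    - name: app\n      image: busybox:1.31\n      command: ['sh', '-c', 'echo \"$DB_INIT_SQL\"'] # prints the SQL script and exits\n      env:\n        - name: DB_INIT_SQL\n          valueFrom:\n            configMapKeyRef:\n              name: usermanagement-dbcreation-script2 # ConfigMap from this folder\n              key: mysql_usermgmt.sql\n"
  },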
  {
    "path": "27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql2\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql2\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql2\n    spec: \n      containers:\n        - name: mysql2\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            #claimName: mysql-pv-claim\n            claimName: podpvc-clone\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script2\n\n"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql2\nspec:\n  selector:\n    app: mysql2 \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp2\n  labels:\n    app: usermgmt-webapp2\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp2\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp2\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql2 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp2\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql2\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "27-GKE-PD-Volume-Clone/02-Use-Cloned-Volume-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp2-lb-service\n  labels: \n    app: usermgmt-webapp2\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp2\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
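  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/EXAMPLE-custom-storage-class.yaml",
    "content": "# Illustrative sketch only -- not referenced by the course steps; the\n# StorageClass name is an assumption. The PVC above relies on the pre-installed\n# standard-rwo StorageClass for dynamic provisioning; a roughly equivalent\n# custom StorageClass for the Compute Engine PD CSI driver would look like this.\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: example-pd-balanced\nprovisioner: pd.csi.storage.gke.io # Compute Engine persistent disk CSI driver\nvolumeBindingMode: WaitForFirstConsumer # provision in the zone where the Pod is scheduled\nallowVolumeExpansion: true\nparameters:\n  type: pd-balanced # the disk type that backs standard-rwo\n"
  },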
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment)  \n"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      nodeSelector:\n        nodetype: db\n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n"
  },
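  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/EXAMPLE-node-affinity.yaml",
    "content": "# Illustrative sketch only -- not referenced by the course steps; names are\n# assumptions. nodeSelector (used in the Deployment above) is the simplest\n# node-placement mechanism; the same nodetype=db constraint expressed as node\n# affinity looks like this.\napiVersion: v1\nkind: Pod\nmetadata:\n  name: node-affinity-example\nspec:\n  affinity:\n    nodeAffinity:\n      requiredDuringSchedulingIgnoredDuringExecution:\n        nodeSelectorTerms:\n          - matchExpressions:\n              - key: nodetype # node label applied via kubectl label nodes\n                operator: In\n                values:\n                  - db\n  containers:\n    - name: pause\n      image: registry.k8s.io/pause:3.9\n"
  },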
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/01-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/01-podpvc-clone.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: podpvc-clone\nspec:\n  dataSource:\n    name: mysql-pv-claim # the name of the source PersistentVolumeClaim that you created as part of UMS Web App\n    kind: PersistentVolumeClaim\n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo  # same as the StorageClass of the source PersistentVolumeClaim.   \n  resources:\n    requests:\n      storage: 4Gi # the amount of storage to request, which must be at least the size of the source PersistentVolumeClaim"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script2\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. We are going to use this in our MySQL Deployment)  \n"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql2\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql2\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql2\n    spec: \n      nodeSelector:\n        nodetype: db\n      containers:\n        - name: mysql2\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            #claimName: mysql-pv-claim\n            claimName: podpvc-clone\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script2\n\n"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql2\nspec:\n  selector:\n    app: mysql2 \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp2\n  labels:\n    app: usermgmt-webapp2\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp2\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp2\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql2 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp2\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql2\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            "
  },
  {
    "path": "27-GKE-PD-Volume-Clone/03-With-NodeSelectors/02-Use-Cloned-Volume-kube-manifests/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp2-lb-service\n  labels: \n    app: usermgmt-webapp2\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp2\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "27-GKE-PD-Volume-Clone/README.md",
    "content": "---\ntitle: GKE Persistent Disks - Volume Clone\ndescription: Use Google Disks Volume Clone for GKE Workloads\n---\n\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster.\n\n\n## Step-01: Introduction\n- Understand how to implement cloned Disks in GKE\n\n## Step-02:  Kubernetes YAML Manifests\n- **Project Folder:** 01-kube-manifests\n- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`\n- 01-persistent-volume-claim.yaml\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-03: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-04: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB` Persistent Disk\n- **Observation:** Review the below items\n  - **Zones:** us-central1-c\n  - **Type:** Balanced persistent disk\n  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk\n\n## Step-05: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User admin102\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n\n# Create New User admin103\nUsername: admin103\nPassword: password103\nFirst Name: fname103\nLast Name: lname103\nEmail Address: admin103@stacksimplify.com\nSocial Security Address: ssn103\n```\n\n## Step-06: Volume Clone: 01-podpvc-clone.yaml\n- **Project Folder:** 02-Use-Cloned-Volume-kube-manifests\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: podpvc-clone\nspec:\n  dataSource:\n    name: mysql-pv-claim # the name of the source PersistentVolumeClaim that you created as part of UMS Web App\n    kind: PersistentVolumeClaim\n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo  # same as the StorageClass of the source PersistentVolumeClaim.   
\n  resources:\n    requests:\n      storage: 4Gi # the amount of storage to request, which must be at least the size of the source PersistentVolumeClaim\n```\n\n## Step-07: 03-mysql-deployment.yaml\n- **Change-1:** Change the `claimName: mysql-pv-claim` to `claimName: podpvc-clone`\n- \n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql2\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql2\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql2\n    spec: \n      containers:\n        - name: mysql2\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                         \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            #claimName: mysql-pv-claim\n            claimName: podpvc-clone\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script2\n```\n\n## Step-08:  Kubernetes YAML Manifests\n- **Project Folder:** 02-Use-Cloned-Volume-kube-manifests\n- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`\n- For all the resource names and labels append with 2 (Example: mysql to mysql2, usermgmt-webapp to usermgmt-webapp2)\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-09: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 02-Use-Cloned-Volume-kube-manifests/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp2-6ff7d7d849-7lrg5\n```\n\n## Step-10: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB` Persistent Disk\n- **Observation:** Review the below items\n  - **Type:** Balanced persistent disk\n  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk\n\n## Step-11: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\nObservation:\n1. You should see both \"admin102\" and \"admin103\" users already present.\n2. 
This is because we have used the cloned disk from \"01-kube-manifests\"\n```\n\n## Step-12: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f 01-kube-manifests -f 02-Use-Cloned-Volume-kube-manifests\n```\n\n\n```t\n# Reference\nhttps://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/\n\n# Get Nodes\nkubectl get nodes \n\n# Show Node Labels\nkubectl get nodes --show-labels\n\n# Label Node\nkubectl label nodes <your-node-name> nodetype=db\nkubectl label nodes gke-standard-cluster-pri-default-pool-4f7ab141-p0gz nodetype=db\n\n# Show Node Labels\nkubectl get nodes --show-labels\n```\n"
  },
  {
    "path": "28-GKE-Storage-with-GCP-CloudSQL-Public/README.md",
    "content": "---\ntitle: GKE Storage with GCP Cloud SQL - MySQL Public Instance\ndescription: Use GCP Cloud SQL MySQL DB for GKE Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --zone <ZONE> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-1 --zone us-central1-c --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Private Cluster \n- GCP Cloud SQL with Public IP and Authorized Network for DB as entire internet (0.0.0.0/0)\n\n## Step-02: Create Google Cloud SQL MySQL Instance\n- Go to SQL -> Choose MySQL\n- **Instance ID:** ums-db-public-instance\n- **Password:** KalyanReddy13\n- **Database Version:** MYSQL 8.0\n- **Choose a configuration to start with:** Development\n- **Choose region and zonal availability**\n  - **Region:** US-central1(IOWA)\n  - **Zonal availability:** Single Zone\n  - **Primary Zone:** us-central1-a\n- **Customize your instance**\n- **Machine Type**\n  - **Machine Type:** LightWeight (1 vCPU, 3.75GB)\n- **STORAGE**  \n  - **Storage Type:** HDD\n  - **Storage Capacity:** 10GB \n  - **Enable automatic storage increases:** CHECKED\n- **CONNECTIONS**  \n  - **Instance IP Assignment:** \n    - **Private IP:** UNCHECKED\n    - **Public IP:** CHECKED\n  - **Authorized networks**\n    - **Name:** All-Internet\n    - **Network:**  0.0.0.0/0     \n    - Click on **DONE**\n- **DATA PROTECTION**\n  - **Automatic Backups:** UNCHECKED\n  - **Enable Deletion protection:** UNCHECKED\n- **Maintenance:** Leave to defaults\n- **Flags:** Leave to defaults\n- **Labels:** Leave to defaults\n- Click on **CREATE INSTANCE**      \n\n## Step-03: Perform Telnet Test from local desktop\n```t\n# Telnet Test\ntelnet <MYSQL-DB-PUBLIC-IP> 3306\n\n# Replace Public IP\ntelnet 35.184.228.151 3306\n\n## SAMPLE OUTPUT\nKalyans-Mac-mini:25-GKE-Storage-with-GCP-Cloud-SQL kalyanreddy$ telnet 35.184.228.151 3306\nTrying 35.184.228.151...\nConnected to 151.228.184.35.bc.googleusercontent.com.\nEscape character is '^]'.\nQ\n8.0.26-google?h'Sxcr+?nd'h<a(X`z=mysql_native_password2#08S01Got timeout reading communication packetsConnection closed by foreign host.\nKalyans-Mac-mini:25-GKE-Storage-with-GCP-Cloud-SQL kalyanreddy$\n```\n\n\n## Step-04: Create DB Schema webappdb \n- Go to SQL ->  ums-db-public-instance -> Databases -> **CREATE DATABASE**\n- **Database Name:** webappdb\n- **Character set:** utf8\n- **Collation:** Default collation\n- Click on **CREATE**\n\n\n## Step-05: 01-MySQL-externalName-Service.yaml\n- Update Cloud SQL MySQL DB `Public IP` in ExternalName Service\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-externalname-service\nspec:\n  type: ExternalName\n  externalName: 35.184.228.151\n```\n\n## Step-06: 02-Kubernetes-Secrets.yaml\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\ntype: Opaque\ndata: \n  db-password: S2FseWFuUmVkZHkxMw==\n\n# Base64 of KalyanReddy13\n# https://www.base64encode.org/\n# Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw==\n```\n\n## Step-07: 03-UserMgmtWebApp-Deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: 
usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql-externalname-service\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password   \n```\n\n## Step-08: 04-UserMgmtWebApp-LoadBalancer-Service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port\n```\n\n## Step-09: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n\n## Step-10: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-11: Connect to MySQL DB (Cloud SQL) from GKE Cluster using kubectl\n```t\n## Verify from the Kubernetes Cluster that we are able to connect to MySQL DB\n# Template\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ExternalName-Service> -u <USER_NAME> -p<PASSWORD>\n\n# MySQL Client 8.0: Replace External Name Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n## Step-12: Create New user admin102 and verify in Cloud SQL MySQL webappdb\n```t\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User (used to verify the new row is stored in the Cloud SQL webappdb database)\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n\n# MySQL Client 8.0: Replace External Name Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n## Step-13: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests/\n\n# Delete Cloud SQL MySQL Instance\n1. Go to SQL ->  ums-db-public-instance -> DELETE\n2. 
Instance ID: ums-db-public-instance\n3. Click on DELETE\n```\n"
  },
  {
    "path": "28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/01-MySQL-externalName-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-externalname-service\nspec:\n  type: ExternalName\n  externalName: 35.226.81.153"
  },
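  {
    "path": "28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/EXAMPLE-selectorless-service.yaml",
    "content": "# Illustrative sketch only -- not referenced by the course steps; the Service\n# name is an assumption. An ExternalName Service maps a name to a DNS CNAME\n# record, so pointing it at a raw IP (as above) depends on resolver behavior\n# and is discouraged by the Kubernetes docs. The documented way to front a\n# fixed IP is a selector-less Service plus a manually managed Endpoints object.\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-fixed-ip\nspec:\n  ports:\n    - port: 3306\n---\napiVersion: v1\nkind: Endpoints\nmetadata:\n  name: mysql-fixed-ip # must match the Service name\nsubsets:\n  - addresses:\n      - ip: 35.226.81.153 # Cloud SQL public IP from the ExternalName example above\n    ports:\n      - port: 3306\n"
  },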
  {
    "path": "28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/02-Kubernetes-Secrets.yaml",
    "content": "apiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\ntype: Opaque\ndata: \n  db-password: S2FseWFuUmVkZHkxMw==\n\n# Base64 of KalyanReddy13\n# https://www.base64encode.org/\n# Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw=="
  },
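  {
    "path": "28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/EXAMPLE-secret-stringdata.yaml",
    "content": "# Illustrative sketch only -- not referenced by the course steps; the Secret\n# name is an assumption. The Secret above stores the password base64-encoded\n# under data; the stringData field shown here accepts the plain value and the\n# API server base64-encodes it for you, avoiding the manual encoding step.\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password-example\ntype: Opaque\nstringData:\n  db-password: KalyanReddy13\n"
  },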
  {
    "path": "28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/03-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql-externalname-service\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password   "
  },
  {
    "path": "28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/04-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
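  {
    "path": "28-GKE-Storage-with-GCP-CloudSQL-Public/kube-manifests/EXAMPLE-internal-lb-service.yaml",
    "content": "# Illustrative sketch only -- not referenced by the course steps; the Service\n# name is an assumption. The LoadBalancer Service above provisions an external\n# passthrough Network Load Balancer; on GKE, the annotation below provisions an\n# internal load balancer reachable only inside the VPC instead.\napiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-ilb-service\n  annotations:\n    networking.gke.io/load-balancer-type: \"Internal\"\nspec:\n  type: LoadBalancer\n  selector:\n    app: usermgmt-webapp\n  ports:\n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port\n"
  },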
  {
    "path": "29-GKE-Storage-with-GCP-CloudSQL-Private/README.md",
    "content": "---\ntitle: GKE Storage with GCP Cloud SQL - MySQL Private Instance\ndescription: Use GCP Cloud SQL MySQL DB for GKE Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n\n## Step-01: Introduction\n- GKE Private Cluster \n- GCP Cloud SQL with Private IP \n\n\n## Step-02: Create Private Service Connection to Google Managed Services from our VPC Network\n## Step-02-01: Create ALLOCATED IP RANGES FOR SERVICES\n- Go to VPC Networks -> default -> PRIVATE SERVICE CONNECTION -> ALLOCATED IP RANGES FOR SERVICES\n- Click on **ALLOCATE IP RANGE**\n- **Name:** google-managed-services-default  (google-managed-services-<VPC-NAME>)\n- **Description:** google-managed-services-default  \n- **IP Range:** Automatic\n- **Prefix Length:** 16\n- Click on **ALLOCATE** \n\n## Step-02-02: Create PRIVATE CONNECTIONS TO SERVICES\n- Delete existing connection if any present `servicenetworking-googleapis-com`\n- Click on **CREATE CONNECTION**\n- **Connected Service Provider:** Google Cloud Platform\n- **Connection Name:** servicenetworking-googleapis-com (DEFAULT POPULATED CANNOT CHANGE)\n- **Assigned IP Allocation:** google-managed-services-default  \n- Click on **CONNECT**\n\n## Step-03: Create Google Cloud SQL MySQL Instance\n- Go to SQL -> Choose MySQL\n- **Instance ID:** ums-db-private-instance\n- **Password:** KalyanReddy13\n- **Database Version:** MYSQL 8.0\n- **Choose a configuration to start with:** Development\n- **Choose region and zonal availability**\n  - **Region:** US-central1(IOWA)\n  - **Zonal availability:** Single Zone\n  - **Primary Zone:** us-central1-c\n- **Customize your instance**\n  - **Machine Type:** LightWeight (1 vCPU, 3.75GB)\n  - **Storage Type:** HDD\n  - **Storage Capacity:** 10GB \n  - **Enable automatic storage increases:** CHECKED\n  - **Instance IP Assignment:** \n    - **Private IP:** CHECKED\n      - **Associated networking:** default\n      - **MESSAGE:** Private services access connection for network default has been successfully created. You will now be able to use the same network across all your project's managed services. 
If you would like to change this connection, please visit the Networking page.\n      - **Allocated IP range (optional):** google-managed-services-default\n    - **Public IP:** UNCHECKED\n  - **Authorized networks:** NOT ADDED ANYTHING\n- **Data Protection**\n  - **Automatic Backups:** UNCHECKED\n- **Instance deletion protection:** UNCHECKED  \n- **Maintenance:** Leave to defaults\n- **Flags:** Leave to defaults\n- **Labels:** Leave to defaults\n- Click on **CREATE INSTANCE**      \n\n\n## Step-04: Create DB Schema webappdb \n- Go to SQL ->  ums-db-public-instance -> Databases -> **CREATE DATABASE**\n- **Database Name:** webappdb\n- **Character set:** utf8\n- **Collation:** Default collation\n- Click on **CREATE**\n\n\n## Step-05: 01-MySQL-externalName-Service.yaml\n- Update Cloud SQL MySQL DB `Private IP` in ExternalName Service\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-externalname-service\nspec:\n  type: ExternalName\n  externalName: 10.64.18.3\n```\n\n## Step-06: 02-Kubernetes-Secrets.yaml\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\ntype: Opaque\ndata: \n  db-password: S2FseWFuUmVkZHkxMw==\n\n# Base64 of KalyanReddy13\n# https://www.base64encode.org/\n# Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw==\n```\n\n## Step-07: 03-UserMgmtWebApp-Deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! 
nc -z mysql-externalname-service 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql-externalname-service\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password   \n```\n\n## Step-08: 04-UserMgmtWebApp-LoadBalancer-Service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port\n```\n\n## Step-09: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n\n## Step-10: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-11: Connect to MySQL DB (Cloud SQL) from GKE Cluster using kubectl\n```t\n## Verify from Kubernetes Cluster, we are able to connect to MySQL DB\n# Template\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ExternalName-Service> -u <USER_NAME> -p<PASSWORD>\n\n# MySQL Client 8.0: Replace External Name Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n## Step-12: Create New user admin102 and verify in Cloud SQL MySQL webappdb\n```t\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User (Used for testing  `allowVolumeExpansion: true` Option)\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n\n# MySQL Client 8.0: Replace External Name Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n## Step-13: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests/\n\n# Important Note: \nDONT DELETE GCP Cloud SQL Instance. 
We will use it in next demo and clean-up\n```\n\n## References\n- [Private Service Access with MySQL](https://cloud.google.com/sql/docs/mysql/configure-private-services-access#console)\n- [Private Service Access](https://cloud.google.com/vpc/docs/private-services-access)\n- [VPC Network Peering Limits](https://cloud.google.com/vpc/docs/quota#vpc-peering)\n- [Configuring Private Service Access](https://cloud.google.com/vpc/docs/configure-private-services-access)\n- [Additional Reference Only - Enabling private services access](https://cloud.google.com/service-infrastructure/docs/enabling-private-services-access)\n\n\n"
  },
  {
    "path": "29-GKE-Storage-with-GCP-CloudSQL-Private/kube-manifests/01-MySQL-externalName-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-externalname-service\nspec:\n  type: ExternalName\n  externalName: 10.80.0.3"
  },
  {
    "path": "29-GKE-Storage-with-GCP-CloudSQL-Private/kube-manifests/02-Kubernetes-Secrets.yaml",
    "content": "apiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\ntype: Opaque\ndata: \n  db-password: S2FseWFuUmVkZHkxMw==\n\n# Base64 of KalyanReddy13\n# https://www.base64encode.org/\n# Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw=="
  },
  {
    "path": "29-GKE-Storage-with-GCP-CloudSQL-Private/kube-manifests/03-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql-externalname-service\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password   "
  },
  {
    "path": "29-GKE-Storage-with-GCP-CloudSQL-Private/kube-manifests/04-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "30-GCP-CloudSQL-Private-NO-ExternalNameService/README.md",
    "content": "---\ntitle: GKE Storage with GCP Cloud SQL - Without ExternalName Service\ndescription: Use GCP Cloud SQL MySQL DB for GKE Workloads without ExternalName Service\n---\n\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Private Cluster \n- GCP Cloud SQL with Private IP \n- [GKE is a Fully Integrated Network Model](https://cloud.google.com/architecture/gke-compare-network-models)\n- GKE is a Fully Integrated Network Model for Kubernetes, that said without ExternalName service we can directly connect to Private or Public IP of Cloud SQL from GKE Cluster itself. \n- We are going to update the UMS Kubernetes Deployment `DB_HOSTNAME` with `Cloud SQL Private IP` and it should work without any issues. \n\n\n\n## Step-02: 03-UserMgmtWebApp-Deployment.yaml\n- **Change-1:** Update Cloud SQL IP Address in `command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z 10.64.18.3 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']`\n- **Change-2:** Update Cloud SQL IP Address for `DB_HOSTNAME` value `value: 10.64.18.3`\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          #command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! 
nc -z 10.64.18.3 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']                \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              #value: \"mysql-externalname-service\"            \n              value: 10.64.18.3\n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password   \n```\n\n\n## Step-03: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n\n## Step-04: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-05: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests/\n\n# Delete Cloud SQL MySQL Instance\n1. Go to SQL ->  ums-db-private-instance -> DELETE\n2. Instance ID: ums-db-private-instance\n3. Click on DELETE\n```\n\n## References\n- [Private Service Access with MySQL](https://cloud.google.com/sql/docs/mysql/configure-private-services-access#console)\n- [Private Service Access](https://cloud.google.com/vpc/docs/private-services-access)\n- [VPC Network Peering Limits](https://cloud.google.com/vpc/docs/quota#vpc-peering)\n- [Configuring Private Service Access](https://cloud.google.com/vpc/docs/configure-private-services-access)\n- [Additional Reference Only - Enabling private services access](https://cloud.google.com/service-infrastructure/docs/enabling-private-services-access)\n\n"
  },
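  {
    "path": "30-GCP-CloudSQL-Private-NO-ExternalNameService/kube-manifests/EXAMPLE-connectivity-check-pod.yaml",
    "content": "# Illustrative sketch only -- not referenced by the course steps; the Pod name\n# is an assumption. Because GKE's fully integrated network model lets Pods\n# reach the Cloud SQL private IP directly, a one-shot Pod like this can confirm\n# TCP reachability before deploying the app (IP matches the manifests below).\napiVersion: v1\nkind: Pod\nmetadata:\n  name: cloudsql-connectivity-check\nspec:\n  restartPolicy: Never\n  containers:\n    - name: nc-check\n      image: busybox:1.31\n      command: ['sh', '-c', 'nc -z -w 5 10.80.0.3 3306 && echo \"Cloud SQL reachable\" || echo \"Cloud SQL NOT reachable\"']\n"
  },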
  {
    "path": "30-GCP-CloudSQL-Private-NO-ExternalNameService/kube-manifests/01-Kubernetes-Secrets.yaml",
    "content": "apiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\ntype: Opaque\ndata: \n  db-password: S2FseWFuUmVkZHkxMw==\n\n# Base64 of KalyanReddy13\n# https://www.base64encode.org/\n# Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw=="
  },
  {
    "path": "30-GCP-CloudSQL-Private-NO-ExternalNameService/kube-manifests/02-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          #command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z 10.80.0.3 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']                \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              #value: \"mysql-externalname-service\"            \n              value: 10.80.0.3\n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password   "
  },
  {
    "path": "30-GCP-CloudSQL-Private-NO-ExternalNameService/kube-manifests/03-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "31-GKE-FileStore-default-StorageClass/README.md",
    "content": "---\ntitle: GKE Storage with GCP File Store - Default StorageClass\ndescription: Use GCP File Store for GKE Workloads with Default StorageClass\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Storage with GCP File Store - Default StorageClass\n\n\n## Step-02: Enable Filestore CSI driver\t(If not enabled)\n- Go to Kubernetes Engine -> standard-cluster-private-1 -> Details -> Features -> Filestore CSI driver\t\n- Click on Checkbox **Enable Filestore CSI Driver**\n- Click on **SAVE CHANGES**\n\n## Step-03: Verify if Filestore CSI Driver enabled\n```t\n# Verify Filestore CSI Daemonset in kube-system namespace\nkubectl -n kube-system get ds | grep file\nObservation: \n1. You should find the Daemonset with name \"filestore-node\"\n\n# Verify Filestore CSI Daemonset pods in kube-system namespace\nkubectl -n kube-system get pods | grep file\nObservation: \n1. You should find the pods with name \"filestore-node-*\"\n```\n\n## Step-04: Existing Storage Class\n- After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances:\n- **standard-rwx:** using the Basic HDD Filestore service tier\n- **premium-rwx:** using the Basic SSD Filestore service tier\n- **enterprise-rwx**\n- **enterprise-multishare-rwx**\n```t\n# Default Storage Class created as part of FileStore CSI Enablement\nkubectl get sc\nObservation: Below four storage class will be created by default\nstandard-rwx\npremium-rwx \nenterprise-rwx\nenterprise-multishare-rwx\n```\n\n## Step-05: 01-filestore-pvc.yaml\n```yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: gke-filestore-pvc\nspec:\n  accessModes:\n  - ReadWriteMany\n  storageClassName: standard-rwx\n  resources:\n    requests:\n      storage: 1Ti\n```\n\n## Step-06: 02-write-to-filestore-pod.yaml\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: filestore-writer-app\nspec:\n  containers:\n    - name: app\n      image: centos\n      command: [\"/bin/sh\"]\n      args: [\"-c\", \"while true; do echo GCP File Store used as PV in GKE $(date -u) >> /data/myapp1.txt; sleep 5; done\"]\n      volumeMounts:\n        - name: persistent-storage\n          mountPath: /data\n  volumes:\n    - name: persistent-storage\n      persistentVolumeClaim:\n        claimName: gke-filestore-pvc\n```\n\n## Step-07: 03-myapp1-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          volumeMounts:\n            - name: persistent-storage\n              mountPath: /usr/share/nginx/html/filestore\n      volumes:\n        - name: persistent-storage\n          persistentVolumeClaim:\n            claimName: gke-filestore-pvc   
           \n```\n\n## Step-08: 04-loadBalancer-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-09: Enable Cloud FileStore API (if not enabled)\n- Go to Search -> FileStore -> ENABLE\n\n## Step-09: Deploy kube-manifests\n```t\n# Deploy kube-manifests\nkubectl apply -f kube-manifests/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n``` \n\n## Step-10: Verify GCP Cloud FileStore Instance\n- Go to FileStore -> Instances\n- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**\n- **Note:** Instance ID dynamically generated, it can be different in your case starting with pvc-*\n\n## Step-11: Connect to filestore write app Kubernetes pods and Verify\n```t\n# FileStore write app - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty filestore-writer-app  -- /bin/sh\ncd /data\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-12: Connect to myapp1 Kubernetes pods and Verify\n```t\n# List Pods\nkubectl get pods \n\n# myapp1 POD1 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh\ncd /usr/share/nginx/html/filestore\nls\ntail -f myapp1.txt\nexit\n\n# myapp1 POD2 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97  -- /bin/sh\ncd /usr/share/nginx/html/filestore\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-13: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt\nhttp://35.232.145.61/filestore/myapp1.txt\ncurl http://35.232.145.61/filestore/myapp1.txt\n```\n\n\n## Step-14: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests/\n\n# Verify if FileStore Instance is deleted\nGo to -> FileStore -> Instances\n```"
  },
  {
    "path": "31-GKE-FileStore-default-StorageClass/kube-manifests/01-filestore-pvc.yaml",
    "content": "kind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: gke-filestore-pvc\nspec:\n  accessModes:\n  - ReadWriteMany\n  storageClassName: standard-rwx\n  resources:\n    requests:\n      storage: 1Ti\n"
  },
  {
    "path": "31-GKE-FileStore-default-StorageClass/kube-manifests/02-write-to-filestore-pod.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: filestore-writer-app\nspec:\n  containers:\n    - name: app\n      image: centos\n      command: [\"/bin/sh\"]\n      args: [\"-c\", \"while true; do echo GCP Cloud FileStore used as PV in GKE $(date -u) >> /data/myapp1.txt; sleep 5; done\"]\n      volumeMounts:\n        - name: persistent-storage\n          mountPath: /data\n  volumes:\n    - name: persistent-storage\n      persistentVolumeClaim:\n        claimName: gke-filestore-pvc"
  },
  {
    "path": "31-GKE-FileStore-default-StorageClass/kube-manifests/03-myapp1-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          volumeMounts:\n            - name: persistent-storage\n              mountPath: /usr/share/nginx/html/filestore\n      volumes:\n        - name: persistent-storage\n          persistentVolumeClaim:\n            claimName: gke-filestore-pvc            \n    "
  },
  {
    "path": "31-GKE-FileStore-default-StorageClass/kube-manifests/04-loadBalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "32-GKE-FileStore-custom-StorageClass/README.md",
    "content": "---\ntitle: GKE Storage with GCP File Store - Custom StorageClass\ndescription: Use GCP File Store for GKE Workloads with Custom StorageClass\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Storage with GCP File Store - Custom StorageClass\n\n\n## Step-02: Enable Filestore CSI driver\t(If not enabled)\n- Go to Kubernetes Engine -> standard-cluster-private -> Details -> Features -> Filestore CSI driver\t\n- Click on Checkbox **Enable Filestore CSI Driver**\n- Click on **SAVE CHANGES**\n\n## Step-03: Verify if Filestore CSI Driver enabled\n```t\n# Verify Filestore CSI Daemonset in kube-system namespace\nkubectl -n kube-system get ds | grep file\nObservation: \n1. You should find the Daemonset with name \"filestore-node\"\n\n# Verify Filestore CSI Daemonset pods in kube-system namespace\nkubectl -n kube-system get pods | grep file\nObservation: \n1. You should find the pods with name \"filestore-node-*\"\n```\n\n## Step-04: Existing Storage Class\n- After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances:\n- **standard-rwx:** using the Basic HDD Filestore service tier\n- **premium-rwx:** using the Basic SSD Filestore service tier\n```t\n# Default Storage Class created as part of FileStore CSI Enablement\nkubectl get sc\nObservation: Below two storage class will be created by default\nstandard-rwx\npremium-rwx \n```\n\n## Step-05: 00-filestore-storage-class.yaml\n```yaml\napiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: filestore-storage-class\nprovisioner: filestore.csi.storage.gke.io # File Store CSI Driver\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\nparameters:\n  tier: standard # Allowed values standard, premium, or enterprise\n  network: default # The network parameter can be used when provisioning Filestore instances on non-default VPCs. 
\n\n## Step-06: Other YAML files are the same as in the previous section\n- The following YAML files are the same as in the previous section\n- 01-filestore-pvc.yaml\n- 02-write-to-filestore-pod.yaml\n- 03-myapp1-deployment.yaml\n- 04-loadBalancer-service.yaml\n\n## Step-07: Deploy kube-manifests\n```t\n# Deploy kube-manifests\nkubectl apply -f kube-manifests/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n``` \n\n## Step-08: Verify GCP Cloud FileStore Instance\n- Go to FileStore -> Instances\n- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**\n- **Note:** The Instance ID is dynamically generated; it will be different in your case, starting with pvc-*\n\n## Step-09: Connect to filestore write app Kubernetes pods and Verify\n```t\n# FileStore write app - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty filestore-writer-app  -- /bin/sh\ncd /data\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-10: Connect to myapp1 Kubernetes pods and Verify\n```t\n# List Pods\nkubectl get pods \n\n# myapp1 POD1 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh\ncd /usr/share/nginx/html/filestore\nls\ntail -f myapp1.txt\nexit\n\n# myapp1 POD2 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97  -- /bin/sh\ncd /usr/share/nginx/html/filestore\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-11: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt\nhttp://35.232.145.61/filestore/myapp1.txt\ncurl http://35.232.145.61/filestore/myapp1.txt\n```\n\n\n## Step-12: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests/\n\n# Verify if FileStore Instance is deleted\nGo to -> FileStore -> Instances\n```"
  },
  {
    "path": "32-GKE-FileStore-custom-StorageClass/kube-manifests/00-filestore-storage-class.yaml",
    "content": "apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: filestore-storage-class\nprovisioner: filestore.csi.storage.gke.io # File Store CSI Driver\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\nparameters:\n  tier: standard # Allowed values standard, premium, or enterprise\n  network: default # The network parameter can be used when provisioning Filestore instances on non-default VPCs. Non-default VPCs require special firewall rules to be set up."
  },
  {
    "path": "32-GKE-FileStore-custom-StorageClass/kube-manifests/01-filestore-pvc.yaml",
    "content": "kind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: gke-filestore-pvc\nspec:\n  accessModes:\n  - ReadWriteMany\n  storageClassName: filestore-storage-class\n  resources:\n    requests:\n      storage: 1Ti\n"
  },
  {
    "path": "32-GKE-FileStore-custom-StorageClass/kube-manifests/02-write-to-filestore-pod.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: filestore-writer-app\nspec:\n  containers:\n    - name: app\n      image: centos\n      command: [\"/bin/sh\"]\n      args: [\"-c\", \"while true; do echo GCP Cloud FileStore used as PV in GKE $(date -u) >> /data/myapp1.txt; sleep 5; done\"]\n      volumeMounts:\n        - name: persistent-storage\n          mountPath: /data\n  volumes:\n    - name: persistent-storage\n      persistentVolumeClaim:\n        claimName: gke-filestore-pvc"
  },
  {
    "path": "32-GKE-FileStore-custom-StorageClass/kube-manifests/03-myapp1-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          volumeMounts:\n            - name: persistent-storage\n              mountPath: /usr/share/nginx/html/filestore\n      volumes:\n        - name: persistent-storage\n          persistentVolumeClaim:\n            claimName: gke-filestore-pvc            \n    "
  },
  {
    "path": "32-GKE-FileStore-custom-StorageClass/kube-manifests/04-loadBalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/01-myapp1-kube-manifests/01-filestore-pvc.yaml",
    "content": "kind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: gke-filestore-pvc\nspec:\n  accessModes:\n  - ReadWriteMany\n  storageClassName: standard-rwx\n  resources:\n    requests:\n      #storage: 1Ti\n      storage: 100Gi      "
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/01-myapp1-kube-manifests/02-write-to-filestore-pod.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: filestore-writer-app\nspec:\n  containers:\n    - name: app\n      image: centos\n      command: [\"/bin/sh\"]\n      args: [\"-c\", \"while true; do echo GCP Cloud FileStore used as PV in GKE $(date -u) >> /data/myapp1.txt; sleep 5; done\"]\n      volumeMounts:\n        - name: persistent-storage\n          mountPath: /data\n  volumes:\n    - name: persistent-storage\n      persistentVolumeClaim:\n        claimName: gke-filestore-pvc"
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/01-myapp1-kube-manifests/03-myapp1-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          volumeMounts:\n            - name: persistent-storage\n              mountPath: /usr/share/nginx/html/filestore\n      volumes:\n        - name: persistent-storage\n          persistentVolumeClaim:\n            claimName: gke-filestore-pvc            \n    "
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/01-myapp1-kube-manifests/04-loadBalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n\n# This will create a Classic Load Balancer\n# AWS will be retiring the EC2-Classic network on August 15, 2022.      "
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/02-volume-backup-kube-manifests/01-VolumeSnapshotClass.yaml",
    "content": "apiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n  name: csi-gcp-filestore-backup-snap-class\ndriver: filestore.csi.storage.gke.io\nparameters:\n  type: backup\ndeletionPolicy: Delete"
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/02-volume-backup-kube-manifests/02-VolumeSnapshot.yaml",
    "content": "apiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n  name: myapp1-volume-snapshot\nspec:\n  volumeSnapshotClassName: csi-gcp-filestore-backup-snap-class\n  source:\n    persistentVolumeClaimName: gke-filestore-pvc"
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/03-volume-restore-myapp2-kube-manifests/01-filestore-pvc.yaml",
    "content": "kind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: restored-filestore-pvc\nspec:\n  accessModes:\n  - ReadWriteMany\n  storageClassName: standard-rwx\n  resources:\n    requests:\n      storage: 1Ti\n  dataSource:\n    kind: VolumeSnapshot\n    name: myapp1-volume-snapshot\n    apiGroup: snapshot.storage.k8s.io      "
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/03-volume-restore-myapp2-kube-manifests/02-myapp2-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp2-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp2\n  template:  \n    metadata: # Dictionary\n      name: myapp2-pod\n      labels: # Dictionary\n        app: myapp2  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp2-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          volumeMounts:\n            - name: persistent-storage\n              mountPath: /usr/share/nginx/html/filestore\n      volumes:\n        - name: persistent-storage\n          persistentVolumeClaim:\n            claimName: restored-filestore-pvc\n    "
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/03-volume-restore-myapp2-kube-manifests/03-myapp2-loadBalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp2-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp2\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "33-GKE-FileStore-Backup-and-Restore/README.md",
    "content": "---\ntitle: GKE Storage with GCP File Store - Backup and Restore\ndescription: Use GCP File Store for GKE Workloads - Implement Backup and Restore\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Storage with GCP File Store \n- Implement Backups is `VolumeSnapshotClass` and `VolumeSnapshot`\n- Implement Restore of FileStore in myapp2 Application and Verify\n\n\n## Step-02: YAML files are same as first FileStore Demo\n- **Project Folder:** 01-myapp1-kube-manifests\n- YAML files are same as first FileStore Demo\n- 01-filestore-pvc.yaml\n- 02-write-to-filestore-pod.yaml\n- 03-myapp1-deployment.yaml\n- 04-loadBalancer-service.yaml\n\n## Step-03: Deploy 01-myapp1-kube-manifests and Verify\n```t\n# Deploy 01-myapp1-kube-manifests\nkubectl apply -f 01-myapp1-kube-manifests\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n``` \n\n## Step-04: Verify GCP Cloud FileStore Instance\n- Go to FileStore -> Instances\n- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**\n- **Note:** Instance ID dynamically generated, it can be different in your case starting with pvc-*\n\n## Step-05: Connect to filestore write app Kubernetes pods and Verify\n```t\n# FileStore write app - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty filestore-writer-app  -- /bin/sh\ncd /data\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-06: Connect to myapp1 Kubernetes pods and Verify\n```t\n# List Pods\nkubectl get pods \n\n# myapp1 POD1 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh\ncd /usr/share/nginx/html/filestore\nls\ntail -f myapp1.txt\nexit\n\n# myapp1 POD2 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97  -- /bin/sh\ncd /usr/share/nginx/html/filestore\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-07: Access Application\n```t\n# List Services\nkubectl get svc\n\n# myapp1 - Access Application\nhttp://<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt\nhttp://35.232.145.61/filestore/myapp1.txt\ncurl http://35.232.145.61/filestore/myapp1.txt\n```\n\n\n## Step-08: Volume Backup: 01-VolumeSnapshotClass.yaml\n- **Project Folder:** 02-volume-backup-kube-manifests\n```yaml\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshotClass\nmetadata:\n  name: csi-gcp-filestore-backup-snap-class\ndriver: filestore.csi.storage.gke.io\nparameters:\n  type: backup\ndeletionPolicy: Delete\n```\n\n## Step-09: Volume Backup: 02-VolumeSnapshot.yaml\n- **Project Folder:** 02-volume-backup-kube-manifests\n```yaml\napiVersion: snapshot.storage.k8s.io/v1\nkind: VolumeSnapshot\nmetadata:\n  name: myapp1-volume-snapshot\nspec:\n  volumeSnapshotClassName: csi-gcp-filestore-backup-snap-class\n  source:\n    persistentVolumeClaimName: gke-filestore-pvc\n```\n\n## Step-10: Volume Backup: Deploy 02-volume-backup-kube-manifests and Verify\n```t\n# Deploy 
\n\n## Step-10: Volume Backup: Deploy 02-volume-backup-kube-manifests and Verify\n```t\n# Deploy 02-volume-backup-kube-manifests\nkubectl apply -f 02-volume-backup-kube-manifests\n\n# List VolumeSnapshotClass\nkubectl get volumesnapshotclass\n\n# Describe VolumeSnapshotClass\nkubectl describe volumesnapshotclass csi-gcp-filestore-backup-snap-class\n\n# List VolumeSnapshot\nkubectl get volumesnapshot\n\n# Describe VolumeSnapshot\nkubectl describe volumesnapshot myapp1-volume-snapshot\n```\n\n## Step-11: Volume Backup: Verify GCP Cloud FileStore Backups\n- Go to FileStore -> Backups\n- Observation: You should find the Backup with name `snapshot-<SOME-ID>` (Example: snapshot-b4f24bd7-649b-45bb-8a0a-2b09d5b0e631)\n\n## Step-12: Volume Restore: 01-filestore-pvc.yaml\n- **Project Folder:** 03-volume-restore-myapp2-kube-manifests\n```yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: restored-filestore-pvc\nspec:\n  accessModes:\n  - ReadWriteMany\n  storageClassName: standard-rwx\n  resources:\n    requests:\n      storage: 1Ti\n  dataSource:\n    kind: VolumeSnapshot\n    name: myapp1-volume-snapshot\n    apiGroup: snapshot.storage.k8s.io\n```\n\n## Step-13: Volume Restore: 02-myapp2-deployment.yaml\n- **Project Folder:** 03-volume-restore-myapp2-kube-manifests\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata: # Dictionary\n  name: myapp2-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp2\n  template:  \n    metadata: # Dictionary\n      name: myapp2-pod\n      labels: # Dictionary\n        app: myapp2  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp2-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          volumeMounts:\n            - name: persistent-storage\n              mountPath: /usr/share/nginx/html/filestore\n      volumes:\n        - name: persistent-storage\n          persistentVolumeClaim:\n            claimName: restored-filestore-pvc\n```\n\n## Step-14: Volume Restore: 03-myapp2-loadBalancer-service.yaml\n- **Project Folder:** 03-volume-restore-myapp2-kube-manifests\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp2-lb-service\nspec:\n  type: LoadBalancer # Other options: ClusterIP, NodePort\n  selector:\n    app: myapp2\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-15: Volume Restore: Deploy 03-volume-restore-myapp2-kube-manifests and Verify\n```t\n# Deploy 03-volume-restore-myapp2-kube-manifests\nkubectl apply -f 03-volume-restore-myapp2-kube-manifests\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n\n# Verify if new FileStore Instance is Created\nGo to -> FileStore -> Instances\n```\n\n## Step-16: Volume Restore: Connect to myapp2 Kubernetes pods and Verify\n```t\n# List Pods\nkubectl get pods \n\n# myapp2 POD1 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty myapp2-deployment-6dccd6557-9x6dn -- /bin/sh\ncd /usr/share/nginx/html/filestore\nls\ntail -f myapp1.txt\nexit\n\n# myapp2 POD2 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- /bin/sh\nkubectl exec --stdin --tty myapp2-deployment-6dccd6557-mbbjm  -- /bin/sh\ncd /usr/share/nginx/html/filestore\nls\ntail -f myapp1.txt\nexit\n```
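\n\n- **Tip:** the two LoadBalancer Services expose separate external IPs. A small sketch to fetch them directly, assuming the Service names from the manifests above:\n```t\n# Fetch the External IPs of the myapp1 and myapp2 Services\nkubectl get svc myapp1-lb-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'\nkubectl get svc myapp2-lb-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'\n```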
\n\n## Step-17: Volume Restore: Access Applications\n```t\n# List Services\nkubectl get svc\n\n# myapp1 - Access Application\nhttp://<MYAPP1-EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt\nhttp://35.232.145.61/filestore/myapp1.txt\n\n\n# myapp2 - Access Application\nhttp://<MYAPP2-EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt\nhttp://34.71.145.41/filestore/myapp1.txt\n\n\nOBSERVATION: \n1. For MyApp1, the writer app keeps writing to FileStore, so we see the latest timestamped lines (many lines, and the file keeps growing)\n2. For MyApp2, the volume was restored from a backup, so only the lines present in the file at the time of the snapshot are displayed. \n3. The KEY here is that we are able to successfully use the FileStore backup for our Kubernetes workloads\n```\n\n## Step-18: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f 01-myapp1-kube-manifests -f 02-volume-backup-kube-manifests -f 03-volume-restore-myapp2-kube-manifests\n\n# Verify if two FileStore Instances are deleted\nGo to -> FileStore -> Instances\n\n# Verify if FileStore Backup is deleted\nGo to -> FileStore -> Backups\n```"
  },
  {
    "path": "34-GKE-Ingress-Basics/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress Basics\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Basics\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- Learn Ingress Basics\n- [Ingress Diagram Reference](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#ingress_to_resource_mappings)\n\n## Step-02: Verify HTTP Load Balancing enabled for your GKE Cluster\n- Go to Kubernetes Engine -> standard-cluster-private1 -> DETAILS tab -> Networking\n- Verify `HTTP Load Balancing: Enabled` \n\n\n## Step-03: Kubernetes Deployment: 01-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n```\n\n## Step-04: Kubernetes NodePort Service: 01-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n```\n\n## Step-05: 02-ingress-basic.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-basics\n  annotations:\n    # If the class annotation is not specified it defaults to \"gce\".\n    # gce: external load balancer\n    # gce-internal: internal load balancer\n    kubernetes.io/ingress.class: \"gce\"  \nspec:\n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                   \n```\n\n## Step-06: Deploy kube-manifests and Verify\n```t\n# Deploy kube-manifests\nkubectl apply -f kube-manifests/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress\nkubectl get ingress\nObservation:\n1. Wait for ADDRESS field to populate the Public IP Address\n\n# Describe Ingress \nkubectl describe ingress ingress-basics\n\n# Access Application\nhttp://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>\nImportant Note:\n1. If you get 502 error, wait for 2 to 3 mins and retry. \n2. 
\n\n## Step-07: Verify Load Balancer\n- Go to Load Balancing -> Click on Load balancer\n### Load Balancer View \n- DETAILS Tab\n  - Frontend\n  - Host and Path Rules\n  - Backend Services\n  - Health Checks\n- MONITORING TAB\n- CACHING TAB \n### Load Balancer Components View\n- FORWARDING RULES\n- TARGET PROXIES\n- BACKEND SERVICES\n- BACKEND BUCKETS\n- CERTIFICATES\n- TARGET POOLS\n\n## Step-08: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Verify if load balancer got deleted\nGo to Load Balancing -> Should not see any load balancers\n```\n\n## GKE Ingress References\n- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)\n- [Ingress Concepts](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress)\n- [Service Networking](https://cloud.google.com/kubernetes-engine/docs/concepts/service-networking)"
  },
  {
    "path": "34-GKE-Ingress-Basics/kube-manifests/01-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "34-GKE-Ingress-Basics/kube-manifests/02-ingress-basic.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-basics\n  annotations:\n    # If the class annotation is not specified it defaults to \"gce\".\n    # gce: external load balancer\n    # gce-internal: internal load balancer\n    kubernetes.io/ingress.class: \"gce\"  \nspec:\n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                  \n      \n"
  },
  {
    "path": "35-GKE-Ingress-Context-Path-Routing/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress Context Path Routing\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Context Path Routing\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- Ingress Context Path based Routing\n- Discuss about the Architecture we are going to build as part of this Section\n- We are going to deploy all these 3 apps in kubernetes with context path based routing enabled in Ingress Controller\n  - /app1/* - should go to app1-nginx-nodeport-service\n  - /app2/* - should go to app2-nginx-nodeport-service\n  - /*    - should go to  app3-nginx-nodeport-service\n\n\n## Step-02: Review Nginx App1, App2 & App3 Deployment & Service\n- Differences for all 3 apps will be only one field from kubernetes manifests perspective and additionally their naming conventions\n  - **Kubernetes Deployment:** Container Image name\n- **App1 Nginx: 01-Nginx-App1-Deployment-and-NodePortService.yaml**\n  - **image:** stacksimplify/kube-nginxapp1:1.0.0\n- **App2 Nginx: 02-Nginx-App2-Deployment-and-NodePortService.yaml**\n  - **image:** stacksimplify/kube-nginxapp2:1.0.0\n- **App3 Nginx: 03-Nginx-App3-Deployment-and-NodePortService.yaml**\n  - **image:** stacksimplify/kubenginx:1.0.0\n\n\n## Step-03: 04-Ingress-ContextPath-Based-Routing.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cpr\n  annotations:\n    # External Load Balancer  \n    kubernetes.io/ingress.class: \"gce\"  \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n#          - path: /\n#            pathType: Prefix\n#            backend:\n#              service:\n#                name: app3-nginx-nodeport-service\n#                port: \n#                  number: 80                           \n```\n\n## Step-04: Deploy kube-manifests and test\n```t\n# Deploy Kubernetes manifests\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-cpr\n```\n\n## Step-05: Access Application\n```t\n# Important Note\nWait for 2 to 3 minutes for the Load Balancer to completely create and ready for use else we will get HTTP 502 errors\n\n# Access Application\nhttp://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/app1/index.html\nhttp://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/app2/index.html\nhttp://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/\n```\n\n\n## Step-06: Verify Load Balancer\n- Go to Load Balancing -> Click on Load balancer\n### Load 
\n\n\n## Step-06: Verify Load Balancer\n- Go to Load Balancing -> Click on Load balancer\n### Load Balancer View \n- DETAILS Tab\n  - Frontend\n  - Host and Path Rules\n  - Backend Services\n  - Health Checks\n- MONITORING TAB\n- CACHING TAB \n### Load Balancer Components View\n- FORWARDING RULES\n- TARGET PROXIES\n- BACKEND SERVICES\n- BACKEND BUCKETS\n- CERTIFICATES\n- TARGET POOLS\n\n\n## Step-07: Clean Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n```\n"
  },
  {
    "path": "35-GKE-Ingress-Context-Path-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "35-GKE-Ingress-Context-Path-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\n  labels:\n    app: app2-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "35-GKE-Ingress-Context-Path-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "35-GKE-Ingress-Context-Path-Routing/kube-manifests/04-Ingress-ContextPath-Based-Routing.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cpr\n  annotations:\n    # External Load Balancer  \n    kubernetes.io/ingress.class: \"gce\"  \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n#          - path: /\n#            pathType: Prefix\n#            backend:\n#              service:\n#                name: app3-nginx-nodeport-service\n#                port: \n#                  number: 80                     \n             \n\n    \n    "
  },
  {
    "path": "36-GKE-Ingress-Custom-Health-Check/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Ingress Custom Health Check\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks using Readiness Probes \n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n\n## Step-01: Introduction\n- Ingress Context Path based Routing\n- Ingress Custom Health Checks for each application using Kubernetes Readiness Probes\n  - **App1 Health Check Path:** /app1/index.html\n  - **App2 Health Check Path:** /app2/index.html\n  - **App3 Health Check Path:** /index.html\n\n\n## Step-02: 01-Nginx-App1-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5       \n```\n\n## Step-03: 02-Nginx-App2-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5   \n```\n\n## Step-04: 03-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5     \n```\n\n## Step-05: 
\n\n## Step-05: 04-Ingress-Custom-Healthcheck.yaml\n- NO CHANGES FROM THE CONTEXT PATH ROUTING DEMO other than the Ingress name `ingress-custom-healthcheck`\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-custom-healthcheck\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n#          - path: /\n#            pathType: Prefix\n#            backend:\n#              service:\n#                name: app3-nginx-nodeport-service\n#                port: \n#                  number: 80\n```\n\n\n## Step-06: Deploy kube-manifests and verify\n```t\n# Deploy Kubernetes manifests\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-custom-healthcheck\n```\n\n## Step-07: Verify Health Checks\n- Go to Load Balancing -> Click on LB\n- DETAILS TAB\n  - Backend services -> First Backend -> Click on Health Check Link\n- OR\n- Go to Compute Engine -> Instance Groups -> Health Checks\n- Review all 3 Health Checks and their Paths  \n  - **App1 Health Check Path:** /app1/index.html\n  - **App2 Health Check Path:** /app2/index.html\n  - **App3 Health Check Path:** /index.html\n\n\n## Step-08: Access Application\n```t\n# Important Note\nWait 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors\n\n# Access Application\nhttp://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/app1/index.html\nhttp://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/app2/index.html\nhttp://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/\n```\n\n## Step-09: Clean Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Verify Load Balancer Deleted\nGo to Network Services -> Load Balancing -> No Load balancers should be present\n```\n\n## References\n- [GKE Ingress Healthchecks](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks)\n"
  },
  {
    "path": "36-GKE-Ingress-Custom-Health-Check/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "36-GKE-Ingress-Custom-Health-Check/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\n  labels:\n    app: app2-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "36-GKE-Ingress-Custom-Health-Check/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "36-GKE-Ingress-Custom-Health-Check/kube-manifests/04-Ingress-Custom-Healthcheck.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-custom-healthcheck\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n#          - path: /\n#            pathType: Prefix\n#            backend:\n#              service:\n#                name: app3-nginx-nodeport-service\n#                port: \n#                  number: 80                     \n             \n\n    "
  },
  {
    "path": "37-Google-Cloud-Domains/README.md",
    "content": "---\ntitle: Google Cloud Domains\ndescription: Register Domain Name using Google Cloud Domains\n---\n\n## Step-01: Introduction\n- Register Domain Name using Google Cloud Domains\n\n## Step-02: Register Domain\n- Go to Networking Services -> Cloud Domains -> Click on **REGISTER DOMAIN**\n- **Search Domain:** kalyanreddydaida.com\n  - Click on **SELECT**\n  - Click on **CONTINUE**\n- **DNS CONFIGURATION**\n  - **DNS Provider:** Use Cloud DNS (Recommended)\n  - Click on **CONTINUE**  \n- **Privacy protection**\n  - **Privacy Protection:** Privacy Protection ON  \n- **Contact details**\n  - Fill Contact Details\n- Click on **REGISTER**\n\n## Step-03: Review the new domain at Cloud Domains Page\n- Go to Networking Services -> Cloud Domains \n- Review all details populated correctly\n\n## Step-04: Cloud DNS\n- Go to Networking Services -> Cloud DNS -> kalyanreddydaida-com\n- Review all details"
  },
  {
    "path": "38-GKE-Ingress-ExternalIP/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress with External IP\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress with External IP\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n3. Registered Domain using Google Cloud Domains\n\n## Step-01: Introduction\n- Reserve an External IP Address\n- Using Annotaiton `kubernetes.io/ingress.global-static-ip-name` associate this External IP to Ingress Service\n\n## Step-02: Create External IP Address using gcloud\n```t\n# Create External IP Address\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip1 --global\n\n# Describe External IP Address \ngcloud compute addresses describe ADDRESS_NAME --global\ngcloud compute addresses describe gke-ingress-extip1 --global\n\n# List External IP Address\ngcloud compute addresses list\n\n# Verify\nGo to VPC Network -> IP Addresses -> External IP Address\n```\n\n## Step-03: Add RECORDSET Google Cloud DNS for this External IP\n- Go to Network Services -> Cloud DNS -> kalyanreddydaida.com -> **ADD RECORD SET**\n- DNS NAME: demo1.kalyanreddydaida.com\n- **IPv4 Address:** <EXTERNAL-IP-RESERVERD-IN-STEP-02>\n- Click on **CREATE**\n\n## Step-04: Verify DNS resolving to IP \n```t\n# nslookup test\nnslookup demo1.kalyanreddydaida.com\n\n## Sample Output\nKalyans-Mac-mini:google-kubernetes-engine kalyanreddy$ nslookup demo1.kalyanreddydaida.com\nServer:\t\t192.168.2.1\nAddress:\t192.168.2.1#53\n\nNon-authoritative answer:\nName:\tdemo1.kalyanreddydaida.com\nAddress: 34.120.32.120\n\nKalyans-Mac-mini:google-kubernetes-engine kalyanreddy$ \n```\n\n\n## Step-05: 04-Ingress-external-ip.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-external-ip\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80          \n```\n\n## Step-06: No changes to other 3 YAML Files\n- 01-Nginx-App1-Deployment-and-NodePortService.yaml\n- 02-Nginx-App2-Deployment-and-NodePortService.yaml\n- 03-Nginx-App3-Deployment-and-NodePortService.yaml\n\n## Step-07: Deploy kube-manifests and verify\n```t\n# Deploy Kubernetes manifests\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-external-ip\n```\n\n\n\n## Step-08: Access 
\n\n## Step-08: Access Application\n```t\n# Important Note\nWait 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors\n\n# Access Application\nhttp://<DNS-DOMAIN-NAME>/app1/index.html\nhttp://<DNS-DOMAIN-NAME>/app2/index.html\nhttp://<DNS-DOMAIN-NAME>/\n\n# Replace Domain Name registered in Cloud DNS\nhttp://demo1.kalyanreddydaida.com/app1/index.html\nhttp://demo1.kalyanreddydaida.com/app2/index.html\nhttp://demo1.kalyanreddydaida.com/\n```\n\n## Step-09: Clean Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Verify Load Balancer Deleted\nGo to Network Services -> Load Balancing -> No Load balancers should be present\n```\n\n"
  },
  {
    "path": "38-GKE-Ingress-ExternalIP/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "38-GKE-Ingress-ExternalIP/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\n  labels:\n    app: app2-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "38-GKE-Ingress-ExternalIP/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "38-GKE-Ingress-ExternalIP/kube-manifests/04-Ingress-external-ip.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-external-ip\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n    "
  },
  {
    "path": "39-GKE-Ingress-Google-Managed-SSL/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress SSL\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress SSL with Google Managed Certificates\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Registered Domain using Google Cloud Domains\n4. DNS name for which SSL Certificate should be created should already be added as DNS in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)\n\n\n## Step-01: Introduction\n- Google Managed Certificates for GKE Ingress\n- Ingress SSL\n- Certificate Validity: 90 days\n- 30 days before expiry google starts renewal process. We dont need to worry about it.\n- **Important Note:** Google-managed certificates are only supported with GKE Ingress using the external HTTP(S) load balancer. Google-managed certificates do not support third-party Ingress controllers.\n\n## Step-02: kube-manifest - NO CHANGES\n- 01-Nginx-App1-Deployment-and-NodePortService.yaml\n- 02-Nginx-App2-Deployment-and-NodePortService.yaml\n- 03-Nginx-App3-Deployment-and-NodePortService.yaml\n\n## Step-03: 05-Managed-Certificate.yaml\n- **Pre-requisite-1:** Registered Domain using Google Cloud Domains\n- **Pre-requisite-2:** DNS name for which SSL Certificate should be created should already be added as DNS in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)\n```yaml\napiVersion: networking.gke.io/v1\nkind: ManagedCertificate\nmetadata:\n  name: managed-cert-for-ingress\nspec:\n  domains:\n    - demo1.kalyanreddydaida.com\n```\n\n## Step-04: 04-Ingress-SSL.yaml\n- Add the annotation `networking.gke.io/managed-certificates` to Ingress Service with Managed Certificate name. 
\n\n## Step-05: Deploy kube-manifests and Verify\n```t\n# Deploy Kubernetes manifests\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-ssl\n```\n\n## Step-06: Verify Managed Certificates\n```t\n# List Managed Certificate\nkubectl get managedcertificate\n\n# Describe managed certificate\nkubectl describe managedcertificate managed-cert-for-ingress\nObservation: \n1. Wait for the Google-managed certificate to finish provisioning. \n2. This might take up to 60 minutes. \n3. The status of the certificate should change from PROVISIONING to ACTIVE\ndemo1.kalyanreddydaida.com: PROVISIONING\n\n# List Certificates\ngcloud compute ssl-certificates list\n```\n\n## Step-07: Verify SSL Certificates from Certificate Tab in Load Balancer\n### Load Balancers Component View\n- View in **Load Balancers Component View**\n- Click on **CERTIFICATES** tab\n\n### Load Balancers View\n- Review the FRONTEND with HTTPS Protocol and its associated Certificate\n\n\n## Step-08: Access Application\n```t\n# Important Note\nWait 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors\n\n# Access Application\nhttp://<DNS-DOMAIN-NAME>/app1/index.html\nhttp://<DNS-DOMAIN-NAME>/app2/index.html\nhttp://<DNS-DOMAIN-NAME>/\n\n# Note: Replace Domain Name registered in Cloud DNS\n# HTTP URLs\nhttp://demo1.kalyanreddydaida.com/app1/index.html\nhttp://demo1.kalyanreddydaida.com/app2/index.html\nhttp://demo1.kalyanreddydaida.com/\n\n# HTTPS URLs\nhttps://demo1.kalyanreddydaida.com/app1/index.html\nhttps://demo1.kalyanreddydaida.com/app2/index.html\nhttps://demo1.kalyanreddydaida.com/\n```\n\n## References\n- https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs\n- https://cloud.google.com/load-balancing/docs/ssl-certificates/troubleshooting\n- https://github.com/GoogleCloudPlatform/gke-managed-certs"
  },
  {
    "path": "39-GKE-Ingress-Google-Managed-SSL/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "39-GKE-Ingress-Google-Managed-SSL/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\n  labels:\n    app: app2-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "39-GKE-Ingress-Google-Managed-SSL/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "39-GKE-Ingress-Google-Managed-SSL/kube-manifests/04-Ingress-SSL.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io/managed-certificates: managed-cert-for-ingress\nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n\n    "
  },
  {
    "path": "39-GKE-Ingress-Google-Managed-SSL/kube-manifests/05-Managed-Certificate.yaml",
    "content": "apiVersion: networking.gke.io/v1\nkind: ManagedCertificate\nmetadata:\n  name: managed-cert-for-ingress\nspec:\n  domains:\n    - demo1.kalyanreddydaida.com\n"
  },
  {
    "path": "40-GKE-Ingress-Google-Managed-SSL-Redirect/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress SSL Redirect\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress SSL Redirect\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Registered Domain using Google Cloud Domains\n4. The DNS name for which the SSL certificate will be created should already exist as a record set in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)\n\n\n## Step-01: Introduction\n- Google Managed Certificates for GKE Ingress\n- Ingress SSL\n- Ingress SSL Redirect (HTTP to HTTPS)\n\n## Step-02: 06-frontendconfig.yaml\n```yaml\napiVersion: networking.gke.io/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE\n```\n\n## Step-03: 04-Ingress-SSL.yaml\n- Add the Annotation `networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"`\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io/managed-certificates: managed-cert-for-ingress\n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"    \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80                   \n```\n\n## Step-04: Deploy kube-manifests and Verify\n- We didn't clean up the Kubernetes resources from the previous `Ingress SSL` demo.\n- We are going to reuse them here; compared to that demo we are just adding `06-frontendconfig.yaml`.\n```t\n# Deploy Kubernetes manifests\nkubectl apply -f kube-manifests\nObservation:\n1. 
Only \"my-frontend-config\" will be created; everything else stays unchanged\n\n### Sample Output\nKalyans-Mac-mini:38-GKE-Ingress-Google-Managed-SSL-Redirect kalyanreddy$ kubectl apply -f kube-manifests/\ndeployment.apps/app1-nginx-deployment unchanged\nservice/app1-nginx-nodeport-service unchanged\ndeployment.apps/app2-nginx-deployment unchanged\nservice/app2-nginx-nodeport-service unchanged\ndeployment.apps/app3-nginx-deployment unchanged\nservice/app3-nginx-nodeport-service unchanged\ningress.networking.k8s.io/ingress-ssl configured\nmanagedcertificate.networking.gke.io/managed-cert-for-ingress unchanged\nfrontendconfig.networking.gke.io/my-frontend-config created  \nKalyans-Mac-mini:38-GKE-Ingress-Google-Managed-SSL-Redirect kalyanreddy$ \n\n\n# List FrontendConfig\nkubectl get frontendconfig\n\n# Describe FrontendConfig\nkubectl describe frontendconfig my-frontend-config\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-ssl\n```\n\n\n## Step-05: Access Application\n```t\n# Important Note\nWait 2 to 3 minutes for the Load Balancer to be fully created and ready for use; otherwise we will get HTTP 502 errors\n\n# Access Application\nhttp://<DNS-DOMAIN-NAME>/app1/index.html\nhttp://<DNS-DOMAIN-NAME>/app2/index.html\nhttp://<DNS-DOMAIN-NAME>/\n\n# Note: Replace with the Domain Name registered in Cloud DNS\n# HTTP URLs: Should redirect to HTTPS URLs \nhttp://demo1.kalyanreddydaida.com/app1/index.html\nhttp://demo1.kalyanreddydaida.com/app2/index.html\nhttp://demo1.kalyanreddydaida.com/\n\n# HTTPS URLs\nhttps://demo1.kalyanreddydaida.com/app1/index.html\nhttps://demo1.kalyanreddydaida.com/app2/index.html\nhttps://demo1.kalyanreddydaida.com/\n```\n\n## Step-06: Clean Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Verify Load Balancer Deleted\nGo to Network Services -> Load Balancing -> No Load balancers should be present\n```\n
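\n## Optional: Verify the Redirect from CLI\n- While the resources are still deployed (before Step-06), you can verify the redirect from your terminal (a quick check; assumes DNS resolves and the managed certificate is ACTIVE):\n```t\n# Expect HTTP/1.1 301 Moved Permanently with a Location: https://... header\ncurl -I http://demo1.kalyanreddydaida.com/app1/index.html\n```\n"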
  },
  {
    "path": "40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\n  labels:\n    app: app2-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/04-Ingress-SSL.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io/managed-certificates: managed-cert-for-ingress\n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"    \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n"
  },
  {
    "path": "40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/05-Managed-Certificate.yaml",
    "content": "apiVersion: networking.gke.io/v1\nkind: ManagedCertificate\nmetadata:\n  name: managed-cert-for-ingress\nspec:\n  domains:\n    - demo1.kalyanreddydaida.com"
  },
  {
    "path": "40-GKE-Ingress-Google-Managed-SSL-Redirect/kube-manifests/06-frontendconfig.yaml",
    "content": "apiVersion: networking.gke.io/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE"
  },
  {
    "path": "41-GKE-Workload-Identity/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Workload Identity\ndescription: Implement GCP Google Kubernetes Engine GKE Workload Identity\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n1. Create GCP IAM Service Account\n2. Add IAM Roles to GCP IAM Service Account (add-iam-policy-binding)\n3. Create Kubernetes Namespace\n4. Create Kubernetes Service Account\n5. Associate GCP IAM Service Account with Kubernetes Service Account (gcloud iam service-accounts add-iam-policy-binding)\n6. Annotate Kubernetes Service Account with GCP IAM SA email Address (kubectl annotate serviceaccount)\n7. Create a Sample App with and without Kubernetes Service Account\n8. Test Workload Identity in GKE Cluster\n\n## Step-02: Verify if Workload Identity Setting is enabled for GKE Cluster\n- Go to Kubernetes Engine -> Clusters -> standard-cluster-private-1 -> DETAILS Tab\n- In Security -> Workload Identity -> SHOULD BE IN ENABLED STATE\n\n## Step-03: Create GCP IAM Service Account\n```t\n# List IAM Service Accounts\ngcloud iam service-accounts list\n\n# List Google Cloud Projects\ngcloud projects list\nObservation: \n1. Get the PROJECT_ID for your current project\n2. Replace GSA_PROJECT_ID with PROJECT_ID for your current project\n\n# Create GCP IAM Service Account\ngcloud iam service-accounts create GSA_NAME --project=GSA_PROJECT_ID\nGSA_NAME: the name of the new IAM service account.\nGSA_PROJECT_ID: the project ID of the Google Cloud project for your IAM service account.\nGSA_PROJECT_ID==PROJECT_ID\n\n# Replace GSA_NAME and GSA_PROJECT_ID\ngcloud iam service-accounts create wid-gcpiam-sa --project=kdaida123\n\n# List IAM Service Accounts\ngcloud iam service-accounts list\n```\n\n## Step-04: Add IAM Roles to GCP IAM Service Account\n- We are granting the `\"roles/compute.viewer\"` role to the IAM Service Account. \n- From a Kubernetes Pod, we are going to list the compute instances.\n- With the Google IAM Service Account bound to the Kubernetes Service Account, a Pod in the GKE cluster should be able to list the Google Compute Engine instances. 
\n```t\n# Add IAM Roles to GCP IAM Service Account\ngcloud projects add-iam-policy-binding PROJECT_ID \\\n    --member \"serviceAccount:GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com\" \\\n    --role \"ROLE_NAME\"\nPROJECT_ID: your Google Cloud project ID.\nGSA_NAME: the name of your IAM service account.\nGSA_PROJECT_ID: the project ID of the Google Cloud project of your IAM service account.\nGSA_PROJECT_ID==PROJECT_ID\nROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer.\n\n# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT_ID, ROLE_NAME\ngcloud projects add-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles/compute.viewer\" \n```\n\n## Step-05: Create Kubernetes Namespace and Service Account\n```t\n# Create Kubernetes Namespace\nkubectl create namespace <NAMESPACE>\nkubectl create namespace wid-kns\n\n# Create Service Account\nkubectl create serviceaccount <KSA_NAME>  --namespace <NAMESPACE>\nkubectl create serviceaccount wid-ksa  --namespace wid-kns\n```\n\n## Step-06: Associate GCP IAM Service Account with Kubernetes Service Account\n- Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts.\n- This binding allows the Kubernetes service account to act as the IAM service account.\n```t\n# Associate GCP IAM Service Account with Kubernetes Service Account\ngcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com \\\n    --role roles/iam.workloadIdentityUser \\\n    --member \"serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]\"\n\n# Replace GSA_NAME, GSA_PROJECT_ID, PROJECT_ID, NAMESPACE, KSA_NAME\ngcloud iam service-accounts add-iam-policy-binding wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com \\\n    --role roles/iam.workloadIdentityUser \\\n    --member \"serviceAccount:kdaida123.svc.id.goog[wid-kns/wid-ksa]\"\n```\n\n## Step-07: Annotate Kubernetes Service Account with GCP IAM SA email Address\n- Annotate the Kubernetes service account with the email address of the IAM service account.\n```t\n# Annotate Kubernetes Service Account with GCP IAM SA email Address\nkubectl annotate serviceaccount KSA_NAME \\\n    --namespace NAMESPACE \\\n    iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com\n\n# Replace KSA_NAME, NAMESPACE, GSA_NAME, GSA_PROJECT_ID\nkubectl annotate serviceaccount wid-ksa \\\n    --namespace wid-kns \\\n    iam.gke.io/gcp-service-account=wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\n\n# Describe Kubernetes Service Account\nkubectl describe sa wid-ksa -n wid-kns\n```\n\n## Step-08: 01-wid-demo-pod-without-sa.yaml\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: wid-demo-without-sa\n  namespace: wid-kns\nspec:\n  containers:\n  - image: google/cloud-sdk:slim\n    name: wid-demo-without-sa\n    command: [\"sleep\",\"infinity\"]\n  #serviceAccountName: wid-ksa\n  nodeSelector:\n    iam.gke.io/gke-metadata-server-enabled: \"true\"\n```\n
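\n- **Note:** Both demo Pods pin scheduling to Workload Identity enabled nodes via the `iam.gke.io/gke-metadata-server-enabled` nodeSelector. As an optional check (a quick sketch), you can list which nodes carry that label:\n```t\n# List nodes with the Workload Identity metadata-server label shown as a column\nkubectl get nodes -L iam.gke.io/gke-metadata-server-enabled\n```\n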
\n## Step-09: 02-wid-demo-pod-with-sa.yaml\n- **Important Note:** For Autopilot clusters, omit the nodeSelector field. Autopilot rejects this nodeSelector because all nodes use Workload Identity.\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: wid-demo-with-sa\n  namespace: wid-kns\nspec:\n  containers:\n  - image: google/cloud-sdk:slim\n    name: wid-demo-with-sa\n    command: [\"sleep\",\"infinity\"]\n  serviceAccountName: wid-ksa\n  nodeSelector:\n    iam.gke.io/gke-metadata-server-enabled: \"true\"\n```\n\n## Step-10: Deploy Kubernetes Manifests and Verify\n```t\n# Deploy kube-manifests\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl -n wid-kns get pods \n```\n\n## Step-11: Verify from Workload Identity without Service Account Pod\n```t\n# Connect to Pod\nkubectl -n wid-kns exec -it wid-demo-without-sa -- /bin/bash\n\n# Check which account the Pod is currently using\ngcloud auth list\nObservation: It chose the default account\n\n## Sample Output\nroot@wid-demo-without-sa:/# gcloud auth list\n    Credentialed Accounts\nACTIVE  ACCOUNT\n*       kdaida123.svc.id.goog\n\nTo set the active account, run:\n    $ gcloud config set account `ACCOUNT`\n\nroot@wid-demo-without-sa:/# \n\n\n# List Compute Instances from workload-identity-demo pod\ngcloud compute instances list\n\n## Sample Output\nroot@wid-demo-without-sa:/# gcloud compute instances list\nERROR: (gcloud.compute.instances.list) Some requests did not succeed:\n - Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.\n\nroot@wid-demo-without-sa:/# \n\n# Exit the container terminal\nexit\n```\n\n## Step-12: Verify from Workload Identity with Service Account Pod\n```t\n# Connect to Pod\nkubectl -n wid-kns exec -it wid-demo-with-sa -- /bin/bash\n\n# Check which account the Pod is currently using\ngcloud auth list\n\n## Sample Output\nroot@wid-demo-with-sa:/# gcloud auth list\n                 Credentialed Accounts\nACTIVE  ACCOUNT\n*       wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\n\nTo set the active account, run:\n    $ gcloud config set account `ACCOUNT`\n\nroot@wid-demo-with-sa:/# \n\n# List Compute Instances from workload-identity-demo pod\ngcloud compute instances list\n\n## Sample Output\nroot@wid-demo-with-sa:/# gcloud compute instances list\nNAME                                                 ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP  STATUS\ngke-standard-cluster-priva-new-pool-2-7c9415e8-5cds  us-central1-c  g1-small      true         10.128.15.235               RUNNING\ngke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz  us-central1-c  g1-small      true         10.128.0.8                  RUNNING\ngke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6  us-central1-c  g1-small      true         10.128.0.2                  RUNNING\nroot@wid-demo-with-sa:/# \n```\n
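\n- **Optional:** from inside the `wid-demo-with-sa` Pod you can also ask the GKE metadata server directly which identity it serves (a sketch; assumes `curl` is available in the `google/cloud-sdk` image):\n```t\n# Query the metadata server for the active service account email\ncurl -s -H \"Metadata-Flavor: Google\" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email\n```\n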
\n## Step-13: Negative Usecase: Test access to Cloud DNS Record Sets\n```t\n# gcloud list DNS Records\ngcloud dns record-sets list --zone=kalyanreddydaida-com\nObservation:\n1. GCP IAM Service Account \"wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" doesn't have any roles assigned related to Cloud DNS, so we get HTTP 403\n\n## Sample Output\nroot@wid-demo-with-sa:/# gcloud dns record-sets list --zone=kalyanreddydaida-com\nERROR: (gcloud.dns.record-sets.list) HTTPError 403: Forbidden\nroot@wid-demo-with-sa:/# \n\n# Exit the container terminal\nexit\n```\n\n## Step-14: Give Cloud DNS Admin Role to GCP IAM Service Account wid-gcpiam-sa\n```t\n# Add IAM Roles to GCP IAM Service Account\ngcloud projects add-iam-policy-binding PROJECT_ID \\\n    --member \"serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com\" \\\n    --role \"ROLE_NAME\"\nPROJECT_ID: your Google Cloud project ID.\nGSA_NAME: the name of your IAM service account.\nGSA_PROJECT: the project ID of the Google Cloud project of your IAM service account.\nROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer.\nGSA_PROJECT==PROJECT_ID\n\n# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME\ngcloud projects add-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles/dns.admin\" \n```\n\n## Step-15: Verify from Workload Identity with Service Account Pod\n```t\n# Connect to Pod\nkubectl -n wid-kns exec -it wid-demo-with-sa -- /bin/bash\n\n# List Cloud DNS Record Sets\ngcloud dns record-sets list --zone=kalyanreddydaida-com\n\n### Sample Output\nroot@wid-demo-with-sa:/# gcloud dns record-sets list --zone=kalyanreddydaida-com\nNAME                         TYPE  TTL    DATA\nkalyanreddydaida.com.        NS    21600  ns-cloud-a1.googledomains.com.,ns-cloud-a2.googledomains.com.,ns-cloud-a3.googledomains.com.,ns-cloud-a4.googledomains.com.\nkalyanreddydaida.com.        SOA   21600  ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300\ndemo1.kalyanreddydaida.com.  A     300    34.120.32.120\nroot@wid-demo-with-sa:/# \n\n\n# List Compute Instances from workload-identity-demo pod\ngcloud compute instances list\n\n## Sample Output\nroot@wid-demo-with-sa:/# gcloud compute instances list\nNAME                                                 ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP  STATUS\ngke-standard-cluster-priva-new-pool-2-7c9415e8-5cds  us-central1-c  g1-small      true         10.128.15.235               RUNNING\ngke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz  us-central1-c  g1-small      true         10.128.0.8                  RUNNING\ngke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6  us-central1-c  g1-small      true         10.128.0.2                  RUNNING\nroot@wid-demo-with-sa:/# \n\n# Exit the container terminal \nexit\n```\n\n\n## Step-16: Clean-Up Kubernetes Resources\n```t\n# Delete Kubernetes Pods\nkubectl delete -f kube-manifests\n\n# List Namespaces\nkubectl get ns\n\n# Delete Kubernetes Namespace \nkubectl delete ns wid-kns\nObservation:\n1. 
Kubernetes Service Account \"wid-ksa\" will get automatically deleted when that namespace is deleted\n```\n\n## Step-17: Clean-Up GCP IAM Resources\n```t\n# List GCP IAM Service Accounts\ngcloud iam service-accounts list\n\n# Remove IAM Roles from GCP IAM Service Account\ngcloud projects remove-iam-policy-binding PROJECT_ID \\\n    --member \"serviceAccount:GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com\" \\\n    --role \"ROLE_NAME\"\nPROJECT_ID: your Google Cloud project ID.\nGSA_NAME: the name of your IAM service account.\nGSA_PROJECT_ID: the project ID of the Google Cloud project of your IAM service account.\nGSA_PROJECT_ID==PROJECT_ID\nROLE_NAME: the IAM role to remove from your service account, like roles/spanner.viewer.\n\n# REMOVE ROLE: COMPUTE VIEWER: Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME\ngcloud projects remove-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles/compute.viewer\" \n\n# REMOVE ROLE: DNS ADMIN: Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME\ngcloud projects remove-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles/dns.admin\" \n\n# Delete the GCP IAM Service Account we have created\ngcloud iam service-accounts delete wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com --project=kdaida123\n```\n\n## References\n- [GKE - Use Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)"
  },
  {
    "path": "41-GKE-Workload-Identity/kube-manifests/01-wid-demo-pod-without-sa.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: wid-demo-without-sa\n  namespace: wid-kns\nspec:\n  containers:\n  - image: google/cloud-sdk:slim\n    name: wid-demo-without-sa\n    command: [\"sleep\",\"infinity\"]\n  #serviceAccountName: wid-ksa\n  nodeSelector:\n    iam.gke.io/gke-metadata-server-enabled: \"true\""
  },
  {
    "path": "41-GKE-Workload-Identity/kube-manifests/02-wid-demo-pod-with-sa.yaml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: wid-demo-with-sa\n  namespace: wid-kns\nspec:\n  containers:\n  - image: google/cloud-sdk:slim\n    name: wid-demo-with-sa\n    command: [\"sleep\",\"infinity\"]\n  serviceAccountName: wid-ksa\n  nodeSelector:\n    iam.gke.io/gke-metadata-server-enabled: \"true\""
  },
  {
    "path": "42-GKE-ExternalDNS-Install/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE External DNS Install\ndescription: Implement GCP Google Kubernetes Engine GKE External DNS Install\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n## Step-01: Introduction\n1. Create GCP IAM Service Account: external-dns-gsa\n2. Add IAM Roles to GCP IAM Service Account (add-iam-policy-binding)\n3. Create Kubernetes Namespace: external-dns-ns\n4. Create Kubernetes Service Account: external-dns-ksa\n5. Associate GCP IAM Service Account with Kubernetes Service Account (gcloud iam service-accounts add-iam-policy-binding)\n6. Annotate Kubernetes Service Account with GCP IAM SA email Address (kubectl annotate serviceaccount)\n7. Install Helm CLI on your local desktop (if not installed)\n8. Install External-DNS using Helm\n9. Verify External-DNS Logs\n10. Additional Reference: Install [ExternalDNS Controller using Helm](https://github.com/kubernetes-sigs/external-dns)\n\n## Step-03: Create GCP IAM Service Account\n```t\n# List IAM Service Accounts\ngcloud iam service-accounts list\n\n# Create GCP IAM Service Account\ngcloud iam service-accounts create GSA_NAME --project=GSA_PROJECT\nGSA_NAME: the name of the new IAM service account.\nGSA_PROJECT: the project ID of the Google Cloud project for your IAM service account.\n\n# Replace GSA_NAME and GSA_PROJECT\ngcloud iam service-accounts create external-dns-gsa --project=kdaida123\n\n# List IAM Service Accounts\ngcloud iam service-accounts list\n```\n\n## Step-04: Add IAM Roles to GCP IAM Service Account\n```t\n# Add IAM Roles to GCP IAM Service Account\ngcloud projects add-iam-policy-binding PROJECT_ID \\\n    --member \"serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com\" \\\n    --role \"ROLE_NAME\"\nPROJECT_ID: your Google Cloud project ID.\nGSA_NAME: the name of your IAM service account.\nGSA_PROJECT: the project ID of the Google Cloud project of your IAM service account.\nROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer.\n\n# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME\ngcloud projects add-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:external-dns-gsa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles/dns.admin\" \n```\n\n## Step-05: Create Kubernetes Namespace and Kubernetes Service Account\n```t\n# Create Kubernetes Namespace\nkubectl create namespace <NAMESPACE>\nkubectl create namespace external-dns-ns\n\n# List Namespaces\nkubectl get ns\n\n# Create Service Account\nkubectl create serviceaccount <KSA_NAME>  --namespace <NAMESPACE>\nkubectl create serviceaccount external-dns-ksa  --namespace external-dns-ns\n\n# List Service Accounts\nkubectl -n external-dns-ns get sa\n```\n\n## Step-06: Associate GCP IAM Service Account with Kubernetes Service Account\n- Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts.\n- This binding allows the Kubernetes service account to act as the IAM service account.\n```t\n# Associate GCP IAM Service Account with Kubernetes Service Account\ngcloud iam service-accounts add-iam-policy-binding \\\n
GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \\\n    --role roles/iam.workloadIdentityUser \\\n    --member \"serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]\"\n\n# Replace GSA_NAME, GSA_PROJECT, PROJECT_ID, NAMESPACE, KSA_NAME\ngcloud iam service-accounts add-iam-policy-binding external-dns-gsa@kdaida123.iam.gserviceaccount.com \\\n    --role roles/iam.workloadIdentityUser \\\n    --member \"serviceAccount:kdaida123.svc.id.goog[external-dns-ns/external-dns-ksa]\"\n```\n\n## Step-07: Annotate Kubernetes Service Account with GCP IAM SA email Address\n- Annotate the Kubernetes service account with the email address of the IAM service account.\n```t\n# Annotate Kubernetes Service Account with GCP IAM SA email Address\nkubectl annotate serviceaccount KSA_NAME \\\n    --namespace NAMESPACE \\\n    iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com\n\n# Replace KSA_NAME, NAMESPACE, GSA_NAME, GSA_PROJECT\nkubectl annotate serviceaccount external-dns-ksa \\\n    --namespace external-dns-ns \\\n    iam.gke.io/gcp-service-account=external-dns-gsa@kdaida123.iam.gserviceaccount.com\n\n# Describe Kubernetes Service Account\nkubectl -n external-dns-ns describe sa external-dns-ksa \n```\n\n## Step-08: Install Helm Client on Local Desktop\n- [Install Helm](https://helm.sh/docs/intro/install/)\n```t\n# Install Helm\nbrew install helm\n\n# Verify Helm version\nhelm version\n```\n\n## Step-09: Review external-dns values.yaml\n- [external-dns values.yaml](https://github.com/kubernetes-sigs/external-dns/blob/master/charts/external-dns/values.yaml)\n- [external-dns Configuration](https://github.com/kubernetes-sigs/external-dns/tree/master/charts/external-dns#configuration)\n\n\n## Step-10: Review external-dns Deployment Configs\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"get\", \"watch\", \"list\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: k8s.gcr.io/external-dns/external-dns:v0.8.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=external-dns-test.gcp.zalan.do # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones\n        - --provider=google\n#        - --google-project=zalando-external-dns-test # Use this to specify a project different from the one external-dns is running inside\n        - --google-zone-visibility=private # Use this to filter to only zones with this visibility. Set to either 'public' or 'private'. 
Omitting will match public and private zones\n        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization\n        - --registry=txt\n        - --txt-owner-id=my-identifier\n```\n\n## Step-11: Install external-dns using Helm\n```t\n# Add external-dns repo to Helm\nhelm repo add external-dns https://kubernetes-sigs.github.io/external-dns/\n\n# Install Helm Chart\nhelm upgrade --install external-dns external-dns/external-dns \\\n    --set provider=google \\\n    --set policy=sync \\\n    --set google-zone-visibility=public \\\n    --set txt-owner-id=k8s \\\n    --set serviceAccount.create=false \\\n    --set serviceAccount.name=external-dns-ksa \\\n    -n external-dns-ns\n    \n# Optional Setting (Important Note: will make ExternalDNS see only the Cloud DNS zones matching provided domain, omit to process all available Cloud DNS zones)\n--set domain-filter=kalyanreddydaida.com \\\n```\n\n## Step-12: Verify external-dns deployment\n```t\n# List Helm \nhelm  list -n external-dns-ns\n\n# List Kubernetes Service Account\nkubectl -n external-dns-ns get sa\n\n# Describe Kubernetes Service Account\nkubectl -n external-dns-ns describe sa external-dns-ksa\n\n# List All resources from default Namespace\nkubectl -n external-dns-ns get all\n\n# List pods (external-dns pod should be in running state)\nkubectl -n external-dns-ns get pods\n\n# Verify Deployment by checking logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n```\n\n## References\n- https://github.com/kubernetes-sigs/external-dns/tree/master/charts/external-dns\n- https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/gke.md\n\n## External-DNS Logs from Reference\n\n```log\nW0624 07:14:15.829747   14199 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.\nTo learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke\nError from server (BadRequest): container \"external-dns\" in pod \"external-dns-6f49549d96-2jd5q\" is waiting to start: ContainerCreating\nKalyans-Mac-mini:48-GKE-Ingress-IAP kalyanreddy$ kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\nW0624 07:14:23.520269   14201 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.\nTo learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke\nW0624 07:14:24.512312   14203 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.\nTo learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"config: {APIServerURL: KubeConfig: RequestTimeout:30s DefaultTargets:[] ContourLoadBalancerService:heptio-contour/contour GlooNamespace:gloo-system SkipperRouteGroupVersion:zalando.org/v1 Sources:[service ingress] Namespace: AnnotationFilter: LabelFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false IgnoreIngressTLSSpec:false IgnoreIngressRulesSpec:false Compatibility: PublishInternal:false PublishHostIP:false AlwaysPublishNotReadyAddresses:false ConnectorSourceServer:localhost:8080 Provider:google 
GoogleProject: GoogleBatchChangeSize:1000 GoogleBatchChangeInterval:1s GoogleZoneVisibility: DomainFilter:[] ExcludeDomains:[] RegexDomainFilter: RegexDomainExclusion: ZoneNameFilter:[] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AWSPreferCNAME:false AWSZoneCacheDuration:0s AWSSDServiceCleanup:false AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: AzureSubscriptionID: AzureUserAssignedIdentityClientID: BluecatDNSConfiguration: BluecatConfigFile:/etc/kubernetes/bluecat.json BluecatDNSView: BluecatGatewayHost: BluecatRootZone: BluecatDNSServerName: BluecatDNSDeployType:no-deploy BluecatSkipTLSVerify:false CloudflareProxied:false CloudflareZonesPerPage:50 CoreDNSPrefix:/skydns/ RcodezeroTXTEncrypt:false AkamaiServiceConsumerDomain: AkamaiClientToken: AkamaiClientSecret: AkamaiAccessToken: AkamaiEdgercPath: AkamaiEdgercSection: InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 InfobloxFQDNRegEx: InfobloxCreatePTR:false InfobloxCacheDuration:0 DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] OVHEndpoint:ovh-eu OVHApiRateLimit:20 PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:sync Registry:txt TXTOwnerID:default TXTPrefix: TXTSuffix: Interval:1m0s MinEventSyncInterval:5s Once:false DryRun:false UpdateEvents:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s TXTWildcardReplacement: ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136GSSTSIG:false RFC2136KerberosRealm: RFC2136KerberosUsername: RFC2136KerberosPassword: RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false RFC2136MinTTL:0s RFC2136BatchChangeSize:50 NS1Endpoint: NS1IgnoreSSL:false NS1MinTTLSeconds:0 TransIPAccountName: TransIPPrivateKeyFile: DigitalOceanAPIPageSize:50 ManagedDNSRecordTypes:[A CNAME] GoDaddyAPIKey: GoDaddySecretKey: GoDaddyTTL:0 GoDaddyOTE:false OCPRouterName:}\"\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"Instantiating new Kubernetes client\"\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"Using inCluster-config based on serviceaccount-token\"\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"Created Kubernetes client https://10.104.0.1:443\"\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"Google project auto-detected: kdaida123\"\ntime=\"2022-06-24T01:44:23Z\" level=error msg=\"Get \\\"https://dns.googleapis.com/dns/v1/projects/kdaida123/managedZones?alt=json&prettyPrint=false\\\": compute: Received 403 `Unable to generate access token; IAM returned 403 Forbidden: The caller does not have permission\\nThis error could be caused by a missing IAM policy binding on the target IAM service account.\\nFor more information, refer to the Workload Identity documentation:\\n\\thttps://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to\\n\\n`\"\n\n```"
  },
  {
    "path": "43-GKE-ExternalDNS-Ingress-Demo/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress with External DNS \ndescription: Implement GCP Google Kubernetes Engine GKE Ingress with External DNS\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. External DNS Controller Installed\n\n## Step-01: Introduction\n- Ingress with External DNS\n- We are going to use the Annotation `external-dns.alpha.kubernetes.io/hostname` in the Ingress Service.\n- DNS record sets will be automatically added to Google Cloud DNS by the external-dns controller when the Ingress Service is deployed\n\n\n## Step-02: 01-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5               \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80   \n```\n\n## Step-03: 02-ingress-external-dns.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-externaldns-demo\n  annotations:\n    # If the class annotation is not specified it defaults to \"gce\".\n    # gce: external load balancer\n    # gce-internal: internal load balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # External DNS - For creating a Record Set in Google Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: ingressextdns101.kalyanreddydaida.com\nspec:\n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                  \n```\n\n## Step-04: Deploy Kubernetes Manifests and Verify\n```t\n# Deploy Kubernetes Manifests \nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# Describe Ingress Service\nkubectl describe ingress ingress-externaldns-demo\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. 
Verify Record sets; the DNS Name we added in the Ingress Service should be present \n\n# Access Application\nhttp://<DNS-Name>\nhttp://ingressextdns101.kalyanreddydaida.com\n```\n\n## Step-05: Delete kube-manifests\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests/\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com \n2. Verify Record sets; the DNS Name we added in the Ingress Service should no longer be present (already deleted) \n```\n
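\n## Optional: Verify the Record from CLI\n- While the Ingress from Step-04 is still deployed, you can confirm the record set from your terminal (a quick check; zone and record names are the ones used in this demo):\n```t\n# List the record created by external-dns\ngcloud dns record-sets list --zone=kalyanreddydaida-com --name=ingressextdns101.kalyanreddydaida.com.\n\n# Or resolve it directly\ndig +short ingressextdns101.kalyanreddydaida.com\n```\n"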
  },
  {
    "path": "43-GKE-ExternalDNS-Ingress-Demo/kube-manifests/01-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5               \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "43-GKE-ExternalDNS-Ingress-Demo/kube-manifests/02-ingress-external-dns.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-externaldns-demo\n  annotations:\n    # If the class annotation is not specified it defaults to \"gce\".\n    # gce: external load balancer\n    # gce-internal: internal load balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # External DNS - For creating a Record Set in Google Cloud - Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: ingressextdns101.kalyanreddydaida.com\nspec:\n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                  \n      \n"
  },
  {
    "path": "44-GKE-ExternalDNS-Service-Demo/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Service with External DNS \ndescription: Implement GCP Google Kubernetes Engine GKE Service with External DNS\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. External DNS Controller Installed\n\n## Step-01: Introduction\n- Kubernetes Service of Type Load Balancer with External DNS\n- We are going to use the Annotation `external-dns.alpha.kubernetes.io/hostname` in the Kubernetes Service.\n- DNS record sets will be automatically added to Google Cloud DNS by the external-dns controller when the Kubernetes Service is deployed\n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80      \n```\n\n## Step-03: 02-kubernetes-loadbalancer-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\n  annotations:\n    # External DNS - For creating a Record Set in Google Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: extdns-k8s-svc-demo.kalyanreddydaida.com\nspec:\n  type: LoadBalancer # ClusterIP, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-04: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy \n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets; the DNS Name we added in the Kubernetes Service should be present \n\n# Access Application\nhttp://<DNS-Name>\nhttp://extdns-k8s-svc-demo.kalyanreddydaida.com\n```\n\n\n## Step-05: Delete kube-manifests\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests/\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets; the DNS Name we added in the Kubernetes Service should no longer be present (already deleted) \n```\n"
  },
  {
    "path": "44-GKE-ExternalDNS-Service-Demo/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n    "
  },
  {
    "path": "44-GKE-ExternalDNS-Service-Demo/kube-manifests/02-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\n  annotations:\n    # External DNS - For creating a Record Set in Google Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: extdns-k8s-svc-demo.kalyanreddydaida.com\nspec:\n  type: LoadBalancer # ClusterIP, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "45-GKE-Ingress-NameBasedVhost-Routing/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. External DNS Controller Installed\n\n## Step-01: Introduction\n1. Requests will be routed by the Load Balancer based on DNS Names\n2. `app1-ingress.kalyanreddydaida.com` will send traffic to `App1 Pods`\n3. `app2-ingress.kalyanreddydaida.com` will send traffic to `App2 Pods`\n4. `default-ingress.kalyanreddydaida.com` will send traffic to `App3 Pods`\n\n\n## Step-02: Review kube-manifests\n1. 01-Nginx-App1-Deployment-and-NodePortService.yaml\n2. 02-Nginx-App2-Deployment-and-NodePortService.yaml\n3. 03-Nginx-App3-Deployment-and-NodePortService.yaml\n4. NO CHANGES TO THE ABOVE 3 FILES - we are reusing the standard Deployment and NodePort Service manifests from the previous Context Path based Routing demo\n\n\n## Step-03: 04-Ingress-NameBasedVHost-Routing.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-namebasedvhost-routing\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io/managed-certificates: managed-cert-for-ingress\n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: default-ingress.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80     \n  rules:\n    - host: app1-ingress.kalyanreddydaida.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2-ingress.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n```\n\n## Step-04: 05-Managed-Certificate.yaml\n```yaml\napiVersion: networking.gke.io/v1\nkind: ManagedCertificate\nmetadata:\n  name: managed-cert-for-ingress\nspec:\n  domains:\n    - default101-ingress.kalyanreddydaida.com\n    - app101-ingress.kalyanreddydaida.com\n    - app201-ingress.kalyanreddydaida.com\n```\n\n## Step-05: 06-frontendconfig.yaml\n```yaml\napiVersion: networking.gke.io/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE\n```\n\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List 
Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets; the DNS Names we added in the Ingress Service should be present \n\n# List FrontendConfigs\nkubectl get frontendconfig\n\n# List Managed Certificates\nkubectl get managedcertificate\n\n# Describe Managed Certificates\nkubectl describe managedcertificate managed-cert-for-ingress\nObservation:\n1. Wait for the Domain Status to change from \"Provisioning\" to \"ACTIVE\"\n2. It might take up to 60 minutes to provision Google Managed SSL Certificates\n```\n\n## Step-07: Access Application\n```t\n# Access Application\nhttp://app1-ingress.kalyanreddydaida.com/app1/index.html\nhttp://app2-ingress.kalyanreddydaida.com/app2/index.html\nhttp://default-ingress.kalyanreddydaida.com\n\nObservation:\n1. All 3 URLs should work as expected. In your case, replace the domain name with your own for testing\n2. HTTP to HTTPS redirect should work\n```\n\n## Step-08: Access Application - Negative Usecase Testing\n```t\n# Access Application - App1 DNS Name\nhttp://app1-ingress.kalyanreddydaida.com/app2/index.html   \nObservation: SHOULD FAIL - Pod App1 doesn't have the app2 context path (app2 folder) - 404 ERROR\n\n# Access Application - App2 DNS Name\nhttp://app2-ingress.kalyanreddydaida.com/app1/index.html\nObservation: SHOULD FAIL - Pod App2 doesn't have the app1 context path (app1 folder) - 404 ERROR\n\n# Access Application - App3 or Default DNS Name\nhttp://default-ingress.kalyanreddydaida.com/app1/index.html\nObservation: SHOULD FAIL - Pod App3 doesn't have the app1 context path (app1 folder) - 404 ERROR\n```\n\n## Step-09: Clean-Up\n- DON'T DELETE - WE ARE GOING TO USE THESE KUBERNETES RESOURCES IN THE NEXT DEMO RELATED TO SSL-POLICY\n\n## References\n- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)\n
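\n## Optional: Test Name-based Routing with curl\n- While the demo is deployed, you can exercise the host-based rules without a browser (a sketch; expect a 301 to HTTPS while the FrontendConfig redirect is active):\n```t\n# Get the Ingress external IP\nkubectl get ingress ingress-namebasedvhost-routing -o jsonpath='{.status.loadBalancer.ingress[0].ip}'\n\n# Send a request with an explicit Host header (replace <INGRESS-IP> with the IP from above)\ncurl -I -H \"Host: app1-ingress.kalyanreddydaida.com\" http://<INGRESS-IP>/app1/index.html\n```\n"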
  },
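Optionally, the host rules above can be sanity-checked before DNS propagates by calling the load balancer address directly with an explicit Host header; a sketch, where `<EXTERNAL-IP>` is a placeholder for the address reported by `kubectl get ingress`:
```t
# Fetch the Ingress external IP
kubectl get ingress ingress-namebasedvhost-routing -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Exercise each host rule against the same IP
curl -H "Host: app1-ingress.kalyanreddydaida.com" http://<EXTERNAL-IP>/app1/index.html
curl -H "Host: app2-ingress.kalyanreddydaida.com" http://<EXTERNAL-IP>/app2/index.html

# No matching Host header -> request falls through to the default backend (App3)
curl http://<EXTERNAL-IP>/
```
Once the FrontendConfig redirect is active, expect a 301 to HTTPS on these plain HTTP calls.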
  {
    "path": "45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\n  labels:\n    app: app2-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/04-Ingress-NameBasedVHost-Routing.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-namebasedvhost-routing\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io/managed-certificates: managed-cert-for-ingress\n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: default-ingress.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80     \n  rules:\n    - host: app1-ingress.kalyanreddydaida.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2-ingress.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n\n"
  },
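The `kubernetes.io/ingress.global-static-ip-name` annotation in this manifest assumes a reserved global static IP named `gke-ingress-extip1` already exists in the project; a quick check, as a sketch:
```t
# Verify the reserved global static IP referenced by the Ingress annotation
gcloud compute addresses describe gke-ingress-extip1 --global --format="get(address,status)"
```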
  {
    "path": "45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/05-Managed-Certificate.yaml",
    "content": "apiVersion: networking.gke.io/v1\nkind: ManagedCertificate\nmetadata:\n  name: managed-cert-for-ingress\nspec:\n  domains:\n    - default-ingress.kalyanreddydaida.com\n    - app1-ingress.kalyanreddydaida.com\n    - app2-ingress.kalyanreddydaida.com"
  },
  {
    "path": "45-GKE-Ingress-NameBasedVhost-Routing/kube-manifests/06-frontendconfig.yaml",
    "content": "apiVersion: networking.gke.io/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE"
  },
  {
    "path": "46-GKE-Ingress-SSL-Policy/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. External DNS Controller Installed\n\n## Step-01: Introduction\n- Implement SSL Policies in GCP and use it for Ingress Service\n\n## Step-02: Create an SSL policy with a Google-managed profile\n- [Create SSL Policies](https://cloud.google.com/load-balancing/docs/use-ssl-policies#gcloud)\n```t\n# List Available Features\ngcloud compute ssl-policies list-available-features\n\n# List SSL Policies\ngcloud compute ssl-policies list\n\n# Create an SSL policy with a Google-managed profile\ngcloud compute ssl-policies create SSL_POLICY_NAME \\\n    --profile COMPATIBLE | MODERN | RESTRICTED   \\\n    --min-tls-version 1.0 | 1.1 | 1.2\n\n# Replace Values  \ngcloud compute ssl-policies create gke-ingress-ssl-policy --profile MODERN --min-tls-version 1.0  \n\n# List SSL Policies\ngcloud compute ssl-policies list\n\n# Verify using Google Cloud Console\nGo to Network Security -> SSL Policies -> gke-ingress-ssl-policy\n```\n\n## Step-03: Review kube-manifests\n1. 01-Nginx-App1-Deployment-and-NodePortService.yaml\n2. 02-Nginx-App2-Deployment-and-NodePortService.yaml\n3. 03-Nginx-App3-Deployment-and-NodePortService.yaml\n4. 04-Ingress-NameBasedVHost-Routing.yaml\n5. 05-Managed-Certificate.yaml\n4. NO CHANGES TO ABOVE 5 files - same as previous demo\n\n## Step-04: FrontendConfig\n```yaml\napiVersion: networking.gke.io/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  # HTTP to HTTPS Redirect\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE\n  # SSL Policy\n  sslPolicy: gke-ingress-ssl-policy    \n```\n\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n### Sample Output\nKalyans-Mac-mini:44-GKE-Ingress-SSL-Policy kalyanreddy$ kubectl apply -f kube-manifests\ndeployment.apps/app1-nginx-deployment unchanged\nservice/app1-nginx-nodeport-service unchanged\ndeployment.apps/app2-nginx-deployment unchanged\nservice/app2-nginx-nodeport-service unchanged\ndeployment.apps/app3-nginx-deployment unchanged\nservice/app3-nginx-nodeport-service unchanged\ningress.networking.k8s.io/ingress-namebasedvhost-routing unchanged\nmanagedcertificate.networking.gke.io/managed-cert-for-ingress unchanged\nfrontendconfig.networking.gke.io/my-frontend-config configured   ----> CONFGIURED\nKalyans-Mac-mini:44-GKE-Ingress-SSL-Policy kalyanreddy$ \n\n# Verify Load Balancer Settings\nGo to Network Services -> Load Balancing -> Load Balancer -> Settings\n```\n\n## Step-06: Dont Clean-Up\n- Dont Clean-Up, We are going to use it in next section.\n- To avoid delay of 1 hour for creating managed certificates, we will re-use same configs which are already created.\n\n## References\n- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)\n- [SSL Policy](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#ssl)\n\n\n\n"
  },
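To confirm the policy actually took effect on the HTTPS frontend, one option is to probe the endpoint and inspect the negotiated protocol; a sketch, assuming the domain resolves and the managed certificate is ACTIVE (replace the host with your own domain):
```t
# Inspect the negotiated TLS protocol and cipher against the MODERN policy
openssl s_client -connect app1-ingress.kalyanreddydaida.com:443 </dev/null 2>/dev/null | grep -E "Protocol|Cipher"

# Describe the policy itself to review its profile and minimum TLS version
gcloud compute ssl-policies describe gke-ingress-ssl-policy
```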
  {
    "path": "46-GKE-Ingress-SSL-Policy/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "46-GKE-Ingress-SSL-Policy/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\n  labels:\n    app: app2-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "46-GKE-Ingress-SSL-Policy/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "46-GKE-Ingress-SSL-Policy/kube-manifests/04-Ingress-NameBasedVHost-Routing.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-namebasedvhost-routing\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io/managed-certificates: managed-cert-for-ingress\n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: default-ingress.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80     \n  rules:\n    - host: app1-ingress.kalyanreddydaida.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2-ingress.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n\n"
  },
  {
    "path": "46-GKE-Ingress-SSL-Policy/kube-manifests/05-Managed-Certificate.yaml",
    "content": "apiVersion: networking.gke.io/v1\nkind: ManagedCertificate\nmetadata:\n  name: managed-cert-for-ingress\nspec:\n  domains:\n    - default-ingress.kalyanreddydaida.com\n    - app1-ingress.kalyanreddydaida.com\n    - app2-ingress.kalyanreddydaida.com"
  },
  {
    "path": "46-GKE-Ingress-SSL-Policy/kube-manifests/06-frontendconfig.yaml",
    "content": "apiVersion: networking.gke.io/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  # HTTP to HTTPS Redirect\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE\n  # SSL Policy\n  sslPolicy: gke-ingress-ssl-policy    "
  },
  {
    "path": "47-GKE-Ingress-with-Identity-Aware-Proxy/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n3.  External DNS Controller should be installed and ready to use\n\n## Step-01: Introduction\n1. Configuring the OAuth consent screen\n2. Creating OAuth credentials\n3. Setting up IAP access\n4. Creating a Kubernetes Secret with OAuth Client ID Credentials\n5. Adding an iap block to the BackendConfig\n\n## Step-02: Create basic google gmail users (if not present)\n- I have created below two users for this IAP Demo\n    - gcpuser901@gmail.com\n    - gcpuser902@gmail.com\n\n## Step-03: Enabling IAP for GKE\n- [Enabling IAP for GKE](https://cloud.google.com/iap/docs/enabling-kubernetes-howto)\n- We will follow steps from above documentation link to create below 2 items\n    1. [Configuring the OAuth consent screen](https://cloud.google.com/iap/docs/enabling-kubernetes-howto#oauth-configure)\n    2. [Creating OAuth credentials](https://cloud.google.com/iap/docs/enabling-kubernetes-howto#oauth-credentials)\n\n\n```t\n# Make a note of Client ID and Client Secret\nClient ID: 1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com\nClient Secret: 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\n\n# Template\nhttps://iap.googleapis.com/v1/oauth/clientIds/CLIENT_ID:handleRedirect\n\n# Replace CLIENT_ID (Update URL in OAuth 2.0 Client IDs -> gke-ingress-iap-demo-oauth-creds)\nhttps://iap.googleapis.com/v1/oauth/clientIds/1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com:handleRedirect\n```\n\n## Step-04: Creating a Kubernetes Secret\n```t\n# Make a note of Client ID and Client Secret\nClient ID: 1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com\nClient Secret: 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\n\n# List Kubernetes Secrets (Default Namespace)\nkubectl get secrets\n\n# Create Kubernetes Secret\nkubectl create secret generic my-secret --from-literal=client_id=client_id_key \\\n    --from-literal=client_secret=client_secret_key\n\n# Replace  client_id_key, client_secret_key\nkubectl create secret generic my-secret --from-literal=client_id=1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com \\\n    --from-literal=client_secret=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\n\n# List Kubernetes Secrets (Default Namespace)\nkubectl get secrets\n```\n\n## Step-05: Adding an iap block to the BackendConfig\n- **File Name:** 07-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  iap:\n    enabled: true\n    oauthclientCredentials:\n      secretName: my-secret    \n```\n\n## Step-06: Review Kubenertes Manifests\n- All 3 Node Port Services will have annotation added `cloud.google.com/backend-config`\n- 01-Nginx-App1-Deployment-and-NodePortService.yaml\n- 02-Nginx-App2-Deployment-and-NodePortService.yaml\n- 03-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: v1\nkind: 
Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'      \nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n```\n\n## Step-07: Review Kubernetes Manifests\n- No changes to the below YAML files from the previous section\n- 04-Ingress-NameBasedVHost-Routing.yaml\n- 05-Managed-Certificate.yaml\n- 06-frontendconfig.yaml\n\n\n## Step-08: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\nObservation: \n1. All other configs were already created as part of the previous demo; only the backendconfig change will be applied now. \n\n# List Deployments\nkubectl get deploy \n\n# List Pods\nkubectl get pods \n\n# List Services\nkubectl get svc \n\n# List Ingress Services\nkubectl get ingress \n\n# List Frontend Configs\nkubectl get frontendconfig \n\n# List Backend Configs\nkubectl get backendconfig\n```\n\n## Step-09: Setting up IAP access\n- [Setting up IAP access](https://cloud.google.com/iap/docs/enabling-kubernetes-howto#iap-access)\n- Add User `gcpuser901@gmail.com` as Principal.\n\n## Step-10: Access Application\n```t\n# Access Application\nhttp://app1-ingress.kalyanreddydaida.com/app1/index.html\nhttp://app2-ingress.kalyanreddydaida.com/app2/index.html\nhttp://default-ingress.kalyanreddydaida.com\n\nUsername: gcpuser901@gmail.com (In your case it might be a different user you added as part of Step-09)\nPassword: XXXXXXXXXX\n\nObservation:\n1. All 3 URLs will redirect to Google Authentication. Provide credentials to login\n2. All 3 URLs should work as expected. In your case, replace YOUR_DOMAIN name for testing\n3. HTTP to HTTPS redirect should work\n```\n\n## Step-11: Negative Usecase: Access using a User which is not added as Principal\n```t\n# Access Application\nhttp://app1-ingress.kalyanreddydaida.com/app1/index.html\nhttp://app2-ingress.kalyanreddydaida.com/app2/index.html\nhttp://default-ingress.kalyanreddydaida.com\n\nUsername: gcpuser902@gmail.com (user which is not added as Principal as part of Step-09)\nPassword: XXXXXXXXXX\n\nObservation:\n1. It should fail; the application should not be accessible. \n```\n\n## Step-12: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Delete Kubernetes Secret\nkubectl delete secret my-secret\n\n# Delete OAuth Credentials\nGo to API & Services -> Credentials -> OAuth 2.0 Client IDs -> gke-ingress-iap-demo-oauth-creds -> DELETE\n```\n\n## References\n- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)\n- [Enabling IAP for GKE](https://cloud.google.com/iap/docs/enabling-kubernetes-howto)\n"
  },
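Two small checks that can save debugging time here, sketched with the resource names used above: confirm the Secret holds the expected keys, and confirm the BackendConfig really has IAP enabled.
```t
# Keys should be exactly client_id and client_secret
kubectl get secret my-secret -o jsonpath='{.data}' | tr ',' '\n'

# Should print: true
kubectl get backendconfig my-backendconfig -o jsonpath='{.spec.iap.enabled}'
```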
  {
    "path": "47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\n  labels:\n    app: app2-nginx\n  annotations:\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/04-Ingress-NameBasedVHost-Routing.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-namebasedvhost-routing\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io/managed-certificates: managed-cert-for-ingress\n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: default-ingress.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80     \n  rules:\n    - host: app1-ingress.kalyanreddydaida.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2-ingress.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n\n"
  },
  {
    "path": "47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/05-Managed-Certificate.yaml",
    "content": "apiVersion: networking.gke.io/v1\nkind: ManagedCertificate\nmetadata:\n  name: managed-cert-for-ingress\nspec:\n  domains:\n    - default-ingress.kalyanreddydaida.com\n    - app1-ingress.kalyanreddydaida.com\n    - app2-ingress.kalyanreddydaida.com"
  },
  {
    "path": "47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/06-frontendconfig.yaml",
    "content": "apiVersion: networking.gke.io/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE"
  },
  {
    "path": "47-GKE-Ingress-with-Identity-Aware-Proxy/kube-manifests/07-backendconfig.yaml",
    "content": "apiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  iap:\n    enabled: true\n    oauthclientCredentials:\n      secretName: my-secret    \n\n# sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged \n# and 1.0 means 100% of packets are logged. This field is only relevant if enable is set \n# to true. sampleRate is an optional field, but if it's configured then enable: true must \n# also be set or else it is interpreted as enable: false.    "
  },
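Once the Ingress syncs, IAP status can also be confirmed on the backend services the GKE Ingress controller generates; a sketch, where `<BACKEND-SERVICE-NAME>` is a placeholder for one of the auto-generated `k8s...` names:
```t
# Find the backend services created for the Ingress
gcloud compute backend-services list --global --format="get(name)"

# Inspect IAP on one of them (replace <BACKEND-SERVICE-NAME>)
gcloud compute backend-services describe <BACKEND-SERVICE-NAME> --global --format="get(iap.enabled)"
```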
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress Custom Health Checks\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n1. Implement Self Signed SSL Certificates with GKE Ingress Service\n2. Create SSL Certificates using OpenSSL.\n3. Create Kubernetes Secret with SSL Certificate and Private Key\n4. Reference these Kubernetes Secrets in Ingress Service **Ingress spec.tls**\n\n## Step-02: App1 - Create Self-Signed SSL Certificates and Kubernetes Secrets\n```t\n# Change Directory \ncd SSL-SelfSigned-Certs\n\n# Create your app1 key:\nopenssl genrsa -out app1-ingress.key 2048\n\n# Create your app1 certificate signing request:\nopenssl req -new -key app1-ingress.key -out app1-ingress.csr -subj \"/CN=app1.kalyanreddydaida.com\"\n\n# Create your app1 certificate:\nopenssl x509 -req -days 7300 -in app1-ingress.csr -signkey app1-ingress.key -out app1-ingress.crt\n\n# Create a Secret that holds your app1 certificate and key:\nkubectl create secret tls app1-secret  --cert app1-ingress.crt --key app1-ingress.key\n\n# List Secrets\nkubectl get secrets\n```\n\n\n## Step-03: App2 - Create Self-Signed SSL Certificates and Kubernetes Secrets\n```t\n# Change Directory \ncd SSL-SelfSigned-Certs\n\n# Create your app2 key:\nopenssl genrsa -out app2-ingress.key 2048\n\n# Create your app2 certificate signing request:\nopenssl req -new -key app2-ingress.key -out app2-ingress.csr -subj \"/CN=app2.kalyanreddydaida.com\"\n\n# Create your app2 certificate:\nopenssl x509 -req -days 7300 -in app2-ingress.csr -signkey app2-ingress.key -out app2-ingress.crt\n\n# Create a Secret that holds your app2 certificate and key:\nkubectl create secret tls app2-secret  --cert app2-ingress.crt --key app2-ingress.key\n\n# List Secrets\nkubectl get secrets\n```\n\n## Step-03: App3 - Create Self-Signed SSL Certificates and Kubernetes Secrets\n```t\n# Change Directory \ncd SSL-SelfSigned-Certs\n\n# Create your app3 key:\nopenssl genrsa -out app3-ingress.key 2048\n\n# Create your app3 certificate signing request:\nopenssl req -new -key app3-ingress.key -out app3-ingress.csr -subj \"/CN=app3-default.kalyanreddydaida.com\"\n\n# Create your app3 certificate:\nopenssl x509 -req -days 7300 -in app3-ingress.csr -signkey app3-ingress.key -out app3-ingress.crt\n\n# Create a Secret that holds your app3 certificate and key:\nkubectl create secret tls app3-secret  --cert app3-ingress.crt --key app3-ingress.key\n\n# List Secrets\nkubectl get secrets\n```\n\n## Step-04: No changes to following kube-manifests from previous Ingress Name Based Virtual Host Routing Demo\n1. 01-Nginx-App1-Deployment-and-NodePortService.yaml\n2. 02-Nginx-App2-Deployment-and-NodePortService.yaml\n3. 03-Nginx-App3-Deployment-and-NodePortService.yaml\n4. 
05-frontendconfig.yaml\n\n## Step-06: Review 04-ingress-self-signed-ssl.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-selfsigned-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: app3-default.kalyanreddydaida.com\nspec: \n  # SSL Certs - Associate using Kubernetes Secrets         \n  tls:\n  - secretName: app1-secret\n  - secretName: app2-secret\n  - secretName: app3-secret\n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80           \n  rules:\n    - host: app1.kalyanreddydaida.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n```\n\n## Step-07: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# Describe Ingress Service\nkubectl describe ingress ingress-selfsigned-ssl\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets; the DNS name we added in the Ingress Service should be present\n\n# List FrontendConfigs\nkubectl get frontendconfig\n\n# Verify SSL Certificates\nGo to Load Balancers\n1. Load Balancers View -> In Frontends\n2. Load Balancers Components View -> Certificates Tab\n```\n\n## Step-08: Access Application\n```t\n# Access Application\nhttp://app1.kalyanreddydaida.com/app1/index.html\nhttp://app2.kalyanreddydaida.com/app2/index.html\nhttp://app3-default.kalyanreddydaida.com\n\nObservation:\n1. All 3 URLs should work as expected. In your case, replace YOUR_DOMAIN name for testing\n2. HTTP to HTTPS redirect should work\n3. You will get a warning \"The certificate is not trusted because it is self-signed.\". Click on \"Accept the risk and continue\"\n```\n\n\n## Step-09: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# List Kubernetes Secrets\nkubectl get secrets\n\n# Delete Kubernetes Secrets\nkubectl delete secret app1-secret \nkubectl delete secret app2-secret \nkubectl delete secret app3-secret \n```\n\n## References\n- [User Managed Certificates](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl#user-managed-certs)\n"
  },
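Before loading the generated files into Secrets, it can be worth verifying what OpenSSL produced; a sketch using the app1 pair from Step-02:
```t
# Inspect the certificate subject (CN must match the Ingress host) and validity window
openssl x509 -in app1-ingress.crt -noout -subject -dates

# Confirm the certificate and private key belong together (the two digests must match)
openssl x509 -in app1-ingress.crt -noout -modulus | openssl md5
openssl rsa  -in app1-ingress.key -noout -modulus | openssl md5
```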
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app1-ingress.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIICzzCCAbcCFDQiqvY0cwNLP1ljifVfveOZo7G4MA0GCSqGSIb3DQEBCwUAMCQx\nIjAgBgNVBAMMGWFwcDEua2FseWFucmVkZHlkYWlkYS5jb20wHhcNMjIxMTI1MDIx\nMDUyWhcNNDIxMTIwMDIxMDUyWjAkMSIwIAYDVQQDDBlhcHAxLmthbHlhbnJlZGR5\nZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5Nt/2cBg\nHSkMXNo04h9tN8f8ioPkZkl5rFwNmgW+tGei4wa2QFRt0xCeOyd7+0GWmyH6602M\ni3WabHTruHKWsCAikx20KaOEjYF+cmDMpWQdrWADoAIIfov+BO9VTmFJUX9JUDiR\nf6CHxtCZIifL4VM0InSpMIy4OJGQgzOVrlWwLYVcYla529VUGU5qBJFAliKve3N+\nSBYoNI5uX0rERm4hqCUHKrQnsIfA0OnNccPdAoi+KmC/oipfUOpL9URholj7spAT\nJczcTGw+s7gehCDXm6YU7cBHtD2hLx106otEzJGIwys4JtmuKXtDw+w2eOGRUIrT\nQ7YLX6N5LJ8IDwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQABileCf1mnee2r0cLD\n3bHaYo68JZkl9BS6dJN6DuOSD1Sha/NArgsuQa6uX8ApnUhTt5DucRyp7o5pdhCi\nrNaJwR9zYQmjxdH+RGtb5sPEAo7D47kQqp4wlJtL5AUfCI1nGgpg5cJCEqTVlbmP\nPEAJYlaWi8LNe4h+qukECcAA3Nsgvvm3Ls1qmKIEKJr05ppCq7EbYCrXJrN75Pl1\n31w8Q0tr80qgNlhz65EyvLrIe6RK72qyOe9+oRCp9wRIoCvs47vUuNMfzZRxZXGn\ndPSLWkQt4LrQ/5RZr0lyUWtzr/l7GWu9GYljWe6toxTQHSqSkV/WAch8P4g0zvRJ\nNdHO\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app1-ingress.csr",
    "content": "-----BEGIN CERTIFICATE REQUEST-----\nMIICaTCCAVECAQAwJDEiMCAGA1UEAwwZYXBwMS5rYWx5YW5yZWRkeWRhaWRhLmNv\nbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOTbf9nAYB0pDFzaNOIf\nbTfH/IqD5GZJeaxcDZoFvrRnouMGtkBUbdMQnjsne/tBlpsh+utNjIt1mmx067hy\nlrAgIpMdtCmjhI2BfnJgzKVkHa1gA6ACCH6L/gTvVU5hSVF/SVA4kX+gh8bQmSIn\ny+FTNCJ0qTCMuDiRkIMzla5VsC2FXGJWudvVVBlOagSRQJYir3tzfkgWKDSObl9K\nxEZuIaglByq0J7CHwNDpzXHD3QKIvipgv6IqX1DqS/VEYaJY+7KQEyXM3ExsPrO4\nHoQg15umFO3AR7Q9oS8ddOqLRMyRiMMrOCbZril7Q8PsNnjhkVCK00O2C1+jeSyf\nCA8CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBexWuraAJF+txKTKQM6w2/Hnur\n9BobBC/OYqdsracfmSAJ1eGEKI/ISUSZHVtZJptygxTEVXCTsN+ZukXlETM7AI4a\nZ8KatvHNrzhnpFV84ONpnCiUrQmik0IWwKcDJCzl4f7KUDITDC3hh5WGVY67OvuK\nmx03qu54ZFmFJkM2vwVn/ODbvdScYI5tDRjFIbyrwkxxW/1q1otkotOk7Z3hLdMN\nHWJh7IeXfw06q7+llX3Qg1OkpfyY682A0S2G2K6vrGFJUKFJ2CrPDzmht5G5kUz4\nHANxZuSKeHcI7rlB3IVyjNa77oXOX0+ZQLYgCf/cA3Lu2zAkhEvtq9ui0Edl\n-----END CERTIFICATE REQUEST-----\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app1-ingress.key",
    "content": "-----BEGIN PRIVATE KEY-----\nMIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDk23/ZwGAdKQxc\n2jTiH203x/yKg+RmSXmsXA2aBb60Z6LjBrZAVG3TEJ47J3v7QZabIfrrTYyLdZps\ndOu4cpawICKTHbQpo4SNgX5yYMylZB2tYAOgAgh+i/4E71VOYUlRf0lQOJF/oIfG\n0JkiJ8vhUzQidKkwjLg4kZCDM5WuVbAthVxiVrnb1VQZTmoEkUCWIq97c35IFig0\njm5fSsRGbiGoJQcqtCewh8DQ6c1xw90CiL4qYL+iKl9Q6kv1RGGiWPuykBMlzNxM\nbD6zuB6EINebphTtwEe0PaEvHXTqi0TMkYjDKzgm2a4pe0PD7DZ44ZFQitNDtgtf\no3ksnwgPAgMBAAECggEAE65cvFky6s8Q5RtO2PNi7R0htrfI+JLxB8WS1eAQmmsf\nMu7s1XNtTm1rbiLjIqRtU0IE1h+BKq0ebp1PeDlChDr/Pi+bwsjxKUotmaCBeOe3\nNaXAKg6CtH9NhRcf+vGa4ItVvrRert8bThm6UZmiiuog3aWytx4i6Zp7Fw1kne1O\n4k70Xt/jxHK9WJEp02seREZHZCEnP9mHp7ngzjBOC8ow4F14ONvC4NpRl2y3rdC3\nWyw26Hrfhy1j/Ww6J04j2qvsplXly+acpOOWZuCFSX33zUFEZUtw+ncNFPxk/3G5\nPT4Jn7zPbB7YtId5wjN4dB+rv8b8rQf1ShIioW4KAQKBgQD8ZqQjxpmQTZGGsGjr\ntWZaGV/StW/pdhyFHrRwmCl6LbskuYDmWYuyHpoiVmEicg8mF+alCqGK8b5Cd7yH\nb8fl0uImC4CQVcWE8KieEPFAt5wYLyAper77iSunx/7g7Kv7sq4q84nIrFwlnBRO\nnjb1QmY/JaPQ00CLFPwEVWwswQKBgQDoHupdlwDP97Zd4lUfn25pg0b3MNKNsFPH\nALS4jR7qtJPjj7FdZTxGfBI1kJhVfWq2vWl/kK033c2JMGE0dpMNH/ZpQ2EVWqOR\ntA34diFXE2lBAOojHIdH5x38fWZv18aLdvIif/CwqaBhVfqRJGFgHk+4qlLpjAwE\nGxPkmWTYzwKBgQDJPZkvgSBdQst+BVeSX668tbCGEu2oyehRZzrc7yVa6e1liZYx\nk0HjgazJJfAKg8B6UeIuwvwsCTT2T/t8TO6n2m0/gjo+WnTC2xLF/KIuRHbrfV96\nUwjFCwhInRgmA+3YIA3n5wd7fZl2zywNxu3wvMFDJeKoFFdIzTFmzykRwQKBgQCY\nlPnqW4ClNGgkfssF5n9lzG2xv94oVWg8wDILvng8QEeWprYodouQqa4ul8YLLE4h\noZDf0fKLbrnVHIBJREiVsBUCTNBcgSBUfs9QLBbubkwZ9sfyHKawlTQY7TWQ/337\n30x7cS5+coKCeUokbo2z6TjuYsftzal4aXRCKLMp8QKBgQCWBLJmI9jt9tRQvyKY\nEtEQd/3qFtymDjXnLaFn/FclIDm2me0YNfp+kumOe2XapG/m5TOPoPIG+OKkEwCV\nQXeOxjlQnzwvvMuQDbTUyjeYk4AV/wYExQAymmP1d8AeqddJQ103FSvj1iNM8kN6\nvE3/wIDHWWmQa5AsHyGXfyebxA==\n-----END PRIVATE KEY-----\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app2-ingress.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIICzzCCAbcCFBIM4d2+RH+OQFbNyPH7vO7dj4aAMA0GCSqGSIb3DQEBCwUAMCQx\nIjAgBgNVBAMMGWFwcDIua2FseWFucmVkZHlkYWlkYS5jb20wHhcNMjIxMTI1MDIx\nMjMxWhcNNDIxMTIwMDIxMjMxWjAkMSIwIAYDVQQDDBlhcHAyLmthbHlhbnJlZGR5\nZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuKahBefa\nkPNvnvaxLkxnShxg5sX4P9FYlL1yCYLgrdW0DQWqouZ4nac31r4/YXeTJgXUCyQq\nHYy+tXNCangblaGFZXT71FcyoLT9RYE7p0/AePqHvP4gywJ/CEdk23obQak2Cuc/\ncHoo/tnUnArevmvGoNLgb6TkHWiHE85LB8LPi1ra9KABU7/xF9XpyJWfRtkG8A7G\njdbvggHVYD7l9oJyQB0+AcR7ddTbOk8D6CFHnMJa65/HyplErFWnrrKHvkKKqW6c\nf88kbp9qKPddmniwNOHqIu1QUADgJq97Y7fH9E0IZneMFWmGFRaXYxyUn4WziXEH\nnpnbFg73/8FoFwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQAAFOCpdOVoREKR6S6u\nF10Jp4DpQQDXsfgCVAxA58MNMGForwNhK1E28w0GBDm4K02nyOqqQxDWiFp8Am/T\nr+vzF1BBwNsiZ1r5naQTA5Jh2XgGrjOQOJhRZbEE4RwOxWsTvEyUJn2S0bYtGfES\n5HjzZfq/0Gpxh3Z+oq8cINwzRzoirgf3Kk9SESvluxejnZMehVK5YIQp0IoM1Q5A\nt+ApJNyb107UxYLAfy8DSe5aMGtON8DYE+WLidL4CC1zRTABUjcBGsa09inGhgiF\nF5O8Eyc6LzA8EasmeJbWsUUYxUoLvq0OPXKq8Drjlt0SnttB4zpE8agtwzKKCodL\ntEoU\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app2-ingress.csr",
    "content": "-----BEGIN CERTIFICATE REQUEST-----\nMIICaTCCAVECAQAwJDEiMCAGA1UEAwwZYXBwMi5rYWx5YW5yZWRkeWRhaWRhLmNv\nbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALimoQXn2pDzb572sS5M\nZ0ocYObF+D/RWJS9cgmC4K3VtA0FqqLmeJ2nN9a+P2F3kyYF1AskKh2MvrVzQmp4\nG5WhhWV0+9RXMqC0/UWBO6dPwHj6h7z+IMsCfwhHZNt6G0GpNgrnP3B6KP7Z1JwK\n3r5rxqDS4G+k5B1ohxPOSwfCz4ta2vSgAVO/8RfV6ciVn0bZBvAOxo3W74IB1WA+\n5faCckAdPgHEe3XU2zpPA+ghR5zCWuufx8qZRKxVp66yh75CiqlunH/PJG6faij3\nXZp4sDTh6iLtUFAA4Cave2O3x/RNCGZ3jBVphhUWl2MclJ+Fs4lxB56Z2xYO9//B\naBcCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBL3moJBmkteEAExoskvJrKbmW6\naMyMFZmHUhPYqe8IkFG2/QRwN0C3r9lU8+UX0Qt+XqVx8hzi2FFsQYyZ/gdhZ1NP\nOq60qH9Z95evTdIzN5FbkQiT1kgb1dGFs7WgcDLJM10dIeaq4M7MQrF3R99tEtbj\nEGiHQaXogqkIU5dcwoD9tZFB+7i7ymv6C19SSGHE/amIMFVp1hBfcKH7wxQ6wlZF\nLl5WRTtdrM7E685VYKmH8ccF5rB+oyAH9be3kO2NWYo48QyoSnqk82UvmzcL0H/P\n+DUD7EfXQvlK02HfpmJxpWjT9wKYuA/AUC21L0w/gZWhXwAUnQtsbDGKxfKj\n-----END CERTIFICATE REQUEST-----\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app2-ingress.key",
    "content": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC4pqEF59qQ82+e\n9rEuTGdKHGDmxfg/0ViUvXIJguCt1bQNBaqi5nidpzfWvj9hd5MmBdQLJCodjL61\nc0JqeBuVoYVldPvUVzKgtP1FgTunT8B4+oe8/iDLAn8IR2TbehtBqTYK5z9weij+\n2dScCt6+a8ag0uBvpOQdaIcTzksHws+LWtr0oAFTv/EX1enIlZ9G2QbwDsaN1u+C\nAdVgPuX2gnJAHT4BxHt11Ns6TwPoIUecwlrrn8fKmUSsVaeusoe+Qoqpbpx/zyRu\nn2oo912aeLA04eoi7VBQAOAmr3tjt8f0TQhmd4wVaYYVFpdjHJSfhbOJcQeemdsW\nDvf/wWgXAgMBAAECggEAC7VJFYJHihRdegNjZa+jhv/4pvlbjdRc3QWMIw1A6NTZ\nl0/KK40YjcqKEFw80ZXO50TMVq6C2x/PAdtelTiraxf0SOQbibHDvIvtWUhh+3Bj\noGgmTjYA505vtpssSnxaGRY9HoDeNWgRjGNMh15rFEDqNc1ZPMsESdcUZY2ZlVLJ\ntwuH3icBtl6pN5opTt8GNu3BgHflSbuKFQba46OubKhaWaSj2/jSw73jZtaABqei\neelmNnxRevDBBlvFr2WYzkFpG4hIS9gX7aBdDW3cjFsvaEwYK2oqI7BrLS0rmAwZ\nVX0Xsr5mt1Zz1SC+AETD/CmTNECCL6mJdlxLE0i72QKBgQDEoaE4e1VLVnWzQhc0\n5V14QQ71XwV17bZQKTQM57jUGwIdj4+koXeNdh7bejTbZdfPC10i9V2nEa5UMccS\nQOI1B72fBX1zWNstW8wH3ZxnBpf4xRsCFvcGJ90vicMx2lXMLkuYBxJMG1/AeaF1\nIAlmMv3LU0ITn/yaQRNCtXcB7wKBgQDwZvvldUOyPTaNqLA0QQ3i1CMvRvZqzL16\n/u0FpR8XyAw8S1YnheYwSC4O+9WZoxzUetsH2v8C29VZUt1vgCea5HabEH87kHVh\nc7ZrJcAuiE/9a82ne+ikDDKrEAqkVERrnLQ0mE//pXGojjU16dLRtnQMMx7wiL/9\nqG1P2kYEWQKBgQCHXvs+hnJ3VoPbsLGHYi1Sf//LX+rDgK9WSreh9tohdKKlNVPg\nNKW5B0xBL8Y6Echcq2cojSI3xg1tu4NhBrh1Z+ndFAuFIPRsKtmxxJlLuJdh1lk8\nvBC+9Szq8H4o0TbmRi0W8i9fpCzstxA4MaEm8g4WMDC6kBd5HzoiYAoZkwKBgCqs\n/XaEVJolh7OqCG2eRsrHgd94p3HaGqDk9EqWP2jHWHSzov2tJWnYxmRejFKTxCBs\nFsnUNITbZYpPzYNnqqAygmOQkCWQxWWhVva6Yt1f0WNZac6bjnbgu3XmiR0W4HaC\nAPN9PmZRhlW3uPZzJbuYug0YXhuxCvQKnC0awGcxAoGALGNotFeQOOJS1UKdLbKf\nBI7jMXnzNB+JqdMYZisCV8Ey2pc0NWe1sv3VB6nDKFA331EBkDNH/VpRluvOY8SQ\nhikUrY9YqIlYYXSie9LBgAXZ1LKB/1JknhqaQeCwWe4oVSVL8Jee9rcLZEXXfvch\nAzjalkzTL4lZQfOoLk6iDiw=\n-----END PRIVATE KEY-----\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app3-ingress.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIIC3zCCAccCFE5YVEQSuVzTlSNeeNV8BZuR6aWMMA0GCSqGSIb3DQEBCwUAMCwx\nKjAoBgNVBAMMIWFwcDMtZGVmYXVsdC5rYWx5YW5yZWRkeWRhaWRhLmNvbTAeFw0y\nMjExMjUwMjEzMDVaFw00MjExMjAwMjEzMDVaMCwxKjAoBgNVBAMMIWFwcDMtZGVm\nYXVsdC5rYWx5YW5yZWRkeWRhaWRhLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEP\nADCCAQoCggEBAKI6FJgH3TJ5ejRd7H/AvY+7EN0Vft/BQDoEfcjEUwrc7VM5/wgM\nExE5Uj1Z0aMvIAruEMq2Zxe+dDHmqircrLHzH5uPjni3iBQ7dxvOGZTcdIM6JIax\nrauZJ5XtyXWBDvWACag59LtmFNtLXQQjNJHOKHZpZgi3bG49t26Aw9F0EelUCNpX\nRhBMX8wzT+gz0B0RA+Lj5B2wwGm2z+GgcM8E0jScaTgBQfhQM/kHp8oSNifySO7S\np0z7zLc4h6fvfZhjPw/g7PYCksSm6wS0DJxzloeaxfudc5GHejlk13EX9FCnTx8s\nlWwQBBbpjv8Ht/J4QY3WKXDFR8od2wcUjKcCAwEAATANBgkqhkiG9w0BAQsFAAOC\nAQEAJ4tl2RjaRciW5aemwS1cGkGwyEZOqrkBRRTxzBhKu/XYgMzzfFDRux/04QQR\nw214mPwTKhsO4laUQ0d0457AS+2dyFsqLT46lQynXqZilr8IrSYENdnnZV7qqw7h\ne5Js/EUw2sCjtnQiz5W3Ty/+TuDN6vhLDeU6e+68TEOjqyVEym6pISNJekw1IAL6\ntO1nvb+Pj1Gq6tXbf8lXgr5ys6NU65sc6CpZQwD/FWWy0A4sLFjyHSproeNFxaln\nqBvj/5I4At0M6eJ6RtNGx9fem/VpOUWhjprsiYIDXBBbEOHmPKrc9u0I3VdfwREy\nMmm+XAsfEVITpRvcDwoUHVsgKw==\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app3-ingress.csr",
    "content": "-----BEGIN CERTIFICATE REQUEST-----\nMIICcTCCAVkCAQAwLDEqMCgGA1UEAwwhYXBwMy1kZWZhdWx0LmthbHlhbnJlZGR5\nZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAojoUmAfd\nMnl6NF3sf8C9j7sQ3RV+38FAOgR9yMRTCtztUzn/CAwTETlSPVnRoy8gCu4QyrZn\nF750MeaqKtyssfMfm4+OeLeIFDt3G84ZlNx0gzokhrGtq5knle3JdYEO9YAJqDn0\nu2YU20tdBCM0kc4odmlmCLdsbj23boDD0XQR6VQI2ldGEExfzDNP6DPQHRED4uPk\nHbDAabbP4aBwzwTSNJxpOAFB+FAz+QenyhI2J/JI7tKnTPvMtziHp+99mGM/D+Ds\n9gKSxKbrBLQMnHOWh5rF+51zkYd6OWTXcRf0UKdPHyyVbBAEFumO/we38nhBjdYp\ncMVHyh3bBxSMpwIDAQABoAAwDQYJKoZIhvcNAQELBQADggEBAIYBvMVB+MMu3Imm\n8T8yEcxc1zCGsuTRNLyAaBHwbGUeqdxOncfxnPWoLLxgic3sUWtPrOgAnSkE7d2P\noIn9fkNojyfmHzgoH4WEjghSFVzqenq/ABqs/fcZBTIjHSXXSah+nuOjrc7W218/\n6RAszOj6+tyQOAxz4kDvK8W/Ykigk9+vlBSSnUGsTjmB4afCctJzo6k3YBiD9wFT\nev9IRdRPH1b+WzBP/HxBfkHsTPg3YEEa9ldMySJ514tHlJRHk9URbDj+fOCD6QG1\nIY7/IfdIz040xiXaXVOh8bs8qBWqpBjChvxVeW3HxtGQE8koMppFspD0gw1KEe3g\nddAOF+8=\n-----END CERTIFICATE REQUEST-----\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/SSL-SelfSigned-Certs/app3-ingress.key",
    "content": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCiOhSYB90yeXo0\nXex/wL2PuxDdFX7fwUA6BH3IxFMK3O1TOf8IDBMROVI9WdGjLyAK7hDKtmcXvnQx\n5qoq3Kyx8x+bj454t4gUO3cbzhmU3HSDOiSGsa2rmSeV7cl1gQ71gAmoOfS7ZhTb\nS10EIzSRzih2aWYIt2xuPbdugMPRdBHpVAjaV0YQTF/MM0/oM9AdEQPi4+QdsMBp\nts/hoHDPBNI0nGk4AUH4UDP5B6fKEjYn8kju0qdM+8y3OIen732YYz8P4Oz2ApLE\npusEtAycc5aHmsX7nXORh3o5ZNdxF/RQp08fLJVsEAQW6Y7/B7fyeEGN1ilwxUfK\nHdsHFIynAgMBAAECggEAEqpVGUL6X+bbOTA/WFmgVevDnnRtMyiEj8hZgqKYHW1a\n/xLytYXSIc6zGCz/8mMnMCrBEtnW1cQLkXxFQwY99oGPNvJXBau0RAOtiiz2A4sz\n+q9TaY4C+fX2uIjx/4uYYYXYVptIfdFaf/rVWncEguwx+qHY5BLarnp6YwP8w9oE\nejTPG56QZIp3nv62iRpt4WK/+sMFf+LNRneg+yKinZf/vg404NfhPwadcZ19s4NC\njDNBtX9wC7vygUpeLGbF4Fb6oEOoW3o6i9GrX3mK1SAW138npE0uTTpd6qcjfKhd\nUbhw/yXSg3bkhZOnlx96+K5h4IRWy++4Sr+gI+xlrQKBgQC9a/g/bnnD4EhCcSgr\nM7wKW/6ZykRo6PVCrusJf+vVO+lOEDpXU+pu/GX24Vn6EdU4Vailz6t3LnDY0LEq\noGlhijGTvAh4mwVp5jLAywvEfHy9XGEKADQxFpfreCUJnU4Oluao2/uit2fjavsQ\nPOjEfx88LgkeXBpo2TMXH91ETQKBgQDbPyKsF0o7pyJo2EG74/tqBw4xNmoASwtW\nCHFigj4dGmT0ojQalJp6ruvNJsnUUhnZ51fh3qBJQOIphDbAFhtyO9Pmhy9uJ6up\nb53dsm/3GPTLowkC57WPDrs8W4h8eRilKrICd7fPGXDFydpBWavbdZxByqsQzxYc\ndaW7T+qewwKBgQCgBwJgXG4EnIuPjle4P+nB+rxaovYuh3kE0BADI45SxF2zNKSF\nOIDbKOLfsry4Nq6i/EMRaiPa+WIe2hiDAahl3kFKJVYmxhjJwc/o7uFPKzibJdtZ\nfpiZTBQmu4bW242hZ70QtWCetEHRcIUQz9R6hUcXKXFMs9Uf9TdjdukRFQKBgE6M\nuCdf0MC+iJ13nVVrwM+j53nKPQAN4unX7IeWkhprMnBTDMfZJd9+fAzsMLNZFtnz\nAJFz6YlVLbIiJFt9kCfFN44IMP4OSHpT+wNKwsKMtmee6cOYsHuok3x0btnpqOLE\nATLRIZGZU8YJI6D2N5RQ9sK7kb5b81gO7mnFoBFxAoGAZBtT5LZpGV+uguhBbmd9\nnXdOhzoU29zgpbPHwETSY1uk5DwVhJL7xucHrGBVimHKaP/t0hI2YnyofvHyybHA\nML8OeTKn8NEwW5G3nnP9p6m9nCwP7bjwMXb+H96jl9MYeyyTwa9IJajtJKrSga5H\nlkNlw3MuKofm0pfTpqQa5go=\n-----END PRIVATE KEY-----\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/kube-manifests/04-ingress-self-signed-ssl.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-selfsigned-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: app3-default.kalyanreddydaida.com\nspec: \n  # SSL Certs - Associate using Kubernetes Secrets         \n  tls:\n  - secretName: app1-secret\n  - secretName: app2-secret\n  - secretName: app3-secret\n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80           \n  rules:\n    - host: app1.kalyanreddydaida.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n\n\n"
  },
  {
    "path": "48-GKE-Ingress-SelfSigned-SSL/kube-manifests/05-frontendconfig.yaml",
    "content": "apiVersion: networking.gke.io/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE"
  },
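Since browsers warn on self-signed certificates, `curl` is a convenient way to script the check: `-k` skips trust verification, and `openssl s_client -servername` shows which certificate the load balancer serves for a given SNI host. A sketch:
```t
# Ignore the untrusted issuer and fetch the page
curl -k https://app1.kalyanreddydaida.com/app1/index.html

# Confirm SNI returns the matching per-host certificate
openssl s_client -connect app1.kalyanreddydaida.com:443 -servername app1.kalyanreddydaida.com </dev/null 2>/dev/null | openssl x509 -noout -subject
```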
  {
    "path": "49-GKE-Ingress-Preshared-SSL/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress Custom Health Checks\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n1. Implement Self Signed SSL Certificates with GKE Ingress Service\n2. Create SSL Certificates using OpenSSL.\n3. Create pre-shared certificates in Google Cloud using `gcloud compute ssl-certificates create` \n4. Reference these pre-shared certificates in Ingress Service using Annotation `ingress.gcp.kubernetes.io/pre-shared-cert: \"app1-ingress,app2-ingress,app3-ingress\"`\n\n## Step-02: Creating pre-shared certificates in Google Cloud\n```t\n# List SSL Certificates\ngcloud compute ssl-certificates list\n\n# Change Directory \ncd SSL-SelfSigned-Certs\nObservation: We should find certificates we have created in previous Self Signed Certs Demo\n\n# App1 - Create a certificate resource in your Google Cloud project:\ngcloud compute ssl-certificates create app1-ingress --certificate app1-ingress.crt  --private-key app1-ingress.key\n\n# App2 - Create a certificate resource in your Google Cloud project:\ngcloud compute ssl-certificates create app2-ingress --certificate app2-ingress.crt  --private-key app2-ingress.key\n\n# App3 - Create a certificate resource in your Google Cloud project:\ngcloud compute ssl-certificates create app3-ingress --certificate app3-ingress.crt  --private-key app3-ingress.key\n\n# List SSL Certificates\ngcloud compute ssl-certificates list\n```\n\n\n## Step-03: No changes to following kube-manifests from previous Ingress Name Based Virtual Host Routing Demo\n1. 01-Nginx-App1-Deployment-and-NodePortService.yaml\n2. 02-Nginx-App2-Deployment-and-NodePortService.yaml\n3. 03-Nginx-App3-Deployment-and-NodePortService.yaml\n4. 
05-frontendconfig.yaml\n\n## Step-04: Review 04-ingress-preshared-ssl.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-preshared-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: app3-default.kalyanreddydaida.com\n    # Pre-shared certificate resources  \n    ingress.gcp.kubernetes.io/pre-shared-cert: \"app1-ingress,app2-ingress,app3-ingress\"\nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80           \n  rules:\n    - host: app1.kalyanreddydaida.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n```\n\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# Describe Ingress Service\nkubectl describe ingress ingress-preshared-ssl\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets; the DNS name we added in the Ingress Service should be present\n\n# List FrontendConfigs\nkubectl get frontendconfig\n\n# Verify SSL Certificates\nGo to Load Balancers\n1. Load Balancers View -> In FRONTENDS -> Certificate\n2. Load Balancers Components View -> CERTIFICATES Tab\n3. Load Balancers Components View -> TARGET PROXIES -> HTTPS Proxy -> SSL Certificates\n```\n\n## Step-06: Access Application\n```t\n# Access Application\nhttp://app1.kalyanreddydaida.com/app1/index.html  --> VIEW CERTIFICATE WHEN ACCESSING URL\nhttp://app2.kalyanreddydaida.com/app2/index.html  --> VIEW CERTIFICATE WHEN ACCESSING URL\nhttp://app3-default.kalyanreddydaida.com          --> VIEW CERTIFICATE WHEN ACCESSING URL\n\nObservation:\n1. All 3 URLs should work as expected. In your case, replace YOUR_DOMAIN name for testing\n2. HTTP to HTTPS redirect should work\n3. You will get a warning \"The certificate is not trusted because it is self-signed.\". 
Click on \"Accept the risk and continue\"\n```\n\n## Step-07: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n```\n\n## Step-08: Clean-Up SSL Certs from your Google Cloud Project\n```t\n# List SSL Certificates\ngcloud compute ssl-certificates list\n\n# Delete SSL Certificates\ngcloud compute ssl-certificates delete app1-ingress\ngcloud compute ssl-certificates delete app2-ingress\ngcloud compute ssl-certificates delete app3-ingress\n\n# List SSL Certificates\ngcloud compute ssl-certificates list\n\n# Verify SSL Certificates In Load Balancing Section\nGo to Load Balancers\n1. Load Balancers Components View -> CERTIFICATEs Tab\n```\n\n## References\n- [Ingress Pre-shared Certificates](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl#pre-shared-certs)\n\n\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app1-ingress.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIICzzCCAbcCFDQiqvY0cwNLP1ljifVfveOZo7G4MA0GCSqGSIb3DQEBCwUAMCQx\nIjAgBgNVBAMMGWFwcDEua2FseWFucmVkZHlkYWlkYS5jb20wHhcNMjIxMTI1MDIx\nMDUyWhcNNDIxMTIwMDIxMDUyWjAkMSIwIAYDVQQDDBlhcHAxLmthbHlhbnJlZGR5\nZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5Nt/2cBg\nHSkMXNo04h9tN8f8ioPkZkl5rFwNmgW+tGei4wa2QFRt0xCeOyd7+0GWmyH6602M\ni3WabHTruHKWsCAikx20KaOEjYF+cmDMpWQdrWADoAIIfov+BO9VTmFJUX9JUDiR\nf6CHxtCZIifL4VM0InSpMIy4OJGQgzOVrlWwLYVcYla529VUGU5qBJFAliKve3N+\nSBYoNI5uX0rERm4hqCUHKrQnsIfA0OnNccPdAoi+KmC/oipfUOpL9URholj7spAT\nJczcTGw+s7gehCDXm6YU7cBHtD2hLx106otEzJGIwys4JtmuKXtDw+w2eOGRUIrT\nQ7YLX6N5LJ8IDwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQABileCf1mnee2r0cLD\n3bHaYo68JZkl9BS6dJN6DuOSD1Sha/NArgsuQa6uX8ApnUhTt5DucRyp7o5pdhCi\nrNaJwR9zYQmjxdH+RGtb5sPEAo7D47kQqp4wlJtL5AUfCI1nGgpg5cJCEqTVlbmP\nPEAJYlaWi8LNe4h+qukECcAA3Nsgvvm3Ls1qmKIEKJr05ppCq7EbYCrXJrN75Pl1\n31w8Q0tr80qgNlhz65EyvLrIe6RK72qyOe9+oRCp9wRIoCvs47vUuNMfzZRxZXGn\ndPSLWkQt4LrQ/5RZr0lyUWtzr/l7GWu9GYljWe6toxTQHSqSkV/WAch8P4g0zvRJ\nNdHO\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app1-ingress.csr",
    "content": "-----BEGIN CERTIFICATE REQUEST-----\nMIICaTCCAVECAQAwJDEiMCAGA1UEAwwZYXBwMS5rYWx5YW5yZWRkeWRhaWRhLmNv\nbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOTbf9nAYB0pDFzaNOIf\nbTfH/IqD5GZJeaxcDZoFvrRnouMGtkBUbdMQnjsne/tBlpsh+utNjIt1mmx067hy\nlrAgIpMdtCmjhI2BfnJgzKVkHa1gA6ACCH6L/gTvVU5hSVF/SVA4kX+gh8bQmSIn\ny+FTNCJ0qTCMuDiRkIMzla5VsC2FXGJWudvVVBlOagSRQJYir3tzfkgWKDSObl9K\nxEZuIaglByq0J7CHwNDpzXHD3QKIvipgv6IqX1DqS/VEYaJY+7KQEyXM3ExsPrO4\nHoQg15umFO3AR7Q9oS8ddOqLRMyRiMMrOCbZril7Q8PsNnjhkVCK00O2C1+jeSyf\nCA8CAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBexWuraAJF+txKTKQM6w2/Hnur\n9BobBC/OYqdsracfmSAJ1eGEKI/ISUSZHVtZJptygxTEVXCTsN+ZukXlETM7AI4a\nZ8KatvHNrzhnpFV84ONpnCiUrQmik0IWwKcDJCzl4f7KUDITDC3hh5WGVY67OvuK\nmx03qu54ZFmFJkM2vwVn/ODbvdScYI5tDRjFIbyrwkxxW/1q1otkotOk7Z3hLdMN\nHWJh7IeXfw06q7+llX3Qg1OkpfyY682A0S2G2K6vrGFJUKFJ2CrPDzmht5G5kUz4\nHANxZuSKeHcI7rlB3IVyjNa77oXOX0+ZQLYgCf/cA3Lu2zAkhEvtq9ui0Edl\n-----END CERTIFICATE REQUEST-----\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app1-ingress.key",
    "content": "-----BEGIN PRIVATE KEY-----\nMIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDk23/ZwGAdKQxc\n2jTiH203x/yKg+RmSXmsXA2aBb60Z6LjBrZAVG3TEJ47J3v7QZabIfrrTYyLdZps\ndOu4cpawICKTHbQpo4SNgX5yYMylZB2tYAOgAgh+i/4E71VOYUlRf0lQOJF/oIfG\n0JkiJ8vhUzQidKkwjLg4kZCDM5WuVbAthVxiVrnb1VQZTmoEkUCWIq97c35IFig0\njm5fSsRGbiGoJQcqtCewh8DQ6c1xw90CiL4qYL+iKl9Q6kv1RGGiWPuykBMlzNxM\nbD6zuB6EINebphTtwEe0PaEvHXTqi0TMkYjDKzgm2a4pe0PD7DZ44ZFQitNDtgtf\no3ksnwgPAgMBAAECggEAE65cvFky6s8Q5RtO2PNi7R0htrfI+JLxB8WS1eAQmmsf\nMu7s1XNtTm1rbiLjIqRtU0IE1h+BKq0ebp1PeDlChDr/Pi+bwsjxKUotmaCBeOe3\nNaXAKg6CtH9NhRcf+vGa4ItVvrRert8bThm6UZmiiuog3aWytx4i6Zp7Fw1kne1O\n4k70Xt/jxHK9WJEp02seREZHZCEnP9mHp7ngzjBOC8ow4F14ONvC4NpRl2y3rdC3\nWyw26Hrfhy1j/Ww6J04j2qvsplXly+acpOOWZuCFSX33zUFEZUtw+ncNFPxk/3G5\nPT4Jn7zPbB7YtId5wjN4dB+rv8b8rQf1ShIioW4KAQKBgQD8ZqQjxpmQTZGGsGjr\ntWZaGV/StW/pdhyFHrRwmCl6LbskuYDmWYuyHpoiVmEicg8mF+alCqGK8b5Cd7yH\nb8fl0uImC4CQVcWE8KieEPFAt5wYLyAper77iSunx/7g7Kv7sq4q84nIrFwlnBRO\nnjb1QmY/JaPQ00CLFPwEVWwswQKBgQDoHupdlwDP97Zd4lUfn25pg0b3MNKNsFPH\nALS4jR7qtJPjj7FdZTxGfBI1kJhVfWq2vWl/kK033c2JMGE0dpMNH/ZpQ2EVWqOR\ntA34diFXE2lBAOojHIdH5x38fWZv18aLdvIif/CwqaBhVfqRJGFgHk+4qlLpjAwE\nGxPkmWTYzwKBgQDJPZkvgSBdQst+BVeSX668tbCGEu2oyehRZzrc7yVa6e1liZYx\nk0HjgazJJfAKg8B6UeIuwvwsCTT2T/t8TO6n2m0/gjo+WnTC2xLF/KIuRHbrfV96\nUwjFCwhInRgmA+3YIA3n5wd7fZl2zywNxu3wvMFDJeKoFFdIzTFmzykRwQKBgQCY\nlPnqW4ClNGgkfssF5n9lzG2xv94oVWg8wDILvng8QEeWprYodouQqa4ul8YLLE4h\noZDf0fKLbrnVHIBJREiVsBUCTNBcgSBUfs9QLBbubkwZ9sfyHKawlTQY7TWQ/337\n30x7cS5+coKCeUokbo2z6TjuYsftzal4aXRCKLMp8QKBgQCWBLJmI9jt9tRQvyKY\nEtEQd/3qFtymDjXnLaFn/FclIDm2me0YNfp+kumOe2XapG/m5TOPoPIG+OKkEwCV\nQXeOxjlQnzwvvMuQDbTUyjeYk4AV/wYExQAymmP1d8AeqddJQ103FSvj1iNM8kN6\nvE3/wIDHWWmQa5AsHyGXfyebxA==\n-----END PRIVATE KEY-----\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app2-ingress.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIICzzCCAbcCFBIM4d2+RH+OQFbNyPH7vO7dj4aAMA0GCSqGSIb3DQEBCwUAMCQx\nIjAgBgNVBAMMGWFwcDIua2FseWFucmVkZHlkYWlkYS5jb20wHhcNMjIxMTI1MDIx\nMjMxWhcNNDIxMTIwMDIxMjMxWjAkMSIwIAYDVQQDDBlhcHAyLmthbHlhbnJlZGR5\nZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuKahBefa\nkPNvnvaxLkxnShxg5sX4P9FYlL1yCYLgrdW0DQWqouZ4nac31r4/YXeTJgXUCyQq\nHYy+tXNCangblaGFZXT71FcyoLT9RYE7p0/AePqHvP4gywJ/CEdk23obQak2Cuc/\ncHoo/tnUnArevmvGoNLgb6TkHWiHE85LB8LPi1ra9KABU7/xF9XpyJWfRtkG8A7G\njdbvggHVYD7l9oJyQB0+AcR7ddTbOk8D6CFHnMJa65/HyplErFWnrrKHvkKKqW6c\nf88kbp9qKPddmniwNOHqIu1QUADgJq97Y7fH9E0IZneMFWmGFRaXYxyUn4WziXEH\nnpnbFg73/8FoFwIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQAAFOCpdOVoREKR6S6u\nF10Jp4DpQQDXsfgCVAxA58MNMGForwNhK1E28w0GBDm4K02nyOqqQxDWiFp8Am/T\nr+vzF1BBwNsiZ1r5naQTA5Jh2XgGrjOQOJhRZbEE4RwOxWsTvEyUJn2S0bYtGfES\n5HjzZfq/0Gpxh3Z+oq8cINwzRzoirgf3Kk9SESvluxejnZMehVK5YIQp0IoM1Q5A\nt+ApJNyb107UxYLAfy8DSe5aMGtON8DYE+WLidL4CC1zRTABUjcBGsa09inGhgiF\nF5O8Eyc6LzA8EasmeJbWsUUYxUoLvq0OPXKq8Drjlt0SnttB4zpE8agtwzKKCodL\ntEoU\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app2-ingress.csr",
    "content": "-----BEGIN CERTIFICATE REQUEST-----\nMIICaTCCAVECAQAwJDEiMCAGA1UEAwwZYXBwMi5rYWx5YW5yZWRkeWRhaWRhLmNv\nbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALimoQXn2pDzb572sS5M\nZ0ocYObF+D/RWJS9cgmC4K3VtA0FqqLmeJ2nN9a+P2F3kyYF1AskKh2MvrVzQmp4\nG5WhhWV0+9RXMqC0/UWBO6dPwHj6h7z+IMsCfwhHZNt6G0GpNgrnP3B6KP7Z1JwK\n3r5rxqDS4G+k5B1ohxPOSwfCz4ta2vSgAVO/8RfV6ciVn0bZBvAOxo3W74IB1WA+\n5faCckAdPgHEe3XU2zpPA+ghR5zCWuufx8qZRKxVp66yh75CiqlunH/PJG6faij3\nXZp4sDTh6iLtUFAA4Cave2O3x/RNCGZ3jBVphhUWl2MclJ+Fs4lxB56Z2xYO9//B\naBcCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQBL3moJBmkteEAExoskvJrKbmW6\naMyMFZmHUhPYqe8IkFG2/QRwN0C3r9lU8+UX0Qt+XqVx8hzi2FFsQYyZ/gdhZ1NP\nOq60qH9Z95evTdIzN5FbkQiT1kgb1dGFs7WgcDLJM10dIeaq4M7MQrF3R99tEtbj\nEGiHQaXogqkIU5dcwoD9tZFB+7i7ymv6C19SSGHE/amIMFVp1hBfcKH7wxQ6wlZF\nLl5WRTtdrM7E685VYKmH8ccF5rB+oyAH9be3kO2NWYo48QyoSnqk82UvmzcL0H/P\n+DUD7EfXQvlK02HfpmJxpWjT9wKYuA/AUC21L0w/gZWhXwAUnQtsbDGKxfKj\n-----END CERTIFICATE REQUEST-----\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app2-ingress.key",
    "content": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC4pqEF59qQ82+e\n9rEuTGdKHGDmxfg/0ViUvXIJguCt1bQNBaqi5nidpzfWvj9hd5MmBdQLJCodjL61\nc0JqeBuVoYVldPvUVzKgtP1FgTunT8B4+oe8/iDLAn8IR2TbehtBqTYK5z9weij+\n2dScCt6+a8ag0uBvpOQdaIcTzksHws+LWtr0oAFTv/EX1enIlZ9G2QbwDsaN1u+C\nAdVgPuX2gnJAHT4BxHt11Ns6TwPoIUecwlrrn8fKmUSsVaeusoe+Qoqpbpx/zyRu\nn2oo912aeLA04eoi7VBQAOAmr3tjt8f0TQhmd4wVaYYVFpdjHJSfhbOJcQeemdsW\nDvf/wWgXAgMBAAECggEAC7VJFYJHihRdegNjZa+jhv/4pvlbjdRc3QWMIw1A6NTZ\nl0/KK40YjcqKEFw80ZXO50TMVq6C2x/PAdtelTiraxf0SOQbibHDvIvtWUhh+3Bj\noGgmTjYA505vtpssSnxaGRY9HoDeNWgRjGNMh15rFEDqNc1ZPMsESdcUZY2ZlVLJ\ntwuH3icBtl6pN5opTt8GNu3BgHflSbuKFQba46OubKhaWaSj2/jSw73jZtaABqei\neelmNnxRevDBBlvFr2WYzkFpG4hIS9gX7aBdDW3cjFsvaEwYK2oqI7BrLS0rmAwZ\nVX0Xsr5mt1Zz1SC+AETD/CmTNECCL6mJdlxLE0i72QKBgQDEoaE4e1VLVnWzQhc0\n5V14QQ71XwV17bZQKTQM57jUGwIdj4+koXeNdh7bejTbZdfPC10i9V2nEa5UMccS\nQOI1B72fBX1zWNstW8wH3ZxnBpf4xRsCFvcGJ90vicMx2lXMLkuYBxJMG1/AeaF1\nIAlmMv3LU0ITn/yaQRNCtXcB7wKBgQDwZvvldUOyPTaNqLA0QQ3i1CMvRvZqzL16\n/u0FpR8XyAw8S1YnheYwSC4O+9WZoxzUetsH2v8C29VZUt1vgCea5HabEH87kHVh\nc7ZrJcAuiE/9a82ne+ikDDKrEAqkVERrnLQ0mE//pXGojjU16dLRtnQMMx7wiL/9\nqG1P2kYEWQKBgQCHXvs+hnJ3VoPbsLGHYi1Sf//LX+rDgK9WSreh9tohdKKlNVPg\nNKW5B0xBL8Y6Echcq2cojSI3xg1tu4NhBrh1Z+ndFAuFIPRsKtmxxJlLuJdh1lk8\nvBC+9Szq8H4o0TbmRi0W8i9fpCzstxA4MaEm8g4WMDC6kBd5HzoiYAoZkwKBgCqs\n/XaEVJolh7OqCG2eRsrHgd94p3HaGqDk9EqWP2jHWHSzov2tJWnYxmRejFKTxCBs\nFsnUNITbZYpPzYNnqqAygmOQkCWQxWWhVva6Yt1f0WNZac6bjnbgu3XmiR0W4HaC\nAPN9PmZRhlW3uPZzJbuYug0YXhuxCvQKnC0awGcxAoGALGNotFeQOOJS1UKdLbKf\nBI7jMXnzNB+JqdMYZisCV8Ey2pc0NWe1sv3VB6nDKFA331EBkDNH/VpRluvOY8SQ\nhikUrY9YqIlYYXSie9LBgAXZ1LKB/1JknhqaQeCwWe4oVSVL8Jee9rcLZEXXfvch\nAzjalkzTL4lZQfOoLk6iDiw=\n-----END PRIVATE KEY-----\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app3-ingress.crt",
    "content": "-----BEGIN CERTIFICATE-----\nMIIC3zCCAccCFE5YVEQSuVzTlSNeeNV8BZuR6aWMMA0GCSqGSIb3DQEBCwUAMCwx\nKjAoBgNVBAMMIWFwcDMtZGVmYXVsdC5rYWx5YW5yZWRkeWRhaWRhLmNvbTAeFw0y\nMjExMjUwMjEzMDVaFw00MjExMjAwMjEzMDVaMCwxKjAoBgNVBAMMIWFwcDMtZGVm\nYXVsdC5rYWx5YW5yZWRkeWRhaWRhLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEP\nADCCAQoCggEBAKI6FJgH3TJ5ejRd7H/AvY+7EN0Vft/BQDoEfcjEUwrc7VM5/wgM\nExE5Uj1Z0aMvIAruEMq2Zxe+dDHmqircrLHzH5uPjni3iBQ7dxvOGZTcdIM6JIax\nrauZJ5XtyXWBDvWACag59LtmFNtLXQQjNJHOKHZpZgi3bG49t26Aw9F0EelUCNpX\nRhBMX8wzT+gz0B0RA+Lj5B2wwGm2z+GgcM8E0jScaTgBQfhQM/kHp8oSNifySO7S\np0z7zLc4h6fvfZhjPw/g7PYCksSm6wS0DJxzloeaxfudc5GHejlk13EX9FCnTx8s\nlWwQBBbpjv8Ht/J4QY3WKXDFR8od2wcUjKcCAwEAATANBgkqhkiG9w0BAQsFAAOC\nAQEAJ4tl2RjaRciW5aemwS1cGkGwyEZOqrkBRRTxzBhKu/XYgMzzfFDRux/04QQR\nw214mPwTKhsO4laUQ0d0457AS+2dyFsqLT46lQynXqZilr8IrSYENdnnZV7qqw7h\ne5Js/EUw2sCjtnQiz5W3Ty/+TuDN6vhLDeU6e+68TEOjqyVEym6pISNJekw1IAL6\ntO1nvb+Pj1Gq6tXbf8lXgr5ys6NU65sc6CpZQwD/FWWy0A4sLFjyHSproeNFxaln\nqBvj/5I4At0M6eJ6RtNGx9fem/VpOUWhjprsiYIDXBBbEOHmPKrc9u0I3VdfwREy\nMmm+XAsfEVITpRvcDwoUHVsgKw==\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app3-ingress.csr",
    "content": "-----BEGIN CERTIFICATE REQUEST-----\nMIICcTCCAVkCAQAwLDEqMCgGA1UEAwwhYXBwMy1kZWZhdWx0LmthbHlhbnJlZGR5\nZGFpZGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAojoUmAfd\nMnl6NF3sf8C9j7sQ3RV+38FAOgR9yMRTCtztUzn/CAwTETlSPVnRoy8gCu4QyrZn\nF750MeaqKtyssfMfm4+OeLeIFDt3G84ZlNx0gzokhrGtq5knle3JdYEO9YAJqDn0\nu2YU20tdBCM0kc4odmlmCLdsbj23boDD0XQR6VQI2ldGEExfzDNP6DPQHRED4uPk\nHbDAabbP4aBwzwTSNJxpOAFB+FAz+QenyhI2J/JI7tKnTPvMtziHp+99mGM/D+Ds\n9gKSxKbrBLQMnHOWh5rF+51zkYd6OWTXcRf0UKdPHyyVbBAEFumO/we38nhBjdYp\ncMVHyh3bBxSMpwIDAQABoAAwDQYJKoZIhvcNAQELBQADggEBAIYBvMVB+MMu3Imm\n8T8yEcxc1zCGsuTRNLyAaBHwbGUeqdxOncfxnPWoLLxgic3sUWtPrOgAnSkE7d2P\noIn9fkNojyfmHzgoH4WEjghSFVzqenq/ABqs/fcZBTIjHSXXSah+nuOjrc7W218/\n6RAszOj6+tyQOAxz4kDvK8W/Ykigk9+vlBSSnUGsTjmB4afCctJzo6k3YBiD9wFT\nev9IRdRPH1b+WzBP/HxBfkHsTPg3YEEa9ldMySJ514tHlJRHk9URbDj+fOCD6QG1\nIY7/IfdIz040xiXaXVOh8bs8qBWqpBjChvxVeW3HxtGQE8koMppFspD0gw1KEe3g\nddAOF+8=\n-----END CERTIFICATE REQUEST-----\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/SSL-SelfSigned-Certs/app3-ingress.key",
    "content": "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCiOhSYB90yeXo0\nXex/wL2PuxDdFX7fwUA6BH3IxFMK3O1TOf8IDBMROVI9WdGjLyAK7hDKtmcXvnQx\n5qoq3Kyx8x+bj454t4gUO3cbzhmU3HSDOiSGsa2rmSeV7cl1gQ71gAmoOfS7ZhTb\nS10EIzSRzih2aWYIt2xuPbdugMPRdBHpVAjaV0YQTF/MM0/oM9AdEQPi4+QdsMBp\nts/hoHDPBNI0nGk4AUH4UDP5B6fKEjYn8kju0qdM+8y3OIen732YYz8P4Oz2ApLE\npusEtAycc5aHmsX7nXORh3o5ZNdxF/RQp08fLJVsEAQW6Y7/B7fyeEGN1ilwxUfK\nHdsHFIynAgMBAAECggEAEqpVGUL6X+bbOTA/WFmgVevDnnRtMyiEj8hZgqKYHW1a\n/xLytYXSIc6zGCz/8mMnMCrBEtnW1cQLkXxFQwY99oGPNvJXBau0RAOtiiz2A4sz\n+q9TaY4C+fX2uIjx/4uYYYXYVptIfdFaf/rVWncEguwx+qHY5BLarnp6YwP8w9oE\nejTPG56QZIp3nv62iRpt4WK/+sMFf+LNRneg+yKinZf/vg404NfhPwadcZ19s4NC\njDNBtX9wC7vygUpeLGbF4Fb6oEOoW3o6i9GrX3mK1SAW138npE0uTTpd6qcjfKhd\nUbhw/yXSg3bkhZOnlx96+K5h4IRWy++4Sr+gI+xlrQKBgQC9a/g/bnnD4EhCcSgr\nM7wKW/6ZykRo6PVCrusJf+vVO+lOEDpXU+pu/GX24Vn6EdU4Vailz6t3LnDY0LEq\noGlhijGTvAh4mwVp5jLAywvEfHy9XGEKADQxFpfreCUJnU4Oluao2/uit2fjavsQ\nPOjEfx88LgkeXBpo2TMXH91ETQKBgQDbPyKsF0o7pyJo2EG74/tqBw4xNmoASwtW\nCHFigj4dGmT0ojQalJp6ruvNJsnUUhnZ51fh3qBJQOIphDbAFhtyO9Pmhy9uJ6up\nb53dsm/3GPTLowkC57WPDrs8W4h8eRilKrICd7fPGXDFydpBWavbdZxByqsQzxYc\ndaW7T+qewwKBgQCgBwJgXG4EnIuPjle4P+nB+rxaovYuh3kE0BADI45SxF2zNKSF\nOIDbKOLfsry4Nq6i/EMRaiPa+WIe2hiDAahl3kFKJVYmxhjJwc/o7uFPKzibJdtZ\nfpiZTBQmu4bW242hZ70QtWCetEHRcIUQz9R6hUcXKXFMs9Uf9TdjdukRFQKBgE6M\nuCdf0MC+iJ13nVVrwM+j53nKPQAN4unX7IeWkhprMnBTDMfZJd9+fAzsMLNZFtnz\nAJFz6YlVLbIiJFt9kCfFN44IMP4OSHpT+wNKwsKMtmee6cOYsHuok3x0btnpqOLE\nATLRIZGZU8YJI6D2N5RQ9sK7kb5b81gO7mnFoBFxAoGAZBtT5LZpGV+uguhBbmd9\nnXdOhzoU29zgpbPHwETSY1uk5DwVhJL7xucHrGBVimHKaP/t0hI2YnyofvHyybHA\nML8OeTKn8NEwW5G3nnP9p6m9nCwP7bjwMXb+H96jl9MYeyyTwa9IJajtJKrSga5H\nlkNlw3MuKofm0pfTpqQa5go=\n-----END PRIVATE KEY-----\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/kube-manifests/04-ingress-preshared-ssl.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-preshared-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: app3-default.kalyanreddydaida.com\n    # Pre-shared certificate resources  \n    ingress.gcp.kubernetes.io/pre-shared-cert: \"app1-ingress,app2-ingress,app3-ingress\"\nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80           \n  rules:\n    - host: app1.kalyanreddydaida.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n\n\n"
  },
  {
    "path": "49-GKE-Ingress-Preshared-SSL/kube-manifests/05-frontendconfig.yaml",
    "content": "apiVersion: networking.gke.io/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE"
  },
  {
    "path": "50-GKE-Ingress-Cloud-CDN/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress and Cloud CDN\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress and Cloud CDN\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n- Implement following Features for Ingress Service\n1. BackendConfig for Ingress Service\n2. Backend Service Timeout\n3. Connection Draining\n4. Ingress Service HTTP Access Logging\n5. Enable Cloud CDN\n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: cdn-demo-deployment\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: cdn-demo\n  template:\n    metadata:\n      labels:\n        app: cdn-demo\n    spec:\n      containers:\n        - name: cdn-demo\n          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0\n          ports:\n            - containerPort: 8080\n```\n\n## Step-03: 02-kubernetes-NodePort-service.yaml\n- Update Backend Config with annotation **cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'**\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: cdn-demo-nodeport-service\n  annotations:\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: cdn-demo\n  ports:\n    - port: 80\n      targetPort: 8080\n```\n## Step-04: 03-ingress.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cdn-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: ingress-cdn-demo.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: cdn-demo-nodeport-service\n      port:\n        number: 80     \n```\n\n## Step-05: 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  cdn:\n    enabled: true\n    cachePolicy:\n      includeHost: true\n      includeProtocol: true\n      includeQueryString: false  \n```\n\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get 
\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Service\nkubectl get ingress\n\n# List Backend Config\nkubectl get backendconfig\nkubectl describe backendconfig my-backendconfig\n```\n\n## Step-07: Verify Settings in Load Balancer\n- Go to Network Services -> Load Balancing -> Click on Load Balancer\n- Go to Backend -> Backend Services\n- Verify the Settings\n  - **Timeout:** 42 seconds\n  - **Connection draining timeout:** 62 seconds\n  - **Cloud CDN:** Enabled\n  - **Logging:** Enabled (sample rate: 1)\n\n## Step-08: Verify Cloud CDN\n- Go to Network Services -> Cloud CDN -> select the origin that was automatically created when the Ingress was deployed (for example, k8s1-c6634a10-default-cdn-demo-nodeport-service-80-553facae)\n- Verify Settings\n  - DETAILS TAB\n  - MONITORING TAB\n  - CACHING TAB\n\n## Step-09: Access Application and Verify Cache Age\n```t\n# Access Application\nhttp://<DNS-NAME-FROM-INGRESS-SERVICE>\n[or]\nhttp://<IP-ADDRESS-FROM-INGRESS-SERVICE-OUTPUT>\n\n# Access Application using DNS Name\nhttp://ingress-cdn-demo.kalyanreddydaida.com\ncurl -v http://ingress-cdn-demo.kalyanreddydaida.com/?cache=true\ncurl -v http://ingress-cdn-demo.kalyanreddydaida.com\ncurl -v http://ingress-cdn-demo.kalyanreddydaida.com\n\n## Important Note:\n1. The output shows the response headers and body.\n2. In the response headers, you can see that the content was cached. The Age header tells you how many seconds the content has been cached.\n\n## Sample Output\nKalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$ curl -v http://ingress-cdn-demo.kalyanreddydaida.com\n*   Trying 34.120.32.120:80...\n* Connected to ingress-cdn-demo.kalyanreddydaida.com (34.120.32.120) port 80 (#0)\n> GET / HTTP/1.1\n> Host: ingress-cdn-demo.kalyanreddydaida.com\n> User-Agent: curl/7.79.1\n> Accept: */*\n> \n* Mark bundle as not supporting multiuse\n< HTTP/1.1 200 OK\n< Content-Length: 76\n< Via: 1.1 google\n< Date: Thu, 23 Jun 2022 04:47:42 GMT\n< Content-Type: text/plain; charset=utf-8\n< Age: 1625\n< Cache-Control: max-age=3600,public\n< \nHello, world!\nVersion: 1.0.0\nHostname: cdn-demo-deployment-6f4c8f655d-htpsn\n* Connection #0 to host ingress-cdn-demo.kalyanreddydaida.com left intact\nKalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$ \n```\n\n## Step-10: Verify Cloud CDN Monitoring Tab\n- Go to Network Services -> Cloud CDN -> MONITORING Tab\n- Review Charts\n  - CDN Bandwidth\n  - CDN Hit Rate\n  - CDN Fill Rate\n  - CDN Egress Rate\n  - Requests\n  - Response Codes\n\n## Step-11: Verify Ingress Service Logs in Cloud Logging\n- Go to Cloud Logging -> Logs Explorer -> Log Fields -> Select\n- Resource Type: Cloud HTTP Load Balancer\n- Severity: Info\n- Project ID: kdaida123\n- Review the logs\n- Access the application and review the logs in parallel\n```t\n# Access Application\ncurl -v http://ingress-cdn-demo.kalyanreddydaida.com\n```\n\n## Step-12: Verify Ingress Service Logs in Cloud Logging using Another Approach\n- Go to Cloud Logging -> Logs Dashboard\n- Go to Chart -> HTTP/S Load Balancer Logs By Severity -> Click on **VIEW LOGS**\n\n## References\n- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)\n- [Caching overview](https://cloud.google.com/cdn/docs/caching#cacheability)\n"
  },
  {
    "path": "50-GKE-Ingress-Cloud-CDN/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: cdn-demo-deployment\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: cdn-demo\n  template:\n    metadata:\n      labels:\n        app: cdn-demo\n    spec:\n      containers:\n        - name: cdn-demo\n          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0\n          ports:\n            - containerPort: 8080\n\n\n   "
  },
  {
    "path": "50-GKE-Ingress-Cloud-CDN/kube-manifests/02-kubernetes-NodePort-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: cdn-demo-nodeport-service\n  annotations:\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: cdn-demo\n  ports:\n    - port: 80\n      targetPort: 8080"
  },
  {
    "path": "50-GKE-Ingress-Cloud-CDN/kube-manifests/03-ingress.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cdn-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: ingress-cdn-demo.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: cdn-demo-nodeport-service\n      port:\n        number: 80     \n"
  },
  {
    "path": "50-GKE-Ingress-Cloud-CDN/kube-manifests/04-backendconfig.yaml",
    "content": "apiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  cdn:\n    enabled: true\n    cachePolicy:\n      includeHost: true\n      includeProtocol: true\n      includeQueryString: false  \n\n# sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged \n# and 1.0 means 100% of packets are logged. This field is only relevant if enable is set \n# to true. sampleRate is an optional field, but if it's configured then enable: true must \n# also be set or else it is interpreted as enable: false.    "
  },
  {
    "path": "51-GKE-Ingress-ClientIP-Affinity/01-kube-manifests-with-clientip-affinity/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: cdn-demo-deployment\nspec:\n  replicas: 4\n  selector:\n    matchLabels:\n      app: cdn-demo\n  template:\n    metadata:\n      labels:\n        app: cdn-demo\n    spec:\n      containers:\n        - name: cdn-demo\n          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0\n          ports:\n            - containerPort: 8080\n\n\n   "
  },
  {
    "path": "51-GKE-Ingress-ClientIP-Affinity/01-kube-manifests-with-clientip-affinity/02-kubernetes-NodePort-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: cdn-demo-nodeport-service\n  annotations:\n    #cloud.google.com/backend-config: '{\"ports\": {\"80\":\"my-backendconfig\"}}'\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: cdn-demo\n  ports:\n    - port: 80\n      targetPort: 8080"
  },
  {
    "path": "51-GKE-Ingress-ClientIP-Affinity/01-kube-manifests-with-clientip-affinity/03-ingress.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cdn-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: ingress-with-clientip-affinity.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: cdn-demo-nodeport-service\n      port:\n        number: 80     \n"
  },
  {
    "path": "51-GKE-Ingress-ClientIP-Affinity/01-kube-manifests-with-clientip-affinity/04-backendconfig.yaml",
    "content": "apiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    affinityType: \"CLIENT_IP\"  # Disable at Step-07\n    #affinityType: \"\"          # Enable at Step-07\n"
  },
  {
    "path": "51-GKE-Ingress-ClientIP-Affinity/02-kube-manifests-without-clientip-affinity/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: cdn-demo-deployment2\nspec:\n  replicas: 4\n  selector:\n    matchLabels:\n      app: cdn-demo2\n  template:\n    metadata:\n      labels:\n        app: cdn-demo2\n    spec:\n      containers:\n        - name: cdn-demo2\n          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0\n          ports:\n            - containerPort: 8080\n\n\n   "
  },
  {
    "path": "51-GKE-Ingress-ClientIP-Affinity/02-kube-manifests-without-clientip-affinity/02-kubernetes-NodePort-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: cdn-demo-nodeport-service2\n  annotations:\n    #cloud.google.com/backend-config: '{\"ports\": {\"80\":\"my-backendconfig2\"}}'\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig2\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: cdn-demo2\n  ports:\n    - port: 80\n      targetPort: 8080"
  },
  {
    "path": "51-GKE-Ingress-ClientIP-Affinity/02-kube-manifests-without-clientip-affinity/03-ingress.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cdn-demo2\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip2\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: ingress-without-clientip-affinity.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: cdn-demo-nodeport-service2\n      port:\n        number: 80     \n"
  },
  {
    "path": "51-GKE-Ingress-ClientIP-Affinity/02-kube-manifests-without-clientip-affinity/04-backendconfig.yaml",
    "content": "apiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig2\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n\n\n# sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged \n# and 1.0 means 100% of packets are logged. This field is only relevant if enable is set \n# to true. sampleRate is an optional field, but if it's configured then enable: true must \n# also be set or else it is interpreted as enable: false.    "
  },
  {
    "path": "51-GKE-Ingress-ClientIP-Affinity/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n- Implement following Features for Ingress Service\n- BackendConfig - CLIENT_IP Affinity for Ingress Service\n- We are going to create two projects\n  - **Project-01:** CLIENT_IP Affinity enabled\n  - **Project-02:** CLIENT_IP Affinity disabled\n\n## Step-02: Create External IP Address using gcloud\n```t\n# Create External IP Address 1 (IF NOT CREATED - ALREADY CREATED IN PREVIOUS SECTIONS)\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip1 --global\n\n# Create External IP Address 2\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip2 --global\n\n# Describe External IP Address to get\ngcloud compute addresses describe ADDRESS_NAME --global\ngcloud compute addresses describe gke-ingress-extip2 --global\n\n# Verify\nGo to VPC Network -> IP Addresses -> External IP Address\n```\n\n## Step-03: Project-01: Review YAML Manifests\n- **Project Folder:** 01-kube-manifests-with-clientip-affinity\n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-NodePort-service.yaml\n- 03-ingress.yaml\n- 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    affinityType: \"CLIENT_IP\"\n```\n\n## Step-04: Project-02: Review YAML Manifests\n- **Project Folder:** 02-kube-manifests-without-clientip-affinity\n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-NodePort-service.yaml\n- 03-ingress.yaml\n- 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig2\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n```\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Project-01: Deploy Kubernetes Manifests 
\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Project-01: Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests-with-clientip-affinity\n\n# Project-02: Deploy Kubernetes Manifests\nkubectl apply -f 02-kube-manifests-without-clientip-affinity\n\n# Verify Deployments\nkubectl get deploy\n\n# Verify Pods\nkubectl get pods\n\n# Verify Node Port Services\nkubectl get svc\n\n# Verify Ingress Services\nkubectl get ingress\n\n# Verify Backend Config\nkubectl get backendconfig\n\n# Project-01: Verify Load Balancer Settings\nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting\nObservation:\nClient IP Affinity setting should be in enabled state\n\n# Project-02: Verify Load Balancer Settings\nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting\nObservation:\nClient IP Affinity setting should be in disabled state\n```\n\n## Step-06: Access Application\n```t\n# Project-01: Access Application using DNS or ExtIP\nhttp://ingress-with-clientip-affinity.kalyanreddydaida.com\nhttp://<EXT-IP-1>\ncurl ingress-with-clientip-affinity.kalyanreddydaida.com\nObservation:\n1. Requests will always go to the same pod due to the CLIENT_IP affinity we configured\n\n# Project-02: Access Application using DNS or ExtIP\nhttp://ingress-without-clientip-affinity.kalyanreddydaida.com\nhttp://<EXT-IP-2>\ncurl ingress-without-clientip-affinity.kalyanreddydaida.com\nObservation:\n1. Requests will be load balanced across the 4 pods created as part of the \"cdn-demo2\" deployment.\n```\n\n## Step-07: How to remove a setting from FrontendConfig or BackendConfig\n- To revoke an Ingress feature, you must explicitly disable the feature configuration in the FrontendConfig or BackendConfig CRD\n- **Important Note:** To clear or disable a previously enabled configuration, set the field's value to an empty string (\"\") or to a Boolean value of false, depending on the field type.\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    #affinityType: \"CLIENT_IP\"  # Disable at Step-07\n    affinityType: \"\"          # Enable at Step-07\n```\n\n## Step-08: Apply Changes and Verify\n```t\n# Apply Changes\nkubectl apply -f 01-kube-manifests-with-clientip-affinity\n\n# Verify Load Balancer\nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting\nObservation:\nShould be disabled\n```\n\n## Step-09: Deleting a FrontendConfig or BackendConfig\n- [Deleting a FrontendConfig or BackendConfig](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#deleting_a_frontendconfig_or_backendconfig)\n\n## Step-10: Clean-Up\n```t\n# Project-01: Delete Kubernetes Resources\nkubectl delete -f 01-kube-manifests-with-clientip-affinity\n\n# Project-02: Delete Kubernetes Resources\nkubectl delete -f 02-kube-manifests-without-clientip-affinity\n```\n\n## Step-11: Rollback 04-backendconfig.yaml\n- Put back `affinityType: \"CLIENT_IP\"` so the manifests are ready for the next demo.\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: 
my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    affinityType: \"CLIENT_IP\"  # Disable at Step-07\n    #affinityType: \"\"          # Enable at Step-07\n```\n\n\n\n## References\n- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)\n\n\n"
  },
  {
    "path": "52-GKE-Ingress-Cookie-Affinity/01-kube-manifests-with-cookie-affinity/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: cdn-demo-deployment\nspec:\n  replicas: 4\n  selector:\n    matchLabels:\n      app: cdn-demo\n  template:\n    metadata:\n      labels:\n        app: cdn-demo\n    spec:\n      containers:\n        - name: cdn-demo\n          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0\n          ports:\n            - containerPort: 8080\n\n\n   "
  },
  {
    "path": "52-GKE-Ingress-Cookie-Affinity/01-kube-manifests-with-cookie-affinity/02-kubernetes-NodePort-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: cdn-demo-nodeport-service\n  annotations:\n    #cloud.google.com/backend-config: '{\"ports\": {\"80\":\"my-backendconfig\"}}'\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: cdn-demo\n  ports:\n    - port: 80\n      targetPort: 8080"
  },
  {
    "path": "52-GKE-Ingress-Cookie-Affinity/01-kube-manifests-with-cookie-affinity/03-ingress.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cdn-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: ingress-with-cookie-affinity.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: cdn-demo-nodeport-service\n      port:\n        number: 80     \n"
  },
  {
    "path": "52-GKE-Ingress-Cookie-Affinity/01-kube-manifests-with-cookie-affinity/04-backendconfig.yaml",
    "content": "apiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    affinityType: \"GENERATED_COOKIE\"\n    affinityCookieTtlSec: 50 # TTL of 50 seconds\n\n# sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged \n# and 1.0 means 100% of packets are logged. This field is only relevant if enable is set \n# to true. sampleRate is an optional field, but if it's configured then enable: true must \n# also be set or else it is interpreted as enable: false.    "
  },
  {
    "path": "52-GKE-Ingress-Cookie-Affinity/02-kube-manifests-without-cookie-affinity/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: cdn-demo-deployment2\nspec:\n  replicas: 4\n  selector:\n    matchLabels:\n      app: cdn-demo2\n  template:\n    metadata:\n      labels:\n        app: cdn-demo2\n    spec:\n      containers:\n        - name: cdn-demo2\n          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0\n          ports:\n            - containerPort: 8080\n\n\n   "
  },
  {
    "path": "52-GKE-Ingress-Cookie-Affinity/02-kube-manifests-without-cookie-affinity/02-kubernetes-NodePort-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: cdn-demo-nodeport-service2\n  annotations:\n    #cloud.google.com/backend-config: '{\"ports\": {\"80\":\"my-backendconfig2\"}}'\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig2\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: cdn-demo2\n  ports:\n    - port: 80\n      targetPort: 8080"
  },
  {
    "path": "52-GKE-Ingress-Cookie-Affinity/02-kube-manifests-without-cookie-affinity/03-ingress.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cdn-demo2\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip2\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: ingress-without-cookie-affinity.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: cdn-demo-nodeport-service2\n      port:\n        number: 80     \n"
  },
  {
    "path": "52-GKE-Ingress-Cookie-Affinity/02-kube-manifests-without-cookie-affinity/04-backendconfig.yaml",
    "content": "apiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig2\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n\n\n# sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged \n# and 1.0 means 100% of packets are logged. This field is only relevant if enable is set \n# to true. sampleRate is an optional field, but if it's configured then enable: true must \n# also be set or else it is interpreted as enable: false.    "
  },
  {
    "path": "52-GKE-Ingress-Cookie-Affinity/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress Cookie Affinity\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Cookie Affinity\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n- Implement following Features for Ingress Service\n- BackendConfig - GENERATED_COOKIE Affinity for Ingress Service\n- We are going to create two projects\n  - **Project-01:** GENERATED_COOKIE Affinity enabled\n  - **Project-02:** GENERATED_COOKIE Affinity disabled\n\n## Step-02: Create External IP Address using gcloud\n```t\n# Create External IP Address 1 (IF NOT CREATED - ALREADY CREATED IN PREVIOUS SECTIONS)\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip1 --global\n\n# Create External IP Address 2\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip2 --global\n\n# Describe External IP Address to get\ngcloud compute addresses describe ADDRESS_NAME --global\ngcloud compute addresses describe gke-ingress-extip2 --global\n\n# Verify\nGo to VPC Network -> IP Addresses -> External IP Address\n```\n\n## Step-03: Project-01: Review YAML Manifests\n- **Project Folder:** 01-kube-manifests-with-cookie-affinity \n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-NodePort-service.yaml\n- 03-ingress.yaml\n- 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    affinityType: \"GENERATED_COOKIE\"\n```\n\n## Step-04: Project-02: Review YAML Manifests\n- **Project Folder:** 02-kube-manifests-without-cookie-affinity\n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-NodePort-service.yaml\n- 03-ingress.yaml\n- 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig2\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n```\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Project-01: Deploy 
\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Project-01: Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests-with-cookie-affinity\n\n# Project-02: Deploy Kubernetes Manifests\nkubectl apply -f 02-kube-manifests-without-cookie-affinity\n\n# Verify Deployments\nkubectl get deploy\n\n# Verify Pods\nkubectl get pods\n\n# Verify Node Port Services\nkubectl get svc\n\n# Verify Ingress Services\nkubectl get ingress\n\n# Verify Backend Config\nkubectl get backendconfig\n\n# Project-01: Verify Load Balancer Settings\nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Cookie Affinity Setting\nObservation:\nCookie Affinity setting should be in enabled state\n\n# Project-02: Verify Load Balancer Settings\nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Cookie Affinity Setting\nObservation:\nCookie Affinity setting should be in disabled state\n```\n\n## Step-06: Access Application\n```t\n# Project-01: Access Application using DNS or ExtIP\nhttp://ingress-with-cookie-affinity.kalyanreddydaida.com\nhttp://<EXT-IP-1>\nObservation:\n1. Requests will keep going to the same pod due to the GENERATED_COOKIE affinity we configured (the browser replays the generated cookie)\n\n# Project-02: Access Application using DNS or ExtIP\nhttp://ingress-without-cookie-affinity.kalyanreddydaida.com\nhttp://<EXT-IP-2>\nObservation:\n1. Requests will be load balanced across the 4 pods created as part of the \"cdn-demo2\" deployment.\n```\n\n## Step-07: Clean-Up\n```t\n# Project-01: Delete Kubernetes Resources\nkubectl delete -f 01-kube-manifests-with-cookie-affinity\n\n# Project-02: Delete Kubernetes Resources\nkubectl delete -f 02-kube-manifests-without-cookie-affinity\n```\n\n## References\n- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)\n"
  },
  {
    "path": "53-GKE-Ingress-HealthCheck-with-backendConfig/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress Custom Health Checks\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Implement Ingress Custom Health Checks\n- Comment `Readiness Probe` in Kubernetes Deployment.\n- Add Custom Health Checks in `kind: BackendConfig` Kubernetes Resource\n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          #readinessProbe:\n          #  httpGet:\n          #    scheme: HTTP\n          #    path: /index.html\n          #    port: 80\n          #  initialDelaySeconds: 10\n          #  periodSeconds: 15\n          #  timeoutSeconds: 5    \n```\n\n## Step-03: 02-kubernetes-NodePort-service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\n    #cloud.google.com/backend-config: '{\"ports\": {\"80\":\"my-backendconfig\"}}' \n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n```\n\n## Step-04: 03-ingress.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-custom-healthcheck\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \nspec:          \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80     \n```\n## Step-05: 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  healthCheck:\n    checkIntervalSec: 5 # Default is 5 seconds\n    timeoutSec: 5 # The value of timeoutSec must be less than or equal to the checkIntervalSec\n    healthyThreshold: 2 # Default value 2\n    unhealthyThreshold: 2 # Default value 2\n    type: HTTP # The BackendConfig only supports creating health checks using the HTTP, HTTPS, or HTTP2 protocols\n    requestPath: /index.html\n    port: 80\n```\n## Step-06: 
\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get po\n\n# List Services\nkubectl get svc\n\n# List Backend Configs\nkubectl get backendconfig\n\n# List Ingress Service\nkubectl get ingress\n```\n\n## Step-07: Verify Load Balancer Details\n- Go to Network Services -> Load Balancing -> Load Balancer\n- Backends -> Backend -> Click on the **health check** link\n- Verify the health check details\n\n## Step-08: Access Application\n```t\n# List Ingress Service\nkubectl get ingress\n\n# Access Application\nhttp://<ADDRESS-FROM-GET-INGRESS-OUTPUT>\n```\n\n## Step-09: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n```\n\n## References\n- [Ingress Health Checks](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks)\n- [Custom Health Check Configuration](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health)"
  },
  {
    "path": "53-GKE-Ingress-HealthCheck-with-backendConfig/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          #readinessProbe:\n          #  httpGet:\n          #    scheme: HTTP\n          #    path: /index.html\n          #    port: 80\n          #  initialDelaySeconds: 10\n          #  periodSeconds: 15\n          #  timeoutSeconds: 5    "
  },
  {
    "path": "53-GKE-Ingress-HealthCheck-with-backendConfig/kube-manifests/02-kubernetes-NodePort-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\n    #cloud.google.com/backend-config: '{\"ports\": {\"80\":\"my-backendconfig\"}}' \n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "53-GKE-Ingress-HealthCheck-with-backendConfig/kube-manifests/03-ingress.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-custom-healthcheck\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \nspec:          \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80     \n"
  },
  {
    "path": "53-GKE-Ingress-HealthCheck-with-backendConfig/kube-manifests/04-backendconfig.yaml",
    "content": "apiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  healthCheck:\n    checkIntervalSec: 5 # Default is 5 seconds\n    timeoutSec: 5 # The value of timeoutSec must be less than or equal to the checkIntervalSec\n    healthyThreshold: 2 # Default value 2\n    unhealthyThreshold: 2 # Default value 2\n    type: HTTP # The BackendConfig only supports creating health checks using the HTTP, HTTPS, or HTTP2 protocols\n    requestPath: /index.html\n    port: 80\n\n# sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged \n# and 1.0 means 100% of packets are logged. This field is only relevant if enable is set \n# to true. sampleRate is an optional field, but if it's configured then enable: true must \n# also be set or else it is interpreted as enable: false.    "
  },
  {
    "path": "54-GKE-Ingress-InternalLB/01-kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app1/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "54-GKE-Ingress-InternalLB/01-kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /app2/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5            \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app2-nginx-nodeport-service\n  labels:\n    app: app2-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app2-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n\n   "
  },
  {
    "path": "54-GKE-Ingress-InternalLB/01-kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5              \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "54-GKE-Ingress-InternalLB/01-kube-manifests/04-Ingress-internal-lb.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-internal-lb\n  annotations:\n    # If the class annotation is not specified it defaults to \"gce\".\n    # gce: external load balancer\n    # gce-internal: internal load balancer  \n    # Internal Load Balancer\n    kubernetes.io/ingress.class: \"gce-internal\"  \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n           \n      \n    "
  },
  {
    "path": "54-GKE-Ingress-InternalLB/02-kube-manifests-curl/01-curl-pod.yml",
    "content": "apiVersion: v1\nkind: Pod\nmetadata:\n  name: curl-pod\nspec:\n  containers:\n  - name: curl\n    image: curlimages/curl \n    command: [ \"sleep\", \"600\" ]"
  },
  {
    "path": "54-GKE-Ingress-InternalLB/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Ingress Internal Load Balancer\ndescription: Implement GCP Google Kubernetes Engine GKE Internal Load Balancer with Ingress\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Ingress Internal Load Balancer\n\n## Step-02: Review Kubernetes Deployment manifests\n- 01-Nginx-App1-Deployment-and-NodePortService.yaml\n- 02-Nginx-App2-Deployment-and-NodePortService.yaml\n- 03-Nginx-App3-Deployment-and-NodePortService.yaml\n\n## Step-03: 04-Ingress-internal-lb.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-internal-lb\n  annotations:\n    # If the class annotation is not specified it defaults to \"gce\".\n    # gce: external load balancer\n    # gce-internal: internal load balancer  \n    # Internal Load Balancer\n    kubernetes.io/ingress.class: \"gce-internal\"  \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: /app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: /app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80                 \n```\n\n## Step-04: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get po\n\n# List Services\nkubectl get svc\n\n# List Backend Configs\nkubectl get backendconfig\n\n# List Ingress Service\nkubectl get ingress\n\n# Describe Ingress Service\nkubectl describe ingress ingress-internal-lb\n\n# Verify Load Balancer\nGo to Network Services -> Load Balancing -> Load Balancer\n```\n\n## Step-05: Review Curl Kubernetes Manifests\n- **Project Folder:** 02-kube-manifests-curl\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: curl-pod\nspec:\n  containers:\n  - name: curl\n    image: curlimages/curl \n    command: [ \"sleep\", \"600\" ]\n```\n\n## Step-06: Deply Curl-pod and Verify Internal LB\n```t\n# Deploy curl-pod\nkubectl apply -f 02-kube-manifests-curl\n\n# Will open up a terminal session into the container\nkubectl exec -it curl-pod -- sh\n\n# App1 Curl Test\ncurl http://<INTERNAL-INGRESS-LB-IP>/app1/index.html\n\n# App2 Curl Test\ncurl http://<INTERNAL-INGRESS-LB-IP>/app2/index.html\n\n# App3 Curl Test\ncurl http://<INTERNAL-INGRESS-LB-IP>\n```\n\n## Step-07: Clean-Up\n```t\n# Delete Kubernetes Manifests\nkubectl delete -f 01-kube-manifests\nkubectl delete -f 02-kube-manifests-curl\n```\n\n## References\n- [Ingress for Internal HTTP(S) Load Balancing](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-ilb)"
  },
  {
    "path": "55-GKE-Ingress-Cloud-Armor/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Ingress with Cloud Armor\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress with Cloud Armor\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n3. Registered Domain using Google Cloud Domains\n4. External DNS Controller installed and ready to use\n```t\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n5. Verify if External IP Address is created\n```t\n# List External IP Address\ngcloud compute addresses list\n\n# Describe External IP Address \ngcloud compute addresses describe gke-ingress-extip1 --global\n```\n\n\n## Step-01: Introduction\n- Ingress Service with Cloud Armor\n\n## Step-02: Create Cloud Armor Policy\n- Go to Network Security -> Cloud Armor -> CREATE POLICY\n### Configure Policy\n- **Name:** cloud-armor-policy-1\n- **Description:** Cloud Armor Demo with GKE Ingress\n- **Policy type:** Backend security policy\n- **Default rule action:** Deny\n- **Deny Status:** 403(Forbidden)\n- Click on **NEXT STEP**\n### Add More Rules (Optional)\n- Leave to default \n- NO NEW RULES OTHER THAN EXISTING DEFAULT RULE\n- ALL IP ADDRESS -> DENY -> With 403 ERROR -> Priority 2,147,483,647\t\n- Click on **NEXT STEP**\n### Add Policy to Targets (Optional)\n- Leave to default \n- Click on **NEXT STEP**\n### Advanced configurations (Adaptive Protection) (optional)\n- Click on **Enable** checkbox\n- Click on **DONE**\n- Click on **CREATE POLICY**\n\n## Step-03: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: cloud-armor-demo-deployment\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: cloud-armor-demo\n  template:\n    metadata:\n      labels:\n        app: cloud-armor-demo\n    spec:\n      containers:\n        - name: cloud-armor-demo\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5    \n```\n## Step-04: 02-kubernetes-NodePort-service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: cloud-armor-demo-nodeport-service\n  annotations:\n    cloud.google.com/backend-config: '{\"ports\": {\"80\":\"my-backendconfig\"}}'\nspec:\n  type: NodePort\n  selector:\n    app: cloud-armor-demo\n  ports:\n    - port: 80\n      targetPort: 80\n```\n## Step-05: 03-ingress.yaml\n```yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cloud-armor-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: 
## Step-06: 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  securityPolicy:\n    name: \"cloud-armor-policy-1\"\n```\n## Step-07: Deploy Kubernetes Manifests and Verify\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get po\n\n# List Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# List Backendconfig\nkubectl get backendconfig\n\n# Access Application\nhttp://<DNS-NAME>\nhttp://cloudarmor-ingress.kalyanreddydaida.com\nObservation:\n1. We should get a 403 Forbidden error.\n2. This is expected because we have configured a Cloud Armor policy to block all IP addresses with a 403 error\n```\n\n## Step-08: Make a note of the Public IP for your Internet Connection\n- Go to [URL: www.whatismyip.com](https://www.whatismyip.com/) and make a note of your local desktop's public IP\n- If you are behind a company or organization proxy, this may not work reliably. \n- I am using my home internet connection\n\n\n## Step-09: Add new rule in Cloud Armor Policy\n- Go to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> RULES -> ADD RULE\n- **Description:** Allow-from-my-desktop\n- **Mode:** Basic Mode(IP Address / Ranges only)\n- **Match:** 49.206.52.84 (my internet connection's public IP)\n- **Action:** Allow\n- **Priority:** 1\n- Click on **ADD**\n- WAIT FOR 5 MINUTES for the new policy to go live\n\n## Step-10: Access Application\n```t\n# Access Application from local desktop\nhttp://<DNS-NAME>\nhttp://cloudarmor-ingress.kalyanreddydaida.com\ncurl http://cloudarmor-ingress.kalyanreddydaida.com\nObservation:\n1. Application access should be successful\n```\n\n## Step-11: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Delete Cloud Armor Policy\nGo to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> DELETE\n```\n\n\n## References\n- https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#cloud_armor\n- https://cloud.google.com/armor/docs/security-policy-overview\n- https://cloud.google.com/armor/docs/integrating-cloud-armor\n- https://cloud.google.com/armor/docs/configure-security-policies"
  },
  {
    "path": "55-GKE-Ingress-Cloud-Armor/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: cloud-armor-demo-deployment\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: cloud-armor-demo\n  template:\n    metadata:\n      labels:\n        app: cloud-armor-demo\n    spec:\n      containers:\n        - name: cloud-armor-demo\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5             \n\n\n   "
  },
  {
    "path": "55-GKE-Ingress-Cloud-Armor/kube-manifests/02-kubernetes-NodePort-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: cloud-armor-demo-nodeport-service\n  annotations:\n    #cloud.google.com/backend-config: '{\"ports\": {\"80\":\"my-backendconfig\"}}'\n    cloud.google.com/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: cloud-armor-demo\n  ports:\n    - port: 80\n      targetPort: 80"
  },
  {
    "path": "55-GKE-Ingress-Cloud-Armor/kube-manifests/03-ingress.yaml",
    "content": "apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-cloud-armor-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io/hostname: cloudarmor-ingress.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: cloud-armor-demo-nodeport-service\n      port:\n        number: 80     \n"
  },
  {
    "path": "55-GKE-Ingress-Cloud-Armor/kube-manifests/04-backendconfig.yaml",
    "content": "apiVersion: cloud.google.com/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  securityPolicy:\n    name: \"cloud-armor-policy-1\"\n\n# sampleRate: Specify a value from 0.0 through 1.0, where 0.0 means no packets are logged \n# and 1.0 means 100% of packets are logged. This field is only relevant if enable is set \n# to true. sampleRate is an optional field, but if it's configured then enable: true must \n# also be set or else it is interpreted as enable: false.    "
  },
  {
    "path": "56-GKE-Artifact-Registry/01-Docker-Image/Dockerfile",
    "content": "FROM nginx\nCOPY index.html /usr/share/nginx/html"
  },
  {
    "path": "56-GKE-Artifact-Registry/01-Docker-Image/index.html",
    "content": "<!DOCTYPE html>\n<html>\n   <body style=\"background-color:rgb(194, 98, 89);\">\n      <h1>Welcome to StackSimplify</h1>\n      <p>Google Kubernetes Engine</p>\n      <p>Application Version: V1</p>\n   </body>\n</html>"
  },
  {
    "path": "56-GKE-Artifact-Registry/02-kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: \n  name: myapp1-deployment\nspec: \n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: \n      name: myapp1-pod\n      labels: \n        app: myapp1  \n    spec:\n      containers:\n        - name: myapp1-container\n          #image: us-central1-docker.pkg.dev/<GCP-PROJECT-ID>/<ARTIFACT-REPO>/myapp1:v1\n          image: us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1\n          ports: \n            - containerPort: 80  \n    "
  },
  {
    "path": "56-GKE-Artifact-Registry/02-kube-manifests/02-kubernetes-loadBalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "56-GKE-Artifact-Registry/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE Artifact Registry\ndescription: Implement GCP Google Kubernetes Engine GKE Artifact Registry\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal.\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Build a Docker Image\n- Create a Docker repository in Google Artifact Registry.\n- Set up authentication.\n- Push an image to the repository.\n- Pull the image from the repository and Create Deployment in GKE Cluster\n- Access Sample Application in browser and verify\n\n\n## Step-02: Create Dockefile\n- **Dockerfile**\n```t\nFROM nginx\nCOPY index.html /usr/share/nginx/html\n```\n\n## Step-03: Build Docker Image\n```t\n# Change Directory\ncd google-kubernetes-engine/56-GKE-Artifact-Registry/\ncd 01-Docker-Image\n\n# Build Docker Image\ndocker build -t myapp1:v1 .\n\n# List Docker Image\ndocker images myapp1\n```\n\n## Step-04: Run Docker Image\n```t\n# Run Docker Image\ndocker run --name myapp1 -p 80:80 -d myapp1:v1\n\n# Access in browser\nhttp://localhost\n\n# List Running Docker Containers\ndocker ps\n\n# Stop Docker Container\ndocker stop myapp1\n\n# List All Docker Containers (Stopped Containers)\ndocker ps -a\n\n# Delete Stopped Container\ndocker rm myapp1\n\n# List All Docker Containers (Stopped Containers)\ndocker ps -a\n```\n\n## Step-05: Create Google Artifact Registry\n- Go to Artifact Registry -> Repositories -> Create\n```t\n# Create Google Artifact Registry \nName: gke-artifact-repo1\nFormat: Docker\nRegion: us-central-1\nEncryption: Google-managed encryption key\nClick on Create\n```\n\n## Step-06: Configure Google Artifact Repository authentication\n```t\n# Google Artifact Repository authentication\n## To set up authentication to Docker repositories in the region us-central1\ngcloud auth configure-docker <LOCATION>-docker.pkg.dev\ngcloud auth configure-docker us-central1-docker.pkg.dev\n```\n\n## Step-07: Tag & push the Docker image to Google Artifact Registry\n```t\n# Tag the Docker Image\ndocker tag myapp1:v1 <LOCATION>-docker.pkg.dev/<GOOGLE-PROJECT-ID>/<GOOGLE-ARTIFACT-REGISTRY-NAME>/<IMAGE-NAME>:<IMAGE-TAG>\n\n# Replace Values for docker tag command \n# - LOCATION, \n# - GOOGLE-PROJECT-ID, \n# - GOOGLE-ARTIFACT-REGISTRY-NAME, \n# - IMAGE-NAME, \n# - IMAGE-TAG\ndocker tag myapp1:v1 us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1\n\n# Push the Docker Image to Google Artifact Registry\ndocker push us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1\n```\n\n## Step-08: Verify the Docker Image on Google Artifact Registry\n- Go to Google Artifact Registry -> Repositories -> gke-artifact-repo1\n- Review **myapp1** Docker Image\n\n## Step-09: Update Docker Image and Review kube-manifests\n- **Project-Folder:** 02-kube-manifests\n```yaml\n# Dcoker Image\nimage: us-central1-docker.pkg.dev/<GCP-PROJECT-ID>/<ARTIFACT-REPO>/myapp1:v1\n\n# Update Docker Image in 01-kubernetes-deployment.yaml\nimage: us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1\n```\n\n## Step-10: Deploy kube-manifests\n```t\n# Deploy kube-manifests\nkubectl apply -f 02-kube-manifests\n\n# List 
## Step-10: Deploy kube-manifests\n```t\n# Deploy kube-manifests\nkubectl apply -f 02-kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# Describe Pod\nkubectl describe pod <POD-NAME>\n\n## Observation - Verify Events command \"kubectl describe pod <POD-NAME>\"\n### We should see image pulled from \"us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1\"\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  86s   default-scheduler  Successfully assigned default/myapp1-deployment-5f8d5c6f48-pb686 to gke-standard-cluster-1-default-pool-2c852f67-46hv\n  Normal  Pulling    85s   kubelet            Pulling image \"us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1\"\n  Normal  Pulled     81s   kubelet            Successfully pulled image \"us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1\" in 4.285567138s\n  Normal  Created    81s   kubelet            Created container myapp1-container\n  Normal  Started    80s   kubelet            Started container myapp1-container\n\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<SVC-EXTERNAL-IP>\n```\n\n## Step-11: Clean-Up\n```t\n# Undeploy sample App\nkubectl delete -f 02-kube-manifests\n```\n\n\n## References\n- [Google Artifact Registry](https://cloud.google.com/artifact-registry/docs/overview)"
  },
  {
    "path": "57-GKE-Continuous-Integration/01-SSH-Keys/id_gcp_cloud_source",
    "content": "-----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW\nQyNTUxOQAAACBMtrPvYdsBE/05pRtCNK6HfcySVB3HsupIh6b1TwgctgAAAKAPWMfAD1jH\nwAAAAAtzc2gtZWQyNTUxOQAAACBMtrPvYdsBE/05pRtCNK6HfcySVB3HsupIh6b1Twgctg\nAAAEAbFRNQOxoqetg+QLLX4IAWQEMlwFjk0Al4395DbuHdQky2s+9h2wET/TmlG0I0rod9\nzJJUHcey6kiHpvVPCBy2AAAAFmRrYWx5YW5yZWRkeUBnbWFpbC5jb20BAgMEBQYH\n-----END OPENSSH PRIVATE KEY-----\n"
  },
  {
    "path": "57-GKE-Continuous-Integration/01-SSH-Keys/id_gcp_cloud_source.pub",
    "content": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEy2s+9h2wET/TmlG0I0rod9zJJUHcey6kiHpvVPCBy2 dkalyanreddy@gmail.com\n"
  },
  {
    "path": "57-GKE-Continuous-Integration/02-Docker-Image/Dockerfile",
    "content": "FROM nginx\nCOPY index.html /usr/share/nginx/html"
  },
  {
    "path": "57-GKE-Continuous-Integration/02-Docker-Image/index.html",
    "content": "<!DOCTYPE html>\n<html>\n   <body style=\"background-color:rgb(196, 84, 74);\">\n      <h1>Welcome to StackSimplify</h1>\n      <p>Google Kubernetes Engine</p>\n      <p>Application Version: V1</p>\n   </body>\n</html>"
  },
  {
    "path": "57-GKE-Continuous-Integration/03-cloudbuild-yaml/cloudbuild.yaml",
    "content": "steps:\n# This step builds the container image.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Build\n  args:\n  - 'build'\n  - '-t'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'\n  - '.'\n\n# This step pushes the image to Artifact Registry\n# The PROJECT_ID and SHORT_SHA variables are automatically\n# replaced by Cloud Build.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Push\n  args:\n  - 'push'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'"
  },
  {
    "path": "57-GKE-Continuous-Integration/04-kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 1\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          #image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:d1c3b88\n          image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:3d5c45b\n          ports: \n            - containerPort: 80  \n    "
  },
  {
    "path": "57-GKE-Continuous-Integration/04-kube-manifests/02-kubernetes-loadBalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "57-GKE-Continuous-Integration/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE CI\ndescription: Implement GCP Google Kubernetes Engine GKE Continuous Integration\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n\n## Step-01: Introduction\n- Implement Continuous Integration for GKE Workloads using\n- Google Cloud Source\n- Google Cloud Build\n- Google Artifact Repository\n\n\n## Step-02: Enable APIs in Google Cloud\n```t\n# Enable APIs in Google Cloud\ngcloud services enable container.googleapis.com \\\n    cloudbuild.googleapis.com \\\n    sourcerepo.googleapis.com \\\n    artifactregistry.googleapis.com\n\n# Google Cloud Services \nGKE: container.googleapis.com     \nCloud Build: cloudbuild.googleapis.com\nCloud Source: sourcerepo.googleapis.com\nArtifact Registry: artifactregistry.googleapis.com\n```\n\n## Step-03: Create Artifact Repository\n```t\n# List Artifact Repositories\ngcloud artifacts repositories list\n\n# Create Artifact Repository\ngcloud artifacts repositories create myapps-repository \\\n  --repository-format=docker \\\n  --location=us-central1 \n\n# List Artifact Repositories\ngcloud artifacts repositories list\n\n# Describe Artifact Repository \ngcloud artifacts repositories describe myapps-repository --location=us-central1\n```\n\n## Step-04: Install Git client on local desktop (if not present)\n```t\n# Download and Install Git Client and Installed\nhttps://git-scm.com/downloads\n```\n\n## Step-05: Create SSH Keys for Git Repo Access\n- [Generating SSH Key Pair](https://cloud.google.com/source-repositories/docs/authentication#generate_a_key_pair)\n```t\n# Change Directory\ncd 01-SSH-Keys\n\n# Create SSH Keys\nssh-keygen -t [KEY_TYPE] -C \"[USER_EMAIL]\"\nKEY_TYPE: rsa, ecdsa, ed25519\nUSER_EMAIL: dkalyanreddy@gmail.com \n\n# Replace Values KEY_TYPE, USER_EMAIL\nssh-keygen -t ed25519 -C \"dkalyanreddy@gmail.com\"\nProvide the File Name as \"id_gcp_cloud_source\"\n\n## Sample Output\nKalyans-Mac-mini:01-SSH-Keys kalyanreddy$ ssh-keygen -t ed25519 -C \"dkalyanreddy@gmail.com\"\nGenerating public/private ed25519 key pair.\nEnter file in which to save the key (/Users/kalyanreddy/.ssh/id_ed25519): id_gcp_cloud_source\nEnter passphrase (empty for no passphrase): \nEnter same passphrase again: \nYour identification has been saved in id_gcp_cloud_source\nYour public key has been saved in id_gcp_cloud_source.pub\nThe key fingerprint is:\nSHA256:YialyCj3XaSa4b8ewk4bcK1hXxO7DDM5uiCP1J2TOZ0 dkalyanreddy@gmail.com\nThe key's randomart image is:\n+--[ED25519 256]--+\n|                 |\n|                 |\n|      . o        |\n| o . + + o       |\n|o = B % S        |\n|...B.&=X.o       |\n|....%B+Eo        |\n|.+ + *o.         |\n|. . +.+.         
## Step-06: Review SSH Keys (Public and Private Keys)\n```t\n# Change Directory\ncd 01-SSH-Keys\n\n# Review Private Key: id_gcp_cloud_source\ncat id_gcp_cloud_source\n\n# Review Public Key: id_gcp_cloud_source.pub \ncat id_gcp_cloud_source.pub \n```\n\n## Step-07: Update SSH Public Key in Google Cloud Source\n- Go to -> Source Repositories -> 3 Dots -> Manage SSH Keys -> Register SSH Key\n- [Google Cloud Source URL](https://source.cloud.google.com/)\n```t\n# Key Name\nName: gke-course\nKey: Output from command \"cat id_gcp_cloud_source.pub\" in previous step. Put content from Public Key\n```\n- Click on **Register**\n\n\n## Step-08: Update SSH Private Key in Git Config\n- Update SSH Private Key in your local desktop Git Config\n```t\n# Copy the SSH private key from the course folder to the \".ssh\" folder in your home directory\ncd 01-SSH-Keys\ncp id_gcp_cloud_source $HOME/.ssh  \n\n# Change Directory (Your local desktop home directory)\ncd $HOME/.ssh  \n\n# Verify File in \"$HOME/.ssh\"\nls -lrta id_gcp_cloud_source\n\n# Verify existing git \"config\" file\ncat config\n\n# Backup any existing \"config\" file\ncp config config_bkup_before_cloud_source\n\n# Update \"config\" file to point to \"id_gcp_cloud_source\" private key\nvi config\n\n## Sample Output after changes\nKalyans-Mac-mini:.ssh kalyanreddy$ cat config\nHost *\n  AddKeysToAgent yes\n  UseKeychain yes\n  IdentityFile ~/.ssh/id_gcp_cloud_source\nKalyans-Mac-mini:.ssh kalyanreddy$ \n\n# Backup config with cloudsource\ncp config config_with_cloud_source_key\n```\n\n## Step-09: Update Git Global Config in your local desktop\n```t\n# List Global Git Config\ngit config --list\n\n# Update Global Git Config\ngit config --global user.email \"YOUR_EMAIL_ADDRESS\"\ngit config --global user.name \"YOUR_NAME\"\n\n# Replace YOUR_EMAIL_ADDRESS, YOUR_NAME\ngit config --global user.name \"Kalyan Reddy Daida\"\ngit config --global user.email \"dkalyanreddy@gmail.com\"\n\n# List Global Git Config\ngit config --list\n```\n\n## Step-10: Create Git repositories in Cloud Source\n```t\n# List Cloud Source Repository\ngcloud source repos list\n\n# Create Git repositories in Cloud Source\ngcloud source repos create myapp1-app-repo\n\n# List Cloud Source Repository\ngcloud source repos list\n\n# Verify using Cloud Console\nSearch for -> Source Repositories \nhttps://source.cloud.google.com/repos\n```\n
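As an optional check before cloning in Step-11, the new repository can be inspected from the CLI:\n```t\n# Describe the Cloud Source repository (shows its resource URL)\ngcloud source repos describe myapp1-app-repo\n```\n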
## Step-11: Clone Cloud Source Git Repository, Commit a Change, Push to Remote Repo and Verify\n```t\n# Change Directory \ncd course-repos\n\n# Verify using Cloud Console\nSearch for -> Source Repositories \nhttps://source.cloud.google.com/repos\nGo to Repo -> myapp1-app-repo -> SSH Authentication\n\n# Copy the git clone command and run \ngit clone ssh://dkalyanreddy@gmail.com@source.developers.google.com:2022/p/kdaida123/r/myapp1-app-repo\n\n# Change Directory\ncd myapp1-app-repo\n\n# Create a simple readme file\ntouch README.md\necho \"# GKE CI Demo\" > README.md\nls -lrta\n\n# Add Files and do local commit\ngit add .\ngit commit -am \"First Commit\"\n\n# Push file to Cloud Source Git Repo (Remote Repo)\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps://source.cloud.google.com/repos\nGo to Repo -> myapp1-app-repo \n```\n\n## Step-12: Review Files in 02-Docker-Image folder\n1. Dockerfile\n2. index.html\n\n## Step-13: Copy files from 02-Docker-Image folder to Git Repo\n```t\n# Change Directory\ncd 57-GKE-Continuous-Integration/02-Docker-Image\n\n# Copy Files to Git repo \"myapp1-app-repo\"\n1. Dockerfile\n2. index.html\n\n# Local Git Commit and Push to Remote Repo\ngit add .\ngit commit -am \"Second Commit\"\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps://source.cloud.google.com/repos\nGo to Repo -> myapp1-app-repo \n```\n\n## Step-14: Create a container image with Cloud Build and store it in Artifact Registry using the gcloud builds command\n```t\n# Change Directory (Git App Repo: myapp1-app-repo)\ncd myapp1-app-repo\n\n# Get latest git commit id (current branch)\ngit rev-parse HEAD\n\n# Get latest git commit id first 7 chars (current branch)\ngit rev-parse --short=7 HEAD\n\n# Ensure you are in the local git repo folder where \"Dockerfile\" and \"index.html\" are present\ncd myapp1-app-repo \n\n# Create a Cloud Build build based on the latest commit \ngcloud builds submit --tag=\"us-central1-docker.pkg.dev/${PROJECT_ID}/${APP_ARTIFACT_REPO}/myapp1:${COMMIT_ID}\" .\n\n# Replace Values ${PROJECT_ID}, ${APP_ARTIFACT_REPO}, ${COMMIT_ID}\ngcloud builds submit --tag=\"us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:6f7d338\" .\n```\n\n## Step-15: Review Cloud Build YAML file\n```yaml\nsteps:\n# This step builds the container image.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Build\n  args:\n  - 'build'\n  - '-t'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'\n  - '.'\n\n# This step pushes the image to Artifact Registry\n# The PROJECT_ID and SHORT_SHA variables are automatically\n# replaced by Cloud Build.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Push\n  args:\n  - 'push'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'\n```\n
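`$SHORT_SHA` is only populated automatically for triggered builds, so to test this config by hand you supply the value yourself. A hedged sketch (gcloud permits overriding SHORT_SHA via `--substitutions` for manual builds; run from the repo root):\n```t\n# Manually run cloudbuild.yaml, passing SHORT_SHA explicitly\ngcloud builds submit --config=cloudbuild.yaml \\\n    --substitutions=SHORT_SHA=$(git rev-parse --short=7 HEAD) .\n```\n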
## Step-16: Copy cloudbuild.yaml to Git Repo\n```t\n# Change Directory\ncd 57-GKE-Continuous-Integration/03-cloudbuild-yaml\n\n# Copy Files to Git repo\n1. cloudbuild.yaml\n\n# Local Git Commit and Push to Remote Repo\ngit add .\ngit commit -am \"Third Commit\"\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps://source.cloud.google.com/repos\nGo to Repo -> myapp1-app-repo \n```\n\n## Step-17: Create Continuous Integration Pipeline in Cloud Build\n- Go to Cloud Build -> Dashboard -> Region: us-central1 -> Click on **SET UP BUILD TRIGGERS** [OR]\n- Go to Cloud Build -> TRIGGERS -> Click on **CREATE TRIGGER** \n- **Name:** myapp1-ci\n- **Region:** us-central1\n- **Description:** myapp1 Continuous Integration Pipeline\n- **Tags:** environment=dev\n- **Event:** Push to a branch\n- **Source:** myapp1-app-repo\n- **Branch:** main (Auto-populated)\n- **Configuration:** Cloud Build configuration file (yaml or json)\n- **Location:** Repository\n- **Cloud Build Configuration file location:** /cloudbuild.yaml\n- **Approval:** leave unchecked\n- **Service account:** leave to default\n- Click on **CREATE**\n\n\n## Step-18: Make a simple change in \"index.html\" and push the changes to Git Repo\n```t\n# Change Directory\ncd myapp1-app-repo\n\n# Update file index.html (change V1 to V2)\n<p>Application Version: V2</p>\n\n# Local Git Commit and Push to Remote Repo\ngit status\ngit add .\ngit commit -am \"V2 Commit\"\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps://source.cloud.google.com/repos\nGo to Repo -> myapp1-app-repo \n```\n\n## Step-19: Verify Cloud Build CI Pipeline\n```t\n# Verify Cloud Build\n1. Go to Cloud Build -> Dashboard or go directly to Cloud Build -> History\n2. Click on Build History -> View All\n3. Verify \"BUILD LOG\"\n4. Verify \"EXECUTION DETAILS\"\n5. Verify \"VIEW RAW\"\n\n# Verify Artifact Repository\n1. Go to Artifact Registry -> myapps-repository -> myapp1\n2. You should find the docker image pushed to Artifact Registry\n```\n\n## Step-20: Review Kubernetes Manifests\n- **Project Folder:** 04-kube-manifests\n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-loadBalancer-service.yaml\n\n## Step-21: Update Container Image to V1 Docker Image we built\n```yaml\n# 01-kubernetes-deployment.yaml: Update \"image\" \n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:d1c3b88\n          ports: \n            - containerPort: 80  \n```\n
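Once the Deployment exists (after Step-22), the image can also be updated imperatively instead of editing the manifest for each new tag; a minimal sketch (deployment and container names from 04-kube-manifests, tag illustrative):\n```t\n# Update the deployment image in place\nkubectl set image deployment/myapp1-deployment \\\n    myapp1-container=us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:d1c3b88\n```\n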
You should see \"Application Version: V1\"\n```\n\n## Step-23: Update Container Image to V2 Docker Image we built\n```yaml\n# 01-kubernetes-deployment.yaml: Update \"image\" \n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:3af592c\n          ports: \n            - containerPort: 80  \n```\n\n## Step-24: Update Kubernetes Deployment and Verify\n```t\n# Deply Kubernetes Manifests (Updated Image Tag)\nkubectl apply -f 04-kube-manifests\n\n# Restart Kubernetes Deployment (Optional - if it is not updated)\nkubectl rollout restart deployment myapp1-deployment\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# Describe Pod (Review Events to understand from where Docker Image downloaded)\nkubectl describe pod <POD-NAME>\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<EXTERNAL-IP-GET-SERVICE-OUTPUT>\nObservation:\n1. You should see \"Application Version: V2\"\n```\n\n## Step-25: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 04-kube-manifests\n```\n\n## Step-26: How to add Approvals before starting the Build Process ?\n### Step-26-01: Enable Approval in Cloud Build\n- Go to Cloud Build -> Triggers -> myapp1-ci\n- Check the box in **Approval: Require approval before build executes**\n\n### Step-26-02: Add Users to Cloud Build Approver IAM Role\n- Go to IAM & Admin -> GRANT ACCESS \n- **Add Principal:** dkalyanreddy@gmail.com\n- **Assign Roles:** Cloud Build Approver\n- Click on **SAVE**\n\n## Step-27: Update the Git Repo to test Build Approval Process\n```t\n# Change Directroy \ncd myapp1-app-repo\n\n# Update file index.html (change V2 to V3)\n<p>Application Version: V3</p>\n\n# Local Git Commit and Push to Remote Repo\ngit status\ngit add .\ngit commit -am \"V3 Commit\"\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps://source.cloud.google.com/repos\nGo to Repo -> myapp1-app-repo \n```\n\n## Step-28: Verify and Approve the Build\n- Go to Cloud Build -> Triggers -> myapp1-ci -> Select and Approve\n- Verify if build is successful.\n\n\n\n\n## References\n- [Cloud Build for Docker Images](https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build)"
  },
  {
    "path": "58-GKE-Continuous-Delivery-with-CloudBuild/01-myapp1-k8s-repo/cloudbuild-delivery.yaml",
    "content": "# [START cloudbuild-delivery]\nsteps:\n# This step deploys the new version of our container image\n# in the \"standard-cluster-private-1\" Google Kubernetes Engine cluster.\n- name: 'gcr.io/cloud-builders/kubectl'\n  id: Deploy\n  args:\n  - 'apply'\n  - '-f'\n  - 'kubernetes.yaml'\n  env:\n  - 'CLOUDSDK_COMPUTE_REGION=us-central1'\n  #- 'CLOUDSDK_COMPUTE_ZONE=us-central1-c'  \n  - 'CLOUDSDK_CONTAINER_CLUSTER=standard-cluster-private-1' # Provide GKE Cluster Name\n\n# This step copies the applied manifest to the production branch\n# The COMMIT_SHA variable is automatically\n# replaced by Cloud Build.\n- name: 'gcr.io/cloud-builders/git'\n  id: Copy to production branch\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n    set -x && \\\n    # Configure Git to create commits with Cloud Build's service account\n    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)') && \\\n    # Switch to the production branch and copy the kubernetes.yaml file from the candidate branch\n    git fetch origin production && git checkout production && \\\n    git checkout $COMMIT_SHA kubernetes.yaml && \\\n    # Commit the kubernetes.yaml file with a descriptive commit message\n    git commit -m \"Manifest from commit $COMMIT_SHA\n    $(git log --format=%B -n 1 $COMMIT_SHA)\" && \\\n    # Push the changes back to Cloud Source Repository\n    git push origin production\n# [END cloudbuild-delivery]"
  },
  {
    "path": "58-GKE-Continuous-Delivery-with-CloudBuild/02-Source-Writer-IAM-Role/myapp1-k8s-repo-policy.yaml",
    "content": "bindings:\n- members:\n  - serviceAccount:1057267725005@cloudbuild.gserviceaccount.com\n  role: roles/source.writer\n"
  },
  {
    "path": "58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/Dockerfile",
    "content": "FROM nginx\nCOPY index.html /usr/share/nginx/html"
  },
  {
    "path": "58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/README.md",
    "content": "# GKE CI Demo\n"
  },
  {
    "path": "58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/cloudbuild-trigger-cd.yaml",
    "content": "# [START cloudbuild - Docker Image Build]\nsteps:\n# This step builds the container image.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Build\n  args:\n  - 'build'\n  - '-t'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'\n  - '.'\n\n# This step pushes the image to Artifact Registry\n# The PROJECT_ID and SHORT_SHA variables are automatically\n# replaced by Cloud Build.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Push\n  args:\n  - 'push'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'\n# [END cloudbuild - Docker Image Build]\n\n\n# [START cloudbuild-trigger-cd]\n# This step clones the myapp1-k8s-repo repository\n- name: 'gcr.io/cloud-builders/gcloud'\n  id: Clone myapp1-k8s-repo repository\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n    gcloud source repos clone myapp1-k8s-repo && \\\n    cd myapp1-k8s-repo && \\\n    git checkout candidate && \\\n    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)')\n# This step generates the new manifest\n- name: 'gcr.io/cloud-builders/gcloud'\n  id: Generate Kubernetes manifest\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n     sed \"s/GOOGLE_CLOUD_PROJECT/${PROJECT_ID}/g\" kubernetes.yaml.tpl | \\\n     sed \"s/COMMIT_SHA/${SHORT_SHA}/g\" > myapp1-k8s-repo/kubernetes.yaml\n# This step pushes the manifest back to myapp1-k8s-repo\n- name: 'gcr.io/cloud-builders/gcloud'\n  id: Push manifest\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n    set -x && \\\n    cd myapp1-k8s-repo && \\\n    git add kubernetes.yaml && \\\n    git commit -m \"Deploying image us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:${SHORT_SHA}\n    Built from commit ${COMMIT_SHA} of repository myapp1-app-repo\n    Author: $(git log --format='%an <%ae>' -n 1 HEAD)\" && \\\n    git push origin candidate\n# [END cloudbuild-trigger-cd]"
  },
  {
    "path": "58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/cloudbuild.yaml",
    "content": "# [START cloudbuild - Docker Image Build]\nsteps:\n# This step builds the container image.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Build\n  args:\n  - 'build'\n  - '-t'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'\n  - '.'\n\n# This step pushes the image to Artifact Registry\n# The PROJECT_ID and SHORT_SHA variables are automatically\n# replaced by Cloud Build.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Push\n  args:\n  - 'push'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'\n# [END cloudbuild - Docker Image Build]\n\n\n# [START cloudbuild-trigger-cd]\n# This step clones the myapp1-k8s-repo repository\n- name: 'gcr.io/cloud-builders/gcloud'\n  id: Clone myapp1-k8s-repo repository\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n    gcloud source repos clone myapp1-k8s-repo && \\\n    cd myapp1-k8s-repo && \\\n    git checkout candidate && \\\n    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)')\n# This step generates the new manifest\n- name: 'gcr.io/cloud-builders/gcloud'\n  id: Generate Kubernetes manifest\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n     sed \"s/GOOGLE_CLOUD_PROJECT/${PROJECT_ID}/g\" kubernetes.yaml.tpl | \\\n     sed \"s/COMMIT_SHA/${SHORT_SHA}/g\" > myapp1-k8s-repo/kubernetes.yaml\n# This step pushes the manifest back to myapp1-k8s-repo\n- name: 'gcr.io/cloud-builders/gcloud'\n  id: Push manifest\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n    set -x && \\\n    cd myapp1-k8s-repo && \\\n    git add kubernetes.yaml && \\\n    git commit -m \"Deploying image us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:${SHORT_SHA}\n    Built from commit ${COMMIT_SHA} of repository myapp1-app-repo\n    Author: $(git log --format='%an <%ae>' -n 1 HEAD)\" && \\\n    git push origin candidate\n# [END cloudbuild-trigger-cd]"
  },
  {
    "path": "58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/index.html",
    "content": "<!DOCTYPE html>\n<html>\n   <body style=\"background-color:rgb(196, 84, 74);\">\n      <h1>Welcome to StackSimplify</h1>\n      <p>Google Kubernetes Engine</p>\n      <p>Application Version: V101</p>\n   </body>\n</html>"
  },
  {
    "path": "58-GKE-Continuous-Delivery-with-CloudBuild/03-myapp1-app-repo/kubernetes.yaml.tpl",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: myapp1-deployment\n  labels:\n    app: myapp1\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: myapp1\n  template:\n    metadata:\n      labels:\n        app: myapp1\n    spec:\n      containers:\n      - name: myapp1\n        image: us-central1-docker.pkg.dev/GOOGLE_CLOUD_PROJECT/myapps-repository/myapp1:COMMIT_SHA\n        ports:\n        - containerPort: 80\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer\n  selector:\n    app: myapp1\n  ports:\n  - protocol: TCP\n    port: 80\n    targetPort: 80\n"
  },
  {
    "path": "58-GKE-Continuous-Delivery-with-CloudBuild/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine GKE CD\ndescription: Implement GCP Google Kubernetes Engine GKE Continuous Delivery Pipeline\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Implement Continuous Delivery Pipeline for GKE Workloads using\n- Google Cloud Source\n- Google Cloud Build\n- Google Artifact Repository\n\n\n## Step-02: Assign Kubernetes Engine Developer IAM Role to Cloud Build\n- To deploy the application in your Googke GKE Kubernetes cluster, **Cloud Build** needs the **Kubernetes Engine Developer Identity and Access Management Role.**\n```t\n# Verify if changes took place using Google Cloud Console    \n1. Go to Cloud Build -> Settings -> SERVICE ACCOUNT -> Service account permissions\n2. Kubernetes Engine\t-> Should be in \"DISABLED\" state\n\n# Get current project PROJECT_ID\nPROJECT_ID=\"$(gcloud config get-value project)\"\necho ${PROJECT_ID}\n\n# Get Google Cloud Project Number\nPROJECT_NUMBER=\"$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')\"\necho ${PROJECT_NUMBER}\n\n# Associate Kubernetes Engine Developer IAM Role to Cloud Build\ngcloud projects add-iam-policy-binding ${PROJECT_NUMBER} \\\n    --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \\\n    --role=roles/container.developer\n\n# Verify if changes took place using Google Cloud Console    \n1. Go to Cloud Build -> Settings -> SERVICE ACCOUNT -> Service account permissions\n2. 
## Step-03: Review File cloudbuild-delivery.yaml\n- **File Location:** 01-myapp1-k8s-repo\n```yaml\n# [START cloudbuild-delivery]\nsteps:\n# This step deploys the new version of our container image\n# in the \"standard-cluster-private-1\" Google Kubernetes Engine cluster.\n- name: 'gcr.io/cloud-builders/kubectl'\n  id: Deploy\n  args:\n  - 'apply'\n  - '-f'\n  - 'kubernetes.yaml'\n  env:\n  - 'CLOUDSDK_COMPUTE_REGION=us-central1'\n  #- 'CLOUDSDK_COMPUTE_ZONE=us-central1-c'  \n  - 'CLOUDSDK_CONTAINER_CLUSTER=standard-cluster-private-1' # Provide GKE Cluster Name\n\n# This step copies the applied manifest to the production branch\n# The COMMIT_SHA variable is automatically\n# replaced by Cloud Build.\n- name: 'gcr.io/cloud-builders/git'\n  id: Copy to production branch\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n    set -x && \\\n    # Configure Git to create commits with Cloud Build's service account\n    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)') && \\\n    # Switch to the production branch and copy the kubernetes.yaml file from the candidate branch\n    git fetch origin production && git checkout production && \\\n    git checkout $COMMIT_SHA kubernetes.yaml && \\\n    # Commit the kubernetes.yaml file with a descriptive commit message\n    git commit -m \"Manifest from commit $COMMIT_SHA\n    $(git log --format=%B -n 1 $COMMIT_SHA)\" && \\\n    # Push the changes back to Cloud Source Repository\n    git push origin production\n# [END cloudbuild-delivery]\n```\n## Step-04: Create and Initialize myapp1-k8s-repo Repo, Copy Files and Push to Cloud Source Repository\n```t\n# Change Directory \ncd course-repos\n\n# List Cloud Source Repositories\ngcloud source repos list\n\n# Create Cloud Source Git Repo: myapp1-k8s-repo\ngcloud source repos create myapp1-k8s-repo\n\n# Initialize myapp1-k8s-repo Repo\ngcloud source repos clone myapp1-k8s-repo\n\n# Copy Files to myapp1-k8s-repo\ncloudbuild-delivery.yaml from \"58-GKE-Continuous-Delivery-with-CloudBuild/01-myapp1-k8s-repo\"\n\n# Change Directory\ncd myapp1-k8s-repo\n\n# Commit Changes\ngit add .\ngit commit -m \"Create cloudbuild-delivery.yaml for k8s deployment\"\n\n# Create a candidate branch and push to be available in Cloud Source Repositories.\ngit checkout -b candidate\ngit push origin candidate\n\n# Create a production branch and push to be available in Cloud Source Repositories.\ngit checkout -b production\ngit push origin production\n```\n\n## Step-05: Grant the Cloud Source Repository Writer IAM role to the Cloud Build service account\n- Grant the Cloud Source Repository Writer IAM role to the Cloud Build service account for the **myapp1-k8s-repo** repository.\n\n```t\n# Get current project PROJECT_ID\nPROJECT_ID=\"$(gcloud config get-value project)\"\necho ${PROJECT_ID}\n\n# GET GCP PROJECT NUMBER\nPROJECT_NUMBER=\"$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')\"\necho ${PROJECT_NUMBER}\n\n# Change Directory    \ncd 02-Source-Writer-IAM-Role\n\n# Clean up the file (make it empty - no content)\n>myapp1-k8s-repo-policy.yaml\n\n# Create IAM Policy YAML File\ncat >myapp1-k8s-repo-policy.yaml <<EOF\nbindings:\n- members:\n  - serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com\n  role: roles/source.writer\nEOF\n\n# Verify IAM Policy File created with PROJECT_NUMBER\ncat myapp1-k8s-repo-policy.yaml\n\n# Set IAM Policy to Cloud Source Repository: myapp1-k8s-repo\ngcloud source repos set-iam-policy \\\n    myapp1-k8s-repo myapp1-k8s-repo-policy.yaml\n```\n
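Optionally verify the policy took effect; a small check:\n```t\n# Confirm the Cloud Build service account holds roles/source.writer on the repo\ngcloud source repos get-iam-policy myapp1-k8s-repo\n```\n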
set-iam-policy \\\n    myapp1-k8s-repo myapp1-k8s-repo-policy.yaml\n```\n\n## Step-06: Create the trigger for the continuous delivery pipeline\n- Go to Cloud Build -> Triggers -> Region: us-central-1 -> Click on **CREATE TRIGGER**\n- **Name:** myapp1-cd\n- **Region:** us-central1\n- **Description:** myapp1 Continuous Deployment Pipeline\n- **Tags:** environment=dev\n- **Event:** Push to a branch\n- **Source:** myapp1-k8s-repo\n- **Branch:** candidate \n- **Configuration:** Cloud Build configuration file (yaml or json)\n- **Location:** Repository\n- **Cloud Build Configuration file location:** cloudbuild-delivery.yaml\n- **Approval:** leave unchecked\n- **Service account:** leave to default\n- Click on **CREATE**\n\n\n## Step-06: Review files in folder 03-myapp1-app-repo\n1. Dockerfile\n2. index.html\n3. kubernetes.yaml.tpl\n4. cloudbuild-trigger-cd.yaml\n5. cloudbuild.yaml (Just a copy of cloudbuild-trigger-cd.yaml)\n```yaml\n# [START cloudbuild - Docker Image Build]\nsteps:\n# This step builds the container image.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Build\n  args:\n  - 'build'\n  - '-t'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'\n  - '.'\n\n# This step pushes the image to Artifact Registry\n# The PROJECT_ID and SHORT_SHA variables are automatically\n# replaced by Cloud Build.\n- name: 'gcr.io/cloud-builders/docker'\n  id: Push\n  args:\n  - 'push'\n  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'\n# [END cloudbuild - Docker Image Build]\n\n\n# [START cloudbuild-trigger-cd]\n# This step clones the myapp1-k8s-repo repository\n- name: 'gcr.io/cloud-builders/gcloud'\n  id: Clone myapp1-k8s-repo repository\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n    gcloud source repos clone myapp1-k8s-repo && \\\n    cd myapp1-k8s-repo && \\\n    git checkout candidate && \\\n    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)')\n# This step generates the new manifest\n- name: 'gcr.io/cloud-builders/gcloud'\n  id: Generate Kubernetes manifest\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n     sed \"s/GOOGLE_CLOUD_PROJECT/${PROJECT_ID}/g\" kubernetes.yaml.tpl | \\\n     sed \"s/COMMIT_SHA/${SHORT_SHA}/g\" > myapp1-k8s-repo/kubernetes.yaml\n# This step pushes the manifest back to myapp1-k8s-repo\n- name: 'gcr.io/cloud-builders/gcloud'\n  id: Push manifest\n  entrypoint: /bin/sh\n  args:\n  - '-c'\n  - |\n    set -x && \\\n    cd myapp1-k8s-repo && \\\n    git add kubernetes.yaml && \\\n    git commit -m \"Deploying image us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:${SHORT_SHA}\n    Built from commit ${COMMIT_SHA} of repository myapp1-app-repo\n    Author: $(git log --format='%an <%ae>' -n 1 HEAD)\" && \\\n    git push origin candidate\n# [END cloudbuild-trigger-cd]\n```\n\n\n## Step-07: Update index.html in myapp1-app-repo, Push and Verify\n```t\n# Change Directory (GIT REPO)\ncd myapp1-app-repo\n\n# Update index.html\n      <p>Application Version: V4</p>\n\n# Add additional files to myapp1-app-repo\n1. kubernetes.yaml.tpl\n2. cloudbuild-trigger-cd.yaml\n3. 
cloudbuild.yaml (Just a copy of cloudbuild-trigger-cd.yaml)\n\n\n# Git Commit and Push to Remote Repository\ngit status\ngit add .\ngit commit -am \"V4 Commit CI CD\"\ngit push\n\n# Verify Cloud Source Repository: myapp1-app-repo\nhttps://source.cloud.google.com/\nmyapp1-app-repo\n\n# Verify Cloud Source Repository: myapp1-k8s-repo\nhttps://source.cloud.google.com/\nmyapp1-k8s-repo\nBranch: Candidate\nYou should find \"kubernetes.yaml\" file with latest commit code for Image from \"myapp1-app-repo\"\n```\n\n## Step-08: Verify myapp1-ci and myapp1-cd builds\n- Go to Cloud Build -> History\n- Review latest **myapp1-ci** build steps\n- Review latest **myapp1-cd** build steps\n\n## Step-09: Verify Files in Cloud Source Repositories\n- Go to Cloud Source \n- **myapp1-app-repo:** New files should be present\n- **myapp1-k8s-repo:** kubernetes.yaml file with values replaced related to GOOGLE GOOGLE_CLOUD_PROJECT and COMMIT_SHA should be replaced `image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:2a3e72a`\n\n## Step-10: Verify Google Artifact Registry\n- Go to Artifact Registry -> Repositories -> myapps-repository -> myapp1\n- Shoud see a new docker image\n\n## Step-11: Access Application\n```t\n# List Pods\nkubect get pods\n\n# List Deployments\nkubectl get deploy\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<SERVICE-EXTERNALIP>\nObservation:\n1. Should see v4 version of application deployed\n```\n\n## Step-12: Test CI CD one more time\n- Update index.html to V5\n```t\n# Change Directory (GIT REPO)\ncd myapp1-app-repo\n\n# Update index.html\n      <p>Application Version: V5</p>\n\n# Git Commit and Push to Remote Repository\ngit status\ngit add .\ngit commit -am \"V5 Commit CI CD\"\ngit push\n\n# Verify Build process\nGo to Cloud Build -> myapp1-ci -> BUILD LOG \nGo to Cloud Build -> myapp1-cd -> BUILD LOG\n\n# Access Application\nhttp://<SERVICE-EXTERNALIP>\nObservation:\n1. Should see v5 version of application deployed\n```\n\n## Step-13: Verify Application Rollback by just rebuilding CD Pipeline\n- Go to ANY version of `myapp1-cd` and click on `REBUILD`\n- Verify by accessing Application\n```t\n# List Pods\nkubect get pods\n\n# List Deployments\nkubectl get deploy\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<SERVICE-EXTERNALIP>\nObservation:\n1. Should see V4 version of application deployed\n```\n\n## Step-14: Clean-Up\n```t\n# Disable / Delete CI CD Pipelines\n1. Go to Cloud Build -> myapp1-ci -> 3 dots -> Delete\n2. Go to Cloud Build -> myapp1-cd -> 3 dots -> Delete\n\n# Delete Cloud Source Repositories\nGo to Cloud Source (https://source.cloud.google.com/repos) \n1. myapp1-app-repo -> Settings -> Delete this repository\n2. myapp1-k8s-repo -> Settings -> Delete this repository\n\n# Delete Kubernetes Deployment\nkubect get deploy\nkubectl delete deploy myapp1-deployment\n\n# Delete Kubernetes Service\nkubectl get svc\nkubectl delete svc myapp1-lb-service \n\n# Delete Artifact Registry\nGo to Artifact Registry -> Repositories -> myapps-repository -> DELETE\n\n# Delete Local Repos\ncd course-repos\nrm -rf myapp1-app-repo\nrm -rf myapp1-k8s-repo\n```\n\n## References\n- https://github.com/GoogleCloudPlatform/gke-gitops-tutorial-cloudbuild\n"
  },
  {
    "path": "59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. \n## We are going to use this in our MySQL k8s Deployment  \n\n# 3. YAML Notation\n## YAML Notation: |-: \"strip\": remove the line feed, remove the trailing blank lines.\n## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines"
  },
  {
    "path": "59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate # terminates all the pods and replaces them with the new version.\n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          #env:\n          #  - name: MYSQL_ROOT_PASSWORD\n          #    value: dbpassword11\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password              \n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n\n# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: \n## 1. On-disk files in a container are ephemeral\n## 2. One problem is the loss of files when a container crashes. \n## 3. Kubernetes Volumes solves above two as these volumes are configured to POD and not container. \n## Only they can be mounted in Container\n## 4. Using Compute Enginer Persistent Disk CSI Driver is a super generalized approach \n## for having Persistent Volumes for workloads in Kubernetes\n\n\n## ENVIRONMENT VARIABLES\n# 1. When you create a Pod, you can set environment variables for the \n# containers that run in the Pod. \n# 2. To set environment variables, include the env or envFrom field in \n# the configuration file.\n\n\n## DEPLOYMENT STRATEGIES\n# 1. Rolling deployment: This strategy  replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.\n# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.\n# 3. Ramped slow rollout: This strategy  rolls out replicas of the new version, while in parallel, shutting down old replicas. \n# 4. Best-effort controlled rollout: This strategy  specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.\n# 5. Canary Deployment: This strategy  uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful."
  },
  {
    "path": "59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp  \n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            #- name: DB_PASSWORD\n            #  value: \"dbpassword11\"\n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password \n          # Liveness Probe Linux Command                   \n          livenessProbe:\n            exec:\n              command: \n                - /bin/sh\n                - -c \n                - nc -z localhost 8080\n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. \n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value\n\n\n# Types of Liveness Probes we can define\n# 1. Linux Command\n# 2. HTTP Request\n# 3. TCP Ping                                        \n\n# What happens ??\n# 1. To perform a probe, the kubelet executes the command  \n# Command: /bin/sh -c - nc -z localhost 8080 in the target container. \n# 2. If the command succeeds, it returns 0, and the kubelet considers the \n# container to be alive and healthy. \n# 3. If the command returns a non-zero value, the kubelet kills the \n# container and restarts it.         \n\n# More Details\n# 1. failureThreshold: When a probe fails, Kubernetes will try failureThreshold\n# times before giving up. Giving up in case of liveness probe means \n# restarting the container. In case of readiness probe the Pod will be \n# marked Unready. Defaults to 3. Minimum value is 1.\n\n# 2. successThreshold: Minimum consecutive successes for the probe to be \n# considered successful after having failed. Defaults to 1. \n# Must be 1 for liveness and startup Probes. Minimum value is 1.\n\n"
  },
  {
    "path": "59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "59-Kubernetes-liveness-probe/01-liveness-probe-linux-command/07-kubernetes-secret.yaml",
    "content": "apiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\n#type: Opaque means that from kubernetes's point of view the contents of this Secret is unstructured.\n#It can contain arbitrary key-value pairs. \ntype: Opaque\ndata:\n  # Output of echo -n 'dbpassword11' | base64\n  db-password: ZGJwYXNzd29yZDEx"
  },
  {
    "path": "59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. \n## We are going to use this in our MySQL k8s Deployment  \n\n# 3. YAML Notation\n## YAML Notation: |-: \"strip\": remove the line feed, remove the trailing blank lines.\n## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines"
  },
  {
    "path": "59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate # terminates all the pods and replaces them with the new version.\n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          #env:\n          #  - name: MYSQL_ROOT_PASSWORD\n          #    value: dbpassword11\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password              \n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n\n# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: \n## 1. On-disk files in a container are ephemeral\n## 2. One problem is the loss of files when a container crashes. \n## 3. Kubernetes Volumes solves above two as these volumes are configured to POD and not container. \n## Only they can be mounted in Container\n## 4. Using Compute Enginer Persistent Disk CSI Driver is a super generalized approach \n## for having Persistent Volumes for workloads in Kubernetes\n\n\n## ENVIRONMENT VARIABLES\n# 1. When you create a Pod, you can set environment variables for the \n# containers that run in the Pod. \n# 2. To set environment variables, include the env or envFrom field in \n# the configuration file.\n\n\n## DEPLOYMENT STRATEGIES\n# 1. Rolling deployment: This strategy  replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.\n# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.\n# 3. Ramped slow rollout: This strategy  rolls out replicas of the new version, while in parallel, shutting down old replicas. \n# 4. Best-effort controlled rollout: This strategy  specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.\n# 5. Canary Deployment: This strategy  uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful."
  },
  {
    "path": "59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp  \n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            #- name: DB_PASSWORD\n            #  value: \"dbpassword11\"\n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password \n          # Liveness Probe HTTP Request\n          livenessProbe:\n            httpGet:\n              path: /login\n              port: 8080\n              httpHeaders:\n              - name: Custom-Header\n                value: Awesome          \n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. \n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value\n\n\n# Types of Liveness Probes we can define\n# 1. Linux Command\n# 2. HTTP Request\n# 3. TCP Ping                                        \n\n# What happens ??\n# 1. To perform a probe, the kubelet sends an HTTP GET request to the \n# server that is running in the container and listening on port 8080. \n# 2. If the handler for the server's /login path returns a success code,\n# the kubelet considers the container to be alive and healthy. \n# 3. If the handler returns a failure code, the kubelet kills the \n# container and restarts it.                                  \n# 4. Any code greater than or equal to 200 and less than 400 \n# indicates success. Any other code indicates failure."
  },
  {
    "path": "59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "59-Kubernetes-liveness-probe/02-liveness-probe-HTTP-Request/07-kubernetes-secret.yaml",
    "content": "apiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\n#type: Opaque means that from kubernetes's point of view the contents of this Secret is unstructured.\n#It can contain arbitrary key-value pairs. \ntype: Opaque\ndata:\n  # Output of echo -n 'dbpassword11' | base64\n  db-password: ZGJwYXNzd29yZDEx"
  },
  {
    "path": "59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. \n## We are going to use this in our MySQL k8s Deployment  \n\n# 3. YAML Notation\n## YAML Notation: |-: \"strip\": remove the line feed, remove the trailing blank lines.\n## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines"
  },
  {
    "path": "59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate # terminates all the pods and replaces them with the new version.\n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          #env:\n          #  - name: MYSQL_ROOT_PASSWORD\n          #    value: dbpassword11\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password              \n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n\n# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: \n## 1. On-disk files in a container are ephemeral\n## 2. One problem is the loss of files when a container crashes. \n## 3. Kubernetes Volumes solves above two as these volumes are configured to POD and not container. \n## Only they can be mounted in Container\n## 4. Using Compute Enginer Persistent Disk CSI Driver is a super generalized approach \n## for having Persistent Volumes for workloads in Kubernetes\n\n\n## ENVIRONMENT VARIABLES\n# 1. When you create a Pod, you can set environment variables for the \n# containers that run in the Pod. \n# 2. To set environment variables, include the env or envFrom field in \n# the configuration file.\n\n\n## DEPLOYMENT STRATEGIES\n# 1. Rolling deployment: This strategy  replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.\n# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.\n# 3. Ramped slow rollout: This strategy  rolls out replicas of the new version, while in parallel, shutting down old replicas. \n# 4. Best-effort controlled rollout: This strategy  specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.\n# 5. Canary Deployment: This strategy  uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful."
  },
  {
    "path": "59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp  \n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            #- name: DB_PASSWORD\n            #  value: \"dbpassword11\"\n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password \n          # Liveness Probe TCP request\n          livenessProbe:\n            tcpSocket:\n              port: 8080\n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. \n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value\n\n\n# Types of Liveness Probes we can define\n# 1. Linux Command\n# 2. HTTP Request\n# 3. TCP Ping                                        \n\n# What happens ??\n# 1. The kubelet will run the first liveness TCP probe 60 seconds after the \n# container starts. \n# 2. This will attempt to connect to the UMS container on port 8080. \n# If the liveness probe fails, the container will be restarted."
  },
  {
    "path": "59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "59-Kubernetes-liveness-probe/03-liveness-probe-TCP-Request/07-kubernetes-secret.yaml",
    "content": "apiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\n#type: Opaque means that from kubernetes's point of view the contents of this Secret is unstructured.\n#It can contain arbitrary key-value pairs. \ntype: Opaque\ndata:\n  # Output of echo -n 'dbpassword11' | base64\n  db-password: ZGJwYXNzd29yZDEx"
  },
  {
    "path": "59-Kubernetes-liveness-probe/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Kubernetes Liveness Probes\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Liveness Probes\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Implement `Liveness Probe` and Test it\n\n## Step-02:  Understand Liveness Probe \n1. Liveness probes lets Kubernetes know whether our application running in a container inside a pod is healthy or not.\n2. If our application is healthy Kubernetes will not involve with the pod functioning. If our application is unhealthy Kubernetes will mark the pod as unhealthy.\n3. If our application is healthy Kubernetes will not involve with the pod functioning. If our application is unhealthy Kubernetes will mark the pod as unhealthy.\n4. In short, Use liveness probe to remove unhealthy pods\n\n## Step-03: Liveness Probe Type: Command\n### Step-03-01: Review Liveness Probe Type: Command\n- **File Name:** `01-liveness-probe-linux-command/05-UserMgmtWebApp-Deployment.yaml`\n```yaml\n          # Liveness Probe Linux Command                   \n          livenessProbe:\n            exec:\n              command: \n                - /bin/sh\n                - -c \n                - nc -z localhost 8080\n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. \n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value                      \n```\n\n### Step-03-02: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-liveness-probe-linux-command\n\n# List Pods\nkubectl get pods\nObservation:\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<LB-IP>\nUsername: admin101\nPassword: password101\n```\n\n### Step-03-03: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 01-liveness-probe-linux-command\n```\n\n\n## Step-04: Liveness Probe Type: HTTP Request\n### Step-04-01: Review Liveness Probe Type: HTTP Request\n- **File Name:** `02-liveness-probe-HTTP-Request/05-UserMgmtWebApp-Deployment.yaml`\n```yaml\n          # Liveness Probe HTTP Request\n          livenessProbe:\n            httpGet:\n              path: /login\n              port: 8080\n              httpHeaders:\n              - name: Custom-Header\n                value: Awesome          \n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. 
\n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value\n                    \n```\n\n### Step-04-02: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 02-liveness-probe-HTTP-Request\n\n# List Pods\nkubectl get pods\nObservation:\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<LB-IP>\nUsername: admin101\nPassword: password101\n```\n\n### Step-04-03: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 02-liveness-probe-HTTP-Request\n```\n\n\n\n## Step-05: Liveness Probe Type: TCP Request\n### Step-05-01: Review Liveness Probe Type: TCP Request\n- **File Name:** `03-liveness-probe-TCP-Request/05-UserMgmtWebApp-Deployment.yaml`\n```yaml\n          # Liveness Probe TCP request\n          livenessProbe:\n            tcpSocket:\n              port: 8080\n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. \n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value\n                    \n```\n\n### Step-05-02: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 03-liveness-probe-TCP-Request\n\n# List Pods\nkubectl get pods\nObservation:\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<LB-IP>\nUsername: admin101\nPassword: password101\n```\n\n### Step-05-03: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 03-liveness-probe-TCP-Request\n```\n\n\n"
  },
  {
    "path": "60-Kubernetes-Startup-Probe/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Kubernetes Startup Probes\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Startup Probes\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Implement `Startup Probe` and Test it\n\n## Step-02:  Understand Startup Probe \n1. Sometimes, you have to deal with legacy applications that might require an additional startup time on their first initialization. \n2. The application will have a maximum of 5 minutes (30 * 10 = 300s) to  finish its startup. \n3. Once the startup probe has succeeded once, the liveness probe takes over to provide a fast response to container deadlocks. \n4. If the startup probe never succeeds, the container is killed after 300s and subject to the pod's restartPolicy.\n\n## Step-03: Review Startup Probe YAML\n- **File Name:** 05-UserMgmtWebApp-Deployment.yaml\n```yaml\n          # Startup Probe - Wait for 5 minutes till the application starts            \n          startupProbe:\n            httpGet:\n              path: /login\n              port: 8080\n            initialDelaySeconds: 60              \n            periodSeconds: 10            \n            failureThreshold: 30  # The application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup.\n            successThreshold: 1 # Default value                         \n```\n\n## Step-04: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests-startup-probe\n\n# List Pods\nkubectl get pods\nObservation:\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<LB-IP>\nUsername: admin101\nPassword: password101\n```\n\n## Step-05: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests-startup-probe\n```"
  },
  {
    "path": "60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. \n## We are going to use this in our MySQL k8s Deployment  \n\n# 3. YAML Notation\n## YAML Notation: |-: \"strip\": remove the line feed, remove the trailing blank lines.\n## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines"
  },
  {
    "path": "60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate # terminates all the pods and replaces them with the new version.\n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          #env:\n          #  - name: MYSQL_ROOT_PASSWORD\n          #    value: dbpassword11\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password              \n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n\n# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: \n## 1. On-disk files in a container are ephemeral\n## 2. One problem is the loss of files when a container crashes. \n## 3. Kubernetes Volumes solves above two as these volumes are configured to POD and not container. \n## Only they can be mounted in Container\n## 4. Using Compute Enginer Persistent Disk CSI Driver is a super generalized approach \n## for having Persistent Volumes for workloads in Kubernetes\n\n\n## ENVIRONMENT VARIABLES\n# 1. When you create a Pod, you can set environment variables for the \n# containers that run in the Pod. \n# 2. To set environment variables, include the env or envFrom field in \n# the configuration file.\n\n\n## DEPLOYMENT STRATEGIES\n# 1. Rolling deployment: This strategy  replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.\n# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.\n# 3. Ramped slow rollout: This strategy  rolls out replicas of the new version, while in parallel, shutting down old replicas. \n# 4. Best-effort controlled rollout: This strategy  specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.\n# 5. Canary Deployment: This strategy  uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful."
  },
  {
    "path": "60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp  \n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            #- name: DB_PASSWORD\n            #  value: \"dbpassword11\"\n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password \n          # Liveness Probe HTTP Request\n          livenessProbe:\n            httpGet:\n              path: /login\n              port: 8080\n              httpHeaders:\n              - name: Custom-Header\n                value: Awesome          \n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. \n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value\n          # Startup Probe - Wait for 5 minutes till the application starts            \n          startupProbe:\n            httpGet:\n              path: /login\n              port: 8080\n            initialDelaySeconds: 60              \n            periodSeconds: 10            \n            failureThreshold: 30  # The application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup.\n            successThreshold: 1 # Default value\n            \n# Understand Startup Probe ?\n# 1. Sometimes, you have to deal with legacy applications that might require an additional startup time \n# on their first initialization. \n# 2. The application will have a maximum of 5 minutes (30 * 10 = 300s) to \n# finish its startup. \n# 3. Once the startup probe has succeeded once, the liveness probe takes \n# over to provide a fast response to container deadlocks. \n# 4. If the startup probe never succeeds, the container is killed after \n# 300s and subject to the pod's restartPolicy.\n\n\n"
  },
  {
    "path": "60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "60-Kubernetes-Startup-Probe/kube-manifests-startup-probe/07-kubernetes-secret.yaml",
    "content": "apiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\n#type: Opaque means that from kubernetes's point of view the contents of this Secret is unstructured.\n#It can contain arbitrary key-value pairs. \ntype: Opaque\ndata:\n  # Output of echo -n 'dbpassword11' | base64\n  db-password: ZGJwYXNzd29yZDEx"
  },
  {
    "path": "61-Kubernetes-Readiness-Probe/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Kubernetes Readiness Probes\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Readiness Probes\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Implement `Readiness Probe` and Test it\n\n## Step-02:  Understand Readiness Probe \n1. Sometimes, applications are temporarily unable to serve traffic. \n2. For example, an application might need to load large data or configuration files during startup, or depend on external services after startup. \n3. In such cases, you don't want to kill the application, but you don't want to send it requests either. \n4. Kubernetes provides readiness probes to detect and mitigate these situations. \n5. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.\n6. Readiness probes runs on the container during its whole lifecycle.\n7. Liveness probes do not wait for readiness probes to succeed. \n8. If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startupProbe.\n9. Readiness and liveness probes can be used in parallel for the same container. \n10. Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail.\n\n## Step-03: Review Readiness Probe YAML\n- **File Name:** 05-UserMgmtWebApp-Deployment.yaml\n```yaml\n          # Readiness Probe HTTP Request            \n          readinessProbe:\n            httpGet:\n              path: /login\n              port: 8080\n              httpHeaders:\n              - name: Custom-Header\n                value: Awesome   \n            initialDelaySeconds: 60\n            periodSeconds: 10\n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value            \n```\n\n## Step-04: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests-readiness-probe\n\n# List Pods\nkubectl get pods\nObservation:\n1. You can see that Pod is running but it will not be ready for 60 seconds. \n2. \"initialDelaySeconds=60\" is defined in readiness probe so it will mark\nthe pod as ready only after 60 seconds\n3. Liveness probe will start working after \"initialDelaySeconds: 120\"\n4. This way first Readiness probe will run later liveness probe will run. \n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<LB-IP>\nUsername: admin101\nPassword: password101\n```\n\n## Step-05: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests-readiness-probe\n```"
  },
  {
    "path": "61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/01-persistent-volume-claim.yaml",
    "content": "apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n\n"
  },
  {
    "path": "61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/02-UserManagement-ConfigMap.yaml",
    "content": "apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n\n\n# CONFIG MAP\n# 1. A ConfigMap is an API object used to store non-confidential data in \n# key-value pairs. \n\n# 2. Pods can consume ConfigMaps as \n## 2.1: environment variables, \n## 2.2: command-line arguments, \n## 2.3: or as configuration files in a volume. \n## We are going to use this in our MySQL k8s Deployment  \n\n# 3. YAML Notation\n## YAML Notation: |-: \"strip\": remove the line feed, remove the trailing blank lines.\n## Additional YAML Notation Reference: https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines"
  },
  {
    "path": "61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/03-mysql-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate # terminates all the pods and replaces them with the new version.\n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          #env:\n          #  - name: MYSQL_ROOT_PASSWORD\n          #    value: dbpassword11\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password              \n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: /var/lib/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n\n# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: \n## 1. On-disk files in a container are ephemeral\n## 2. One problem is the loss of files when a container crashes. \n## 3. Kubernetes Volumes solves above two as these volumes are configured to POD and not container. \n## Only they can be mounted in Container\n## 4. Using Compute Enginer Persistent Disk CSI Driver is a super generalized approach \n## for having Persistent Volumes for workloads in Kubernetes\n\n\n## ENVIRONMENT VARIABLES\n# 1. When you create a Pod, you can set environment variables for the \n# containers that run in the Pod. \n# 2. To set environment variables, include the env or envFrom field in \n# the configuration file.\n\n\n## DEPLOYMENT STRATEGIES\n# 1. Rolling deployment: This strategy  replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.\n# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.\n# 3. Ramped slow rollout: This strategy  rolls out replicas of the new version, while in parallel, shutting down old replicas. \n# 4. Best-effort controlled rollout: This strategy  specifies a “max unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.\n# 5. Canary Deployment: This strategy  uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful."
  },
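  {
    "path": "61-Kubernetes-Readiness-Probe/EXAMPLE-rollingupdate-strategy.md",
    "content": "# Example: RollingUpdate strategy (illustrative)\n\nThe MySQL Deployment in this section pins `type: Recreate`. As a minimal sketch (not part of the original demo), a stateless Deployment could use the default RollingUpdate strategy with explicit surge/unavailability bounds; the Deployment name below is hypothetical:\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: webapp-rolling-demo # hypothetical Deployment, for illustration only\nspec:\n  replicas: 4\n  selector:\n    matchLabels:\n      app: webapp-rolling-demo\n  strategy:\n    type: RollingUpdate\n    rollingUpdate:\n      maxSurge: 1       # at most 1 extra Pod above the desired replicas during an update\n      maxUnavailable: 1 # at most 1 Pod below the desired replicas during an update\n  template:\n    metadata:\n      labels:\n        app: webapp-rolling-demo\n    spec:\n      containers:\n        - name: webapp\n          image: stacksimplify/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n```\nRollingUpdate would be unsafe for the MySQL Pod here, because old and new Pods would briefly compete for the same PersistentVolumeClaim; that is why the demo uses Recreate."
  },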
  {
    "path": "61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/04-mysql-clusterip-service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    "
  },
  {
    "path": "61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/05-UserMgmtWebApp-Deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp  \n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            #- name: DB_PASSWORD\n            #  value: \"dbpassword11\"\n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password \n          # Liveness Probe HTTP Request\n          livenessProbe:\n            httpGet:\n              path: /login\n              port: 8080\n              httpHeaders:\n              - name: Custom-Header\n                value: Awesome          \n            initialDelaySeconds: 120 # initialDelaySeconds field tells  the kubelet that it should wait 120 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. \n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value\n          # Readiness Probe HTTP Request            \n          readinessProbe:\n            httpGet:\n              path: /login\n              port: 8080\n              httpHeaders:\n              - name: Custom-Header\n                value: Awesome   \n            initialDelaySeconds: 60\n            periodSeconds: 10\n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value            \n\n# Understand Readiness Probe ?\n# 1. Sometimes, applications are temporarily unable to serve traffic. \n# 2. For example, an application might need to load large data or configuration \n# files during startup, or depend on external services after startup. \n# 3. In such cases, you don't want to kill the application, but you don't \n#  want to send it requests either. \n# 4. Kubernetes provides readiness probes to detect and mitigate these \n# situations. \n# 5. A pod with containers reporting that they are not ready does not \n# receive traffic through Kubernetes Services.\n# 6. Readiness probes runs on the container during its whole lifecycle.\n# 7. Liveness probes do not wait for readiness probes to succeed. \n# 8. If you want to wait before executing a liveness probe you should use \n# initialDelaySeconds or a startupProbe.\n# 9. Readiness and liveness probes can be used in parallel for the same \n# container. \n# 10. Using both can ensure that traffic does not reach a container that \n# is not ready for it, and that containers are restarted when they fail.\n\n\n"
  },
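  {
    "path": "61-Kubernetes-Readiness-Probe/EXAMPLE-probe-variants.md",
    "content": "# Example: tcpSocket and exec probe variants (illustrative)\n\nThe demo uses httpGet probes. As a minimal sketch (not part of the original demo), Kubernetes also supports TCP and command-based checks; the Pod name below is hypothetical and the password is inlined only for illustration (the demo uses a Secret):\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: probe-variants-demo # hypothetical Pod, for illustration only\nspec:\n  containers:\n    - name: mysql\n      image: mysql:8.0\n      env:\n        - name: MYSQL_ROOT_PASSWORD\n          value: dbpassword11 # illustration only; prefer a Secret as in the demo\n      ports:\n        - containerPort: 3306\n      # TCP check: ready once the port accepts connections\n      readinessProbe:\n        tcpSocket:\n          port: 3306\n        initialDelaySeconds: 15\n        periodSeconds: 10\n      # Command check: healthy while the command exits with status 0\n      livenessProbe:\n        exec:\n          command: [\"sh\", \"-c\", \"mysqladmin ping -uroot -p$MYSQL_ROOT_PASSWORD\"]\n        initialDelaySeconds: 30\n        periodSeconds: 10\n```"
  },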
  {
    "path": "61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/06-UserMgmtWebApp-LoadBalancer-Service.yaml",
    "content": "apiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port"
  },
  {
    "path": "61-Kubernetes-Readiness-Probe/kube-manifests-readiness-probe/07-kubernetes-secret.yaml",
    "content": "apiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\n#type: Opaque means that from kubernetes's point of view the contents of this Secret is unstructured.\n#It can contain arbitrary key-value pairs. \ntype: Opaque\ndata:\n  # Output of echo -n 'dbpassword11' | base64\n  db-password: ZGJwYXNzd29yZDEx"
  },
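  {
    "path": "61-Kubernetes-Readiness-Probe/EXAMPLE-create-secret-imperative.md",
    "content": "# Example: Creating the Secret imperatively (illustrative)\n\nA minimal sketch (not part of the original demo): the same Secret as 07-kubernetes-secret.yaml can be created with kubectl, which base64-encodes the value for you:\n```t\n# Create the Secret from a literal value\nkubectl create secret generic mysql-db-password --from-literal=db-password=dbpassword11\n\n# Verify (values are only base64-encoded, not encrypted)\nkubectl get secret mysql-db-password -o yaml\n```"
  },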
  {
    "path": "62-Kubernetes-Requests-and-Limits/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Kubernetes Requests and Limits\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Requests and Limits\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- We can specify how much each container a pod needs the resources like CPU & Memory. \n- When we provide this information in our pod, the scheduler uses this information to decide which node to place the Pod on. \n- When you specify a resource limit for a Container, the kubelet enforces those `limits` so that the running container is not allowed to use more of that resource than the limit you set. \n-  The kubelet also reserves at least the `request` amount of that system resource specifically for that container to use.\n\n## Step-02: Add Requests & Limits\n```yaml\n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"200m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              cpu: \"400m\"  # 1000m is equal to 1 VCPU core                                          \n```\n\n## Step-03: Create k8s objects & Test\n```t\n# Create All Objects\nkubectl apply -f kube-manifests/\n\n# List Pods\nkubectl get pods\n\n# Watch List Pods screen\nkubectl get pods -w\n\n# Describe Pod \nkubectl describe pod <myapp1-deployment-xxxxxx>\n\n# Access Application\nhttp://<LB-IP>/\n\n# List Nodes & Describe Node\nkubectl get nodes\nkubectl describe node <Node-Name>\n```\n## Step-04: Clean-Up\n- No Clean-Up.\n- We are going to use this app in next demo which is Cluster Autoscaling\n\n## References:\n- https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"
  },
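  {
    "path": "62-Kubernetes-Requests-and-Limits/EXAMPLE-qos-class.md",
    "content": "# Example: Checking the Pod QoS class (illustrative)\n\nA minimal sketch (not part of the original demo): because this section's Deployment sets requests lower than limits, Kubernetes assigns the Pods the Burstable QoS class. The pod name below is a placeholder:\n```t\n# Print the QoS class of a Pod\nkubectl get pod <myapp1-deployment-xxxxxx> -o jsonpath='{.status.qosClass}'\n\n# Expected output for this section's Deployment\nBurstable\n```\nGuaranteed requires requests equal to limits for both CPU and memory in every container; BestEffort applies when no requests or limits are set at all."
  },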
  {
    "path": "62-Kubernetes-Requests-and-Limits/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 5 \n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"200m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              cpu: \"400m\"  # 1000m is equal to 1 VCPU core                                               \n    "
  },
  {
    "path": "62-Kubernetes-Requests-and-Limits/kube-manifests/02-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIP, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "63-GKE-Cluster-Autoscaling/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Cluster Autoscaling\ndescription: Implement GKE Cluster Autoscaler concept\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Test Cluster Autoscaler feature\n\n## Step-02: Verify Cluster Autoscaler enabled for Node Pool\n- Go to Kubernetes Engine -> standard-cluster-private-1 -> NODES Tab -> default-pool -> Click on **Edit**\n- Check **Enable cluster autoscaler**\n- Size limits type\n  - Check **Per zone limits**\n  - **Minimum number of nodes (per zone):** 0\n  - **Maximum number of nodes (per zone):** 3\n\n## Step-03: Verify the 5th Pod from previous Demo is still in Pending State\n```t\n# List Pods\nkubectl get pods\n\n# Describe Pod (PENDING POD)\nkubectl describe pod <PENDING-POD-NAME>\nObservation:\n1. Verify the pod events where we can find the autoscaling event triggered\n\n# List Kubernetes Nodes\nkubectl get nodes \nObservation:\n1. Nodes in NodePools will be increased from 3 to 4 (2 per zone max we configured)\n\n# Scale-In the demo application to 1 pod\nkubectl get pods\nkubectl get nodes \nkubectl scale --replicas=1 deploy myapp1-deployment \nkubectl get pods\n\n# List Kubernetes Nodes\nkubectl get nodes\n1. Nodes in NodePools will be decreased from 4 to 3 (Wait for 10 minutes for Nodes Scale-In)\n```\n\n## Step-04: Clean-up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n```\n\n"
  },
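  {
    "path": "63-GKE-Cluster-Autoscaling/EXAMPLE-enable-autoscaling-cli.md",
    "content": "# Example: Enabling the cluster autoscaler with gcloud (illustrative)\n\nA hedged sketch of the CLI equivalent of the console steps in this section; cluster name, region, project, and per-zone limits mirror the demo, but verify the flags against your gcloud version:\n```t\n# Enable the cluster autoscaler on the default node pool\n# (for regional clusters, --min-nodes/--max-nodes are per zone)\ngcloud container clusters update standard-cluster-private-1 \\\n  --enable-autoscaling \\\n  --node-pool=default-pool \\\n  --min-nodes=0 --max-nodes=3 \\\n  --region=us-central1 --project=kdaida123\n```"
  },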
  {
    "path": "63-GKE-Cluster-Autoscaling/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 5 \n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"200m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              cpu: \"400m\"  # 1000m is equal to 1 VCPU core                                               \n    "
  },
  {
    "path": "63-GKE-Cluster-Autoscaling/kube-manifests/02-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "64-Kubernetes-Namespaces/01-kube-manifests-imperative/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment  \nspec: # Dictionary\n  replicas: 2 \n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"200m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              cpu: \"400m\"  # 1000m is equal to 1 VCPU core                                               \n    "
  },
  {
    "path": "64-Kubernetes-Namespaces/01-kube-manifests-imperative/02-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "64-Kubernetes-Namespaces/02-kube-manifests-declarative/00-kubernetes-namespace.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: qa"
  },
  {
    "path": "64-Kubernetes-Namespaces/02-kube-manifests-declarative/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\n  namespace: qa\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"200m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              cpu: \"400m\"  # 1000m is equal to 1 VCPU core                                               \n    "
  },
  {
    "path": "64-Kubernetes-Namespaces/02-kube-manifests-declarative/02-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\n  namespace: qa\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "64-Kubernetes-Namespaces/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Kubernetes Namespaces Imperative\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Namespaces Imperative\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Namespaces allow to split-up resources into different groups.\n- Resource names should be unique in a namespace\n- We can use namespaces to create multiple environments like dev, staging and production etc\n- Kubernetes will always list the resources from `default namespace` unless we provide exclusively from which namespace we need information from.\n\n## Step-02: Namespaces Imperative - Create dev Namespace\n### Step-02-01: Create Namespace\n```t\n# List Namespaces\nkubectl get ns \n\n# Craete Namespace\nkubectl create namespace <namespace-name>\nkubectl create namespace dev\n\n# List Namespaces\nkubectl get ns \n```\n### Step-02-02: Deploy All k8s Objects\n```t\n# Deploy All k8s Objects\nkubectl apply -f 01-kube-manifests-imperative/ -n dev\n\n# List Namespaces\nkubectl get ns\n\n# List Deployments from dev Namespace\nkubectl get deploy -n dev\n\n# List Pods from dev Namespace\nkubectl get pods -n dev\n\n# List Services from dev Namespace\nkubectl get svc -n dev\n\n# List all objects from dev Namespaces\nkubectl get all -n dev\n\n# Access Application\nhttp://<LB-Service-External-IP>/\n```\n\n## Step-03: Namespace Declarative - Create qa Namespace\n\n### Step-03-01: Namespace Kubernetes YAML Manifest\n- **File Name:** 00-kubernetes-namespace.yaml\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: qa\n```\n\n### Step-03-02: Update Namespace in Deployment and Service YAML Manifest\n- We are going to update the `namespace: qa` in `metadata` section of Deployment and Service\n```yaml\n# Deployment YAML Manifest\napiVersion: apps/v1\nkind: Deployment \nmetadata: \n  name: myapp1-deployment\n  namespace: qa\nspec: \n\n# Service YAML Manifest\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\n  namespace: qa\nspec:\n```\n\n### Step-03-03: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 02-kube-manifests-declarative\n\n# List Namespaces\nkubectl get ns\n\n# List Deployments from qa Namespace\nkubectl get deploy -n qa\n\n# List Pods from qa Namespace\nkubectl get pods -n qa\n\n# List Services from qa Namespace\nkubectl get svc -n qa\n\n# List all objects from qa Namespaces\nkubectl get all -n qa\n\n# Access Application\nhttp://<LB-Service-External-IP>/\n```\n\n## Step-04: Clean-Up Resources\n- If we delete Namespace, all resources associated with namespace will get deleted.\n```t\n# Delete dev Namespace\nkubectl delete ns dev\n\n# List Namespaces\nkubectl get ns\nObservation:\n1. 
dev namespace should  not be present\n\n# Verify Pods from dev Namespace\nkubectl get pods -n dev\nObservation: We should not find any pods because namespace itself doesnt exists\n\n# Delete qa Namespace Resources (only)\nkubectl delete -f 02-kube-manifests-declarative\n\n# List Namespaces\nkubectl get ns\n\n# Delete qa Namespace\nkubectl delete ns qa\n\n# List Namespaces\nkubectl get ns\n```\n\n## References:\n- https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/"
  },
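  {
    "path": "64-Kubernetes-Namespaces/EXAMPLE-default-namespace.md",
    "content": "# Example: Setting a default namespace for kubectl (illustrative)\n\nA minimal sketch (not part of the original demo): instead of passing `-n dev` to every command, the current kubeconfig context can carry a default namespace:\n```t\n# Set dev as the default namespace for the current context\nkubectl config set-context --current --namespace=dev\n\n# Now this lists pods from dev without -n\nkubectl get pods\n\n# Switch back when done\nkubectl config set-context --current --namespace=default\n```"
  },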
  {
    "path": "65-Kubernetes-Namespaces-ResourceQuota/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Kubernetes Resource Quota\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Resource Quota\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n# Step-01: Introduction\n1. Kubernetes Namespaces - ResourceQuota \n2. Kubernetes Namespaces - Declarative using YAML\n\n## Step-02: Create Namespace manifest\n- **Important Note:** File name starts with `01-`  so that when creating k8s objects namespace will get created first so it don't throw an error.\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: qa\n```\n\n## Step-03: Create Kubernetes ResourceQuota manifest\n- **File Name:** 02-kubernetes-resourcequota.yaml\n```yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: ns-resource-quota\n  namespace: qa\nspec:\n  hard:\n    requests.cpu: \"1\"\n    requests.memory: 1Gi\n    limits.cpu: \"2\"\n    limits.memory: 2Gi  \n    pods: \"3\"    \n    configmaps: \"3\" \n    persistentvolumeclaims: \"3\" \n    secrets: \"3\" \n    services: \"3\"                   \n```\n\n## Step-04: Create Kubernetes objects & Test\n```t\n# Create All Objects\nkubectl apply -f kube-manifests/\n\n# List Pods\nkubectl get pods -n qa -w\n\n# View Pod Specification (CPU & Memory)\nkubectl describe pod <pod-name> -n qa\n\n# Get Resource Quota  - default Namespace\nkubectl get resourcequota\nkubectl describe resourcequota gke-resource-quotas\nObservation:\n1. gke-resource-quotas will be precreated by GKE Cluster for each namespace. \n2. Any new quotas we define below the GKE Resource quota limits, that quota will be overrided by default GKE Resource Quota in a Namespace.   
\n\n\n# Get Resource Quota - qa Namespace\nkubectl get resourcequota -n qa\n\n# Describe Resource Quota - qa Namespace\nkubectl describe resourcequota qa-namespace-resource-quota -n qa\n\n# Test Quota by increasing the pods to 4 where in resource quota is 3 pods only\nkubectl get deploy -n qa\nkubectl get pods -n qa\nkubectl scale --replicas=4 deployment/myapp1-deployment -n qa\nkubectl get pods -n qa\nkubectl get deploy -n qa\n\n# Verify Deployment and ReplicaSet Events\nkubectl describe deploy <Deployment-Name> -n qa\nkubectl describe rs <ReplicaSet-Name> -n qa\nObservation: In ReplicaSet Events we should find the error\n\n## WARNING MESSAGE IN REPLICASET EVENTS ABOUT RESOURCE QUOTA\nWarning  FailedCreate      77s                replicaset-controller  Error creating: pods \"myapp1-deployment-5b4bdfc49d-92t9z\" is forbidden: exceeded quota: qa-namespace-resource-quota, requested: pods=1, used: pods=3, limited: pods=3\n\n# List Services\nkubectl get svc -n qa\n\n# Access Application\nhttp://<SVC-EXTERNAL-IP>\n```\n## Step-05: Clean-Up\n- Delete all Kubernetes objects created as part of this section\n```t\n# Delete All\nkubectl delete -f kube-manifests/ -n qa\n\n# List Namespaces\nkubectl get ns\n```\n\n## References:\n- https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/\n- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/\n\n\n## Additional References:\n- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/ \n- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/\n"
  },
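  {
    "path": "65-Kubernetes-Namespaces-ResourceQuota/EXAMPLE-create-quota-imperative.md",
    "content": "# Example: Creating a ResourceQuota imperatively (illustrative)\n\nA minimal sketch (not part of the original demo) of creating a comparable quota with kubectl instead of YAML; the quota name below is hypothetical:\n```t\n# Create a quota in the qa namespace\nkubectl create quota qa-quota-demo \\\n  --hard=requests.cpu=1,requests.memory=1Gi,limits.cpu=2,limits.memory=2Gi,pods=3 \\\n  -n qa\n\n# Inspect it\nkubectl describe quota qa-quota-demo -n qa\n\n# Remove it so it does not interfere with the demo's quota\nkubectl delete quota qa-quota-demo -n qa\n```"
  },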
  {
    "path": "65-Kubernetes-Namespaces-ResourceQuota/kube-manifests/01-kubernetes-namespace.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata: \n  name: qa\n"
  },
  {
    "path": "65-Kubernetes-Namespaces-ResourceQuota/kube-manifests/02-kubernetes-resourcequota.yaml",
    "content": "apiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: qa-namespace-resource-quota\n  namespace: qa\nspec:\n  hard:\n    requests.cpu: \"1\"\n    requests.memory: 1Gi\n    limits.cpu: \"2\"\n    limits.memory: 2Gi  \n    pods: \"3\"    \n    configmaps: \"3\" \n    persistentvolumeclaims: \"3\" \n    secrets: \"3\" \n    services: \"3\"\n"
  },
  {
    "path": "65-Kubernetes-Namespaces-ResourceQuota/kube-manifests/03-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\n  namespace: qa\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"200m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              cpu: \"400m\"  # 1000m is equal to 1 VCPU core                                               \n    "
  },
  {
    "path": "65-Kubernetes-Namespaces-ResourceQuota/kube-manifests/04-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\n  namespace: qa\nspec:\n  type: LoadBalancer # ClusterIP, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "66-Kubernetes-Namespaces-LimitRange/01-kube-manifests-LimitRange-defaults/01-kubernetes-namespace.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata: \n  name: qa\n"
  },
  {
    "path": "66-Kubernetes-Namespaces-LimitRange/01-kube-manifests-LimitRange-defaults/02-kubernetes-resourcequota-limitrange.yaml",
    "content": "apiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: qa-namespace-resource-quota\n  namespace: qa\nspec:\n  hard:\n    requests.cpu: \"1\"\n    requests.memory: 1Gi\n    limits.cpu: \"2\"\n    limits.memory: 2Gi  \n    pods: \"3\"    \n    configmaps: \"3\" \n    persistentvolumeclaims: \"3\" \n    secrets: \"3\" \n    services: \"3\" \n---    \napiVersion: v1\nkind: LimitRange\nmetadata:\n  name: default-cpu-mem-limit-range\n  namespace: qa\nspec:\n  limits:\n    - default:\n        cpu: \"400m\"  # If not specified default limit is 1 vCPU per container     \n        memory: \"256Mi\" # If not specified the Container's memory limit is set to 512Mi, which is the default memory limit for the namespace.\n      defaultRequest:\n        cpu: \"200m\" # If not specified default it will take from whatever specified in limits.default.cpu      \n        memory: \"128Mi\" # If not specified default it will take from whatever specified in limits.default.memory\n      max: \n        cpu: \"500m\"\n        memory: \"500Mi\"\n      min:       \n        cpu: \"100m\"\n        memory: \"100Mi\"\n      type: Container "
  },
  {
    "path": "66-Kubernetes-Namespaces-LimitRange/01-kube-manifests-LimitRange-defaults/03-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\n  namespace: qa\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          #resources:\n          #  requests:\n          #    memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n          #    cpu: \"200m\" # `m` means milliCPU\n          #  limits:\n          #    memory: \"256Mi\"\n          #    cpu: \"400m\"  # 1000m is equal to 1 VCPU core                                               \n    "
  },
  {
    "path": "66-Kubernetes-Namespaces-LimitRange/01-kube-manifests-LimitRange-defaults/04-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\n  namespace: qa\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "66-Kubernetes-Namespaces-LimitRange/02-kube-manifests-LimitRange-MinMax/01-kubernetes-namespace.yaml",
    "content": "apiVersion: v1\nkind: Namespace\nmetadata: \n  name: qa\n"
  },
  {
    "path": "66-Kubernetes-Namespaces-LimitRange/02-kube-manifests-LimitRange-MinMax/02-kubernetes-resourcequota-limitrange.yaml",
    "content": "apiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: qa-namespace-resource-quota\n  namespace: qa\nspec:\n  hard:\n    requests.cpu: \"1\"\n    requests.memory: 1Gi\n    limits.cpu: \"2\"\n    limits.memory: 2Gi  \n    pods: \"3\"    \n    configmaps: \"3\" \n    persistentvolumeclaims: \"3\" \n    secrets: \"3\" \n    services: \"3\" \n---    \napiVersion: v1\nkind: LimitRange\nmetadata:\n  name: default-cpu-mem-limit-range\n  namespace: qa\nspec:\n  limits:\n    - default:\n        cpu: \"400m\"  # If not specified default limit is 1 vCPU per container     \n        memory: \"256Mi\" # If not specified the Container's memory limit is set to 512Mi, which is the default memory limit for the namespace.\n      defaultRequest:\n        cpu: \"200m\" # If not specified default it will take from whatever specified in limits.default.cpu      \n        memory: \"128Mi\" # If not specified default it will take from whatever specified in limits.default.memory\n      max: \n        cpu: \"500m\"\n        memory: \"500Mi\"\n      min:       \n        cpu: \"100m\"\n        memory: \"100Mi\"\n      type: Container "
  },
  {
    "path": "66-Kubernetes-Namespaces-LimitRange/02-kube-manifests-LimitRange-MinMax/03-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\n  namespace: qa\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"450m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              #cpu: \"600m\"  # This is above the max value defined in Limit Range, Pods will not be scheduled and error thrown when we refer ReplicaSet Events\n              cpu: \"500m\" # This is equal to Max value defined in LimitRange, Pods will be scheduled.   "
  },
  {
    "path": "66-Kubernetes-Namespaces-LimitRange/02-kube-manifests-LimitRange-MinMax/04-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\n  namespace: qa\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "66-Kubernetes-Namespaces-LimitRange/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Kubernetes Limit Range\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Limit Range\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n# Step-01: Introduction\n1. Kubernetes Namespaces - LimitRange \n2. Kubernetes Namespaces - Declarative using YAML\n\n## Step-02: Create Namespace manifest\n- **Important Note:** File name starts with `01-`  so that when creating k8s objects namespace will get created first so it don't throw an error.\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: qa\n```\n\n## Step-03: Create LimitRange manifest\n- Instead of specifying `resources like cpu and memory` in every container spec of a pod defintion, we can provide the default CPU & Memory for all containers in a namespace using `LimitRange`\n```yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n  name: default-cpu-mem-limit-range\n  namespace: qa\nspec:\n  limits:\n    - default:\n        cpu: \"400m\"  # If not specified default limit is 1 vCPU per container     \n        memory: \"256Mi\" # If not specified the Container's memory limit is set to 512Mi, which is the default memory limit for the namespace.\n      defaultRequest:\n        cpu: \"200m\" # If not specified default it will take from whatever specified in limits.default.cpu      \n        memory: \"128Mi\" # If not specified default it will take from whatever specified in limits.default.memory\n      max: \n        cpu: \"500m\"\n        memory: \"500Mi\"\n      min:       \n        cpu: \"100m\"\n        memory: \"100Mi\"\n      type: Container  \n```\n\n\n## Step-04: Demo-01: Create Kubernetes Resources & Test\n```t\n# Create Kubernetes Resources\nkubectl apply -f 01-kube-manifests-LimitRange-defaults\n\n# List Pods\nkubectl get pods -n qa -w\n\n# View Pod Specification (CPU & Memory)\nkubectl describe pod <pod-name> -n qa\nObservation: \n1. We will find the \"Limits\" in pod container equals to \"defaults\" from LimitRange\n2. We will find the \"Requests\" in pod container equals to \"defaultRequest\"\n\n# Sample from Pod description\n    Limits:\n      cpu:     400m\n      memory:  256Mi\n    Requests:\n      cpu:        200m\n      memory:     128Mi\n\n# Get & Describe Limits\nkubectl get limits -n qa\nkubectl describe limits default-cpu-mem-limit-range -n qa\n\n# List Services\nkubectl get svc -n qa\n\n# Access Application \nhttp://<SVC-External-IP>/\n```\n\n## Step-05: Demo-01: Clean-Up\n- Delete all Kubernetes objects created as part of this section\n```t\n# Delete All\nkubectl delete -f 01-kube-manifests-LimitRange-defaults/\n```\n\n## Step-06: Demo-02: Update Demo-02 Deployment Manifest with Requests & Limits\n- Negative case testing\n- When deployed with these `Requests & Limits`  where `cpu=600m in limits` which is above the `max cpu = 500m` in LimitRange `default-cpu-mem-limit-range` it should not schedule the pods and throw  error in `ReplicaSet Events`. 
\n- **File Name:** 03-kubernetes-deployment.yaml\n```t\n# Update Demo-02 Deployment Manifest with Requests & Limits\n          resources:\n            requests:\n              memory: \"128Mi\" \n              cpu: \"450m\" \n            limits:\n              memory: \"256Mi\"\n              cpu: \"600m\"  \n```\n\n## Step-07: Demo-02: Create Kubernetes Resources & Test\n```t\n# Create Kubernetes Resources\nkubectl apply -f 02-kube-manifests-LimitRange-MinMax\n\n# List Pods\nkubectl get pods -n qa\nObservation:\n1. No Pod should be scheduled\n\n# List Deployments\nkubectl get deploy -n qa\nObservation: 0/2 ready which means no pods scheduled. Verify ReplicaSet Events\n\n# List & Describe ReplicaSets\nkubectl get rs -n qa\nkubectl describe rs <ReplicaSet-Name> -n qa\nObservation: Below error will be displayed\n Warning  FailedCreate  18s (x5 over 56s)  replicaset-controller  (combined from similar events): Error creating: pods \"myapp1-deployment-5dd9f78fd8-k5th6\" is forbidden: maximum cpu usage per Container is 500m, but limit is 600m\n\n# Get & Describe Limits\nkubectl get limits -n qa\nkubectl describe limits default-cpu-mem-limit-range -n qa\n\n# List Services\nkubectl get svc -n qa\n\n# Access Application \nhttp://<SVC-External-IP>/\n```\n\n## Step-08: Demo-02: Update Deployment resources.limit=500m\n- **File Name:** 03-kubernetes-deployment.yaml\n```t\n# Demo-02: Update Deployment resources.limit=500m\n          resources:\n            requests:\n              memory: \"128Mi\" \n              cpu: \"450m\"\n            limits:\n              memory: \"256Mi\"\n              cpu: \"500m\" # This is equal to Max value defined in LimitRange, Pods will be scheduled.   \n```\n\n## Step-09: Demo-02: Deploy the updated Deployment\n```t\n# Deploy the Updated Deployment\nkubectl apply -f 02-kube-manifests-LimitRange-MinMax/03-kubernetes-deployment.yaml\n\n# List Pods\nkubectl get pods -n qa\nObservation:\n1. Pods should be scheduled now. \n```\n\n## Step-10: Demo-02: Clean-Up\n```t\n# Delete Demo-02 Kubernetes Resources\nkubectl delete -f 02-kube-manifests-LimitRange-MinMax\n```\n\n\n## References:\n- https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/\n- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/\n- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/\n\n\n"
  },
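  {
    "path": "66-Kubernetes-Namespaces-LimitRange/EXAMPLE-inspect-effective-resources.md",
    "content": "# Example: Inspecting the defaults a LimitRange applied (illustrative)\n\nA minimal sketch (not part of the original demo): after Demo-01 deploys containers without a resources block, the values injected by the LimitRange can be read straight from the Pod spec; the pod name below is a placeholder:\n```t\n# Print the effective resources of the first container\nkubectl get pod <myapp1-deployment-xxxxxx> -n qa -o jsonpath='{.spec.containers[0].resources}'\n\n# Expected shape of the output (values come from the LimitRange defaults)\n{\"limits\":{\"cpu\":\"400m\",\"memory\":\"256Mi\"},\"requests\":{\"cpu\":\"200m\",\"memory\":\"128Mi\"}}\n```"
  },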
  {
    "path": "67-GKE-Horizontal-Pod-Autoscaler/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Horizontal Pod Autoscaling\ndescription: Implement GKE Cluster Horizontal Pod Autoscaling\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- Implement a Sample Demo with Horizontal Pod Autoscaler\n\n## Step-02: Review Kubernetes Manifests\n- Primarily review `HorizontalPodAutoscaler` Resource in file `03-kubernetes-hpa.yaml`\n1. 01-kubernetes-deployment.yaml\n2. 02-kubernetes-cip-service.yaml\n3. 03-kubernetes-hpa.yaml\n```yaml\napiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: hpa-myapp1\nspec:\n scaleTargetRef:\n   apiVersion: apps/v1\n   kind: Deployment\n   name: myapp1-deployment\n minReplicas: 1\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50\n```\n\n## Step-03: Deploy Sample App and Verify using kubectl\n```t\n# Deploy Sample\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\nObservation: \n1. Currently only 1 pod is running\n\n# List HPA\nkubectl get hpa\n\n\n# Run Load Test (New Terminal)\nkubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c \"while sleep 0.01; do wget -q -O- http://myapp1-cip-service; done\"\n\n\n# List Pods (SCALE UP EVENT)\nkubectl get pods\nObservation:\n1. New pods will be created to reduce the CPU spikes\n\n# List HPA (after few mins - approx 10 mins)\nkubectl get hpa\n\n# List Pods (SCALE IN EVENT)\nkubectl get pods\nObservation:\n1. Only 1 pod should be running\n```\n\n\n## Step-04: Clean-Up\n```t\n# Delete Load Generator Pod which is in Error State\nkubectl delete pod load-generator\n\n# Delete Sample App\nkubectl delete -f kube-manifests\n```\n\n\n"
  },
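  {
    "path": "67-GKE-Horizontal-Pod-Autoscaler/EXAMPLE-hpa-v2.md",
    "content": "# Example: The same HPA in autoscaling/v2 (illustrative)\n\nA minimal sketch (not part of the original demo) of the equivalent HorizontalPodAutoscaler using the newer autoscaling/v2 API, where targetCPUUtilizationPercentage becomes a metrics entry:\n```yaml\napiVersion: autoscaling/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n  name: hpa-myapp1\nspec:\n  scaleTargetRef:\n    apiVersion: apps/v1\n    kind: Deployment\n    name: myapp1-deployment\n  minReplicas: 1\n  maxReplicas: 10\n  metrics:\n    - type: Resource\n      resource:\n        name: cpu\n        target:\n          type: Utilization\n          averageUtilization: 50 # same 50% CPU target as the v1 manifest\n```"
  },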
  {
    "path": "67-GKE-Horizontal-Pod-Autoscaler/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: \n  name: myapp1-deployment\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata:\n      name: myapp1-pod\n      labels:\n        app: myapp1  \n    spec:\n      containers: \n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"5Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"5m\" # `m` means milliCPU\n            limits:\n              memory: \"50Mi\"\n              cpu: \"50m\"  # 1000m is equal to 1 VCPU core                                               \n    "
  },
  {
    "path": "67-GKE-Horizontal-Pod-Autoscaler/kube-manifests/02-kubernetes-cip-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-cip-service\nspec:\n  type: ClusterIP # ClusterIP, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "67-GKE-Horizontal-Pod-Autoscaler/kube-manifests/03-kubernetes-hpa.yaml",
    "content": "apiVersion: autoscaling/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n name: hpa-myapp1\nspec:\n scaleTargetRef:\n   apiVersion: apps/v1\n   kind: Deployment\n   name: myapp1-deployment\n minReplicas: 1\n maxReplicas: 10\n targetCPUUtilizationPercentage: 50\n "
  },
  {
    "path": "68-GKE-AutoPilot-Cluster/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Autopilot Cluster\ndescription: Implement GCP Google Kubernetes Engine GKE Autopilot Cluster\n---\n\n## Step-01: Introduction\n- Create GKE Autopilot Cluster\n- Understand in detail about GKE Autopilot cluster\n\n## Step-02: Pre-requisite: Verify if Cloud NAT Gateway created \n- Verify if Cloud NAT Gateway created in `Region:us-central1` where you are planning to create GKE Autopilot Private Cluster\n- This is required for Workload in Private subnets to connect to Internet.  \n- Primarily to Connect to Docker Hub to pull the Docker Images\n- Go to Network Services -> Cloud NAT\n\n## Step-03: Create GKE Autopilot Private Cluster\n- Go to Kubernetes Engine -> Clusters -> **CREATE**\n- Create Cluster -> GKE Autopilot -> **CONFIGURE**\n- **Name:** autopilot-cluster-private-1\n- **Region:** us-central1\n- **Network access:** Private Cluster\n- **Access control plane using its external IP address:** CHECK\n- **Control plane ip range:** 172.18.0.0/28\n- **Enable control plane authorized networks:** CHECK\n- **Authorized networks:** \n  - **Name:** internet-access\n  - **Network:** 0.0.0.0/0\n  - Click on **DONE**\n- **Network:** default  (LEAVE TO DEFAULTS)\n- **Node subnet:** default (LEAVE TO DEFAULTS)\n- **Cluster default pod address range:** /17 (LEAVE TO DEFAULTS)\n- **Service Address range:** /22 (LEAVE TO DEFAULTS)\n- **Release Channel:** Regular Channel (Default)\n- REST ALL LEAVE TO DEFAULTS\n- Click on **CREATE** \n\n## Step-04: Configure kubectl for kubeconfig\n```t\n# Configure kubectl for kubeconfig\ngcloud container clusters get-credentials CLUSTER-NAME --region REGION --project PROJECT-NAME\n\n# Replace values CLUSTER-NAME, REGION, PROJECT-NAME\ngcloud container clusters get-credentials autopilot-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\nkubectl get nodes -o wide\n```\n\n## Step-05: Review Kubernetes Manifests\n### Step-05-01: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 5 \n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"200m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              cpu: \"400m\"  # 1000m is equal to 1 VCPU core                           \n```\n### Step-05-02: 02-kubernetes-loadbalancer-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp://<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>\n```\n\n## Step-07: Scale your Application\n```t\n# Scale your Application\nkubectl scale --replicas=15 
deployment/myapp1-deployment\n\n# List Pods\nkubectl get pods\n\n# List Nodes\nkubectl get nodes\n```\n\n## Step-08: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Delete GKE Autopilot Cluster \n# NOTE: Dont delete this cluster, as we are going to use this in next demo.\nGo to Kubernetes Engine > Clusters -> autopilot-cluster-private-1 -> DELETE\n```\n\n\n## References\n- https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#default_container_resource_requests\n- https://cloud.google.com/kubernetes-engine/quotas#limits_per_cluster"
  },
  {
    "path": "68-GKE-AutoPilot-Cluster/kube-manifests/01-kubernetes-deployment.yaml",
    "content": "apiVersion: apps/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 5 \n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"200m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              cpu: \"400m\"  # 1000m is equal to 1 VCPU core                                               \n            "
  },
  {
    "path": "68-GKE-AutoPilot-Cluster/kube-manifests/02-kubernetes-loadbalancer-service.yaml",
    "content": "apiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n"
  },
  {
    "path": "69-Access-To-Multiple-Clusters/README.md",
    "content": "---\ntitle: GCP Google Kubernetes Engine Access to Multiple Clusters\ndescription: Implement GCP Google Kubernetes Engine Access to Multiple Clusters\n---\n\n## Step-00: Pre-requisites\n- We should have the two clusters created and ready\n- standard-cluster-private-1\n- autopilot-cluster-private-1\n\n## Step-01: Introduction\n- Configure access to Multiple Clusters\n- Understand kube config file $HOME/.kube/config\n- Understand kubectl config command\n  - kubectl config view\n  - kubectl config current-context\n  - kubectl config use-context <context-name>\n  - kubectl config get-context\n  - kubectl config get-clusters\n\n\n## Step-02: Pre-requisite\n- Verify if you have any two GKE Clusters created and ready for use\n- standard-cluster-private-1\n- autopilot-cluster-private-1\n\n## Step-03: Clean-Up kube config file\n```t\n# Clean existing kube configs\ncd $HOME/.kube\n>config\ncat config\n```\n\n## Step-04: Configure Standard Cluster Access for kubectl\n- Understand commands \n  - kubectl config view\n  - kubectl config current-context\n```t\n# View kubeconfig\nkubectl config view\n\n# Configure kubeconfig for kubectl: standard-cluster-private-1 \ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# View kubeconfig\nkubectl config view\n\n# View Cluster Information\nkubectl cluster-info\n\n# View the current context for kubectl\nkubectl config current-context\n```\n\n## Step-05: Configure Autopilot Cluster Access for kubectl\n```t\n# Configure kubeconfig for kubectl: autopilot-cluster-private-1\ngcloud container clusters get-credentials autopilot-cluster-private-1 --region us-central1 --project kdaida123\n\n# View the current context for kubectl\nkubectl config current-context\n\n# View Cluster Information\nkubectl cluster-info\n\n# View kubeconfig\nkubectl config view\n```\n\n## Step-06: Switch Contexts between clusters\n- Understand the kubectl config command **use-context**\n```t\n# View the current context for kubectl\nkubectl config current-context\n\n# View kubeconfig\nkubectl config view \nGet contexts.context.name to which you want to switch \n\n# Switch Context\nkubectl config use-context gke_kdaida123_us-central1_standard-cluster-private-1\n\n# View the current context for kubectl\nkubectl config current-context\n\n# View Cluster Information\nkubectl cluster-info\n```\n\n## Step-07: List Contexts configured in kubeconfig\n```t\n# List Contexts\nkubectl config get-contexts\n```\n\n## Step-08: List Clusters configured in kubeconfig\n```t\n# List Clusters\nkubectl config get-clusters\n```"
  },
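  {
    "path": "69-Access-To-Multiple-Clusters/EXAMPLE-rename-context.md",
    "content": "# Example: Shortening long GKE context names (illustrative)\n\nA minimal sketch (not part of the original demo): the auto-generated context names (gke_<project>_<region>_<cluster>) are long; kubectl can rename them in place. The short names below are hypothetical:\n```t\n# Rename the contexts\nkubectl config rename-context gke_kdaida123_us-central1_standard-cluster-private-1 standard\nkubectl config rename-context gke_kdaida123_us-central1_autopilot-cluster-private-1 autopilot\n\n# Switch using the short name\nkubectl config use-context autopilot\n\n# Verify\nkubectl config get-contexts\n```"
  },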
  {
    "path": "README.md",
    "content": "# [GCP GKE Google Kubernetes Engine DevOps 75 Real-World Demos](https://stacksimplify.com/courses/gcp-gke-kubernetes/)\n\n[![Image](images/course-title.png \"Google Kubernetes Engine GKE with DevOps 75 Real-World Demos\")](https://stacksimplify.com/courses/gcp-gke-kubernetes/)\n\n\n## Course Modules\n01. Google Cloud Account Creation\n02. Create GKE Standard Public Cluster\t\t\t\t\n03. Install gcloud CLI on mac OS\t\t\t\t\n04. Install gcloud CLI on Windows OS\t\t\t\t\n05. Docker Fundamentals\t\t\t\t\n06. Kubernetes Pods\t\t\t\t\n07. Kubernetes ReplicaSets\t\t\t\t\n08. Kubernetes Deployment - CREATE\t\t\t\t\n09. Kubernetes Deployment - UPDATE\t\t\t\t\n10. Kubernetes Deployment - ROLLBACK\t\t\t\t\n11. Kubernetes Deployments - Pause and Resume\t\t\t\t\n12. Kubernetes ClusterIP and Load Balancer Service\t\t\t\t\n13. YAML Basics\t\t\t\t\n14. Kubernetes Pod  & Service using YAML\t\t\t\t\n15. Kubernetes ReplicaSets using YAML\t\t\t\t\n16. Kubernetes Deployment using YAML\t\t\t\t\n17. Kubernetes Services using YAML\t\t\t\t\n18.  GKE Kubernetes NodePort Service\t\t\t\t\n19. GKE Kubernetes Headless Service\t\t\t\t\n20. GKE Private Cluster\t\t\t\t\n21. How to use GCP Persistent Disks in GKE ?\t\t\t\t\n22. How to use Balanced Persistent Disk in GKE ?\t\t\t\t\n23. How to use Custom Storage Class in GKE for Persistent Disks ?\t\t\t\t\n24. How to use Pre-existing Persistent Disks in GKE ?\t\t\t\t\n25. How to use Regional Persistent Disks in GKE ?\t\t\t\t\n26. How to perform Persistent Disk  Volume Snapshots and Volume Restore ?\t\t\t\t\n28. GKE Workloads and Cloud SQL with Public IP\t\t\t\t\n29. GKE Workloads and Cloud SQL with Private IP\t\t\t\t\n30. GKE Workloads and Cloud SQL with Private IP and No ExternalName Service\t\t\t\t\n31. How to use Google Cloud File Store in GKE ?\t\t\t\t\n32. How to use Custom Storage Class for File Store in GKE ?\t\t\t\t\n33. How to perform File Store Instance Volume Snapshots and Volume Restore ?\t\t\t\t\n34. Ingress Service Basics\t\t\t\t\n35. Ingress Context Path based Routing\t\t\t\t\n36. Ingress Custom Health Checks using Readiness Probes\t\t\t\t\n37. Register a Google Cloud Domain for some advanced Ingress Service Demos \t\t\t\t\n38. Ingress with Static External IP and Cloud DNS\t\t\t\t\n39. Google Managed SSL Certificates for Ingress\t\t\t\t\n40. Ingress HTTP to HTTPS Redirect\t\t\t\t\n41. GKE Workload Identity\t\t\t\t\n42. External DNS Controller Install\t\t\t\t\n43. External DNS - Ingress Service\t\t\t\t\n44. External DNS - Kubernetes Service\t\t\t\t\n45. Ingress Name based Virtual Host Routing\t\t\t\t\n46. Ingress SSL Policy\t\t\t\t\n47. Ingress with Identity-Aware Proxy\t\t\t\t\n48. Ingress with Self Signed SSL Certificates\t\t\t\t\n49. Ingress with Pre-shared SSL Certificates\t\t\t\t\n50. Ingress with Cloud CDN, HTTP Access Logging and Timeouts\t\t\t\t\n51. Ingress with Client IP Affinity\t\t\t\t\n52. Ingress with Cookie Affinity\t\t\t\t\n53. Ingress with Custom Health Checks using BackendConfig CRD\t\t\t\t\n54. Ingress Internal Load Balancer\t\t\t\t\n55. Ingress with Google Cloud Armor\t\t\t\t\n56. Google Artifact Registry\t\t\t\t\n57. GKE Continuous Integration\t\t\t\t\n58. GKE Continuous Delivery\t\t\t\t\n59. Kubernetes Liveness Probes\t\t\t\t\n60. Kubernetes Startup Probes\t\t\t\t\n61. Kubernetes Readiness Probe\t\t\t\t\n62. Kubernetes Requests and Limits\t\t\t\t\n63. GKE Cluster Autoscaling\t\t\t\t\n64. Kubernetes Namespaces\t\t\t\t\n65. Kubernetes Namespaces Resource Quota\t\t\t\t\n66. Kubernetes Namespaces Limit Range\t\t\t\t\n67. 
Kubernetes Horizontal Pod Autoscaler\t\t\t\t\n68. GKE Autopilot Cluster\t\t\t\t\n69. How to manage Multiple Cluster access in kubeconfig ?\t\t\t\t\n\t\n\n\n## Kubernetes Concepts Covered\n01. Kubernetes Deployments (Create, Update, Rollback, Pause, Resume)\n02. Kubernetes Pods\n03. Kubernetes Service of Type LoadBalancer\n04. Kubernetes Service of Type ClusterIP\n05. Kubernetes Ingress Service\n06. Kubernetes Storage Class\n07. Kubernetes Storage Persistent Volume\n08. Kubernetes Storage Persistent Volume Claim\n09. Kubernetes Cluster Autoscaler\n10. Kubernetes Horizontal Pod Autoscaler\n11. Kubernetes Namespaces\n12. Kubernetes Namespaces Resource Quota\n13. Kubernetes Namespaces Limit Range\n14. Kubernetes Service Accounts\n15. Kubernetes ConfigMaps\n16. Kubernetes Requests and Limits\n17. Kubernetes Worker Nodes\n18. Kubernetes Service of Type NodePort\n19. Kubernetes Service of Type Headless\n20. Kubernetes ReplicaSets\n\n## Google Services Covered\n01. Google GKE Standard Cluster\n02. Google GKE Autopilot Cluster\n03. Compute Engine - Virtual Machines\n04. Compute Engine - Storage Disks\n05. Compute Engine - Storage Snapshots\n06. Compute Engine - Storage Images\n07. Compute Engine - Instance Groups\n08. Compute Engine - Health Checks\n09. Compute Engine - Network Endpoint Groups\n10. VPC Networks - VPC\n11. VPC Network - External and Internal IP Addresses\n12. VPC Network - Firewall\n13. Network Services - Load Balancing\n14. Network Services - Cloud DNS\n15. Network Services - Cloud CDN\n16. Network Services - Cloud NAT\n17. Network Services - Cloud Domains\n18. Network Services - Private Service Connection\n19. Network Security - Cloud Armor\n20. Network Security - SSL Policies\n21. IAM & Admin - IAM\n22. IAM & Admin - Service Accounts\n23. IAM & Admin - Roles\n24. IAM & Admin - Identity-Aware Proxy\n25. DevOps - Cloud Source Repositories\n26. DevOps - Cloud Build\n27. DevOps - Cloud Storage\n28. SQL - Cloud SQL\n29. Storage - Filestore\n30. Google Artifact Registry\n31. Operations Logging\n32. 
GCP Monitoring\n\n\n## What will students learn in your course?\n- You will learn to master Kubernetes on Google GKE with 75 real-world demos on Google Cloud Platform, using 20+ Kubernetes and 30+ Google Cloud services\n- You will learn Kubernetes basics (4.5 hours of content)\n- You will create GKE Standard and Autopilot clusters with public and private networks\n- You will learn to implement Kubernetes Storage with Google Persistent Disks and Google File Store\n- You will also use Google Cloud SQL and Cloud Load Balancing to deploy a sample application outlining the LB-to-DB use case in a GKE cluster\n- You will master Kubernetes Ingress concepts in detail on GKE with 22 real-world demos\n- You will implement Ingress Context Path Routing and Name based vhost routing\n- You will implement Ingress with Google Managed SSL Certificates\n- You will master Google GKE Workload Identity with a detailed dedicated demo.\n- You will implement the External DNS Controller to automatically add and delete DNS records in the Google Cloud DNS service\n- You will implement Ingress with Pre-shared SSL and Self Signed Certificates\n- You will implement Ingress with Cloud CDN, Cloud Armor, Internal Load Balancer, Cookie Affinity, IP Affinity, HTTP Access Logging.\n- You will implement Ingress with Google Identity-Aware Proxy\n- You will learn to use Google Artifact Registry with GKE\n- You will implement DevOps Continuous Integration (CI) and Continuous Delivery (CD) with Cloud Build and Cloud Source services\n- You will learn to master Kubernetes Probes (Readiness, Startup, Liveness)\n- You will implement Kubernetes Requests, Limits, Namespaces, Resource Quota and Limit Range\n- You will implement GKE Cluster Autoscaler and Horizontal Pod Autoscaler\n\n\n\n## What are the requirements or prerequisites for taking your course?\n- You must have a Google Cloud account to follow along with the hands-on activities.\n- You don't need any prior knowledge of Kubernetes. The course starts from the very basics of Kubernetes and takes you to advanced levels\n- Basic familiarity with any cloud platform is required to understand the terminology\n\n## Who is this course for?\n- Infrastructure Architects, Sysadmins or Developers who are planning to master Kubernetes from a real-world perspective on Google Cloud Platform (GCP)\n- Any beginner who is interested in learning Kubernetes with Google Cloud Platform (GCP)\n- Any beginner who is planning their career in DevOps\n\n\n## Github Repositories used for this course\n- [Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos](https://github.com/stacksimplify/terraform-on-aws-eks)\n- [Course Presentation](https://github.com/stacksimplify/terraform-on-aws-eks/tree/main/course-presentation)\n- [Kubernetes Fundamentals](https://github.com/stacksimplify/kubernetes-fundamentals)\n- **Important Note:** Please go to these repositories, FORK them, and make use of them during the course.\n\n\n## Each of my courses comes with\n- Amazing Hands-on Step By Step Learning Experiences\n- Real Implementation Experience\n- Friendly Support in the Q&A section\n- 30-Day \"No Questions Asked\" Money-Back Guarantee by Udemy\n\n## My Other AWS Courses\n- [Udemy Enroll](https://www.stacksimplify.com/azure-aks/courses/stacksimplify-best-selling-courses-on-udemy/)\n\n## Instructor Profile\n- [Kalyan Reddy Daida - StackSimplify](https://stacksimplify.com/about/)\n\n# HashiCorp Certified: Terraform Associate - 50 Practical Demos\n[![Image](https://stacksimplify.com/course-images/hashicorp-certified-terraform-associate-highest-rated.png \"HashiCorp Certified: Terraform Associate - 50 Practical Demos\")](https://stacksimplify.com/courses/hashicorp-terraform-associate-aws/)\n\n# AWS EKS - Elastic Kubernetes Service - Masterclass\n[![Image](https://stacksimplify.com/course-images/AWS-EKS-Kubernetes-Masterclass-DevOps-Microservices-course.png \"AWS EKS Kubernetes - Masterclass\")](https://stacksimplify.com/courses/aws-eks-masterclass/)\n\n\n# Azure Kubernetes Service with Azure DevOps and Terraform\n[![Image](https://stacksimplify.com/course-images/azure-kubernetes-service-with-azure-devops-and-terraform.png \"Azure Kubernetes Service with Azure DevOps and Terraform\")](https://stacksimplify.com/courses/azure-aks-devops-terraform/)\n\n# Terraform on AWS with SRE & IaC DevOps | Real-World 20 Demos\n[![Image](https://stacksimplify.com/course-images/terraform-on-aws-best-seller.png \"Terraform on AWS with SRE & IaC DevOps | Real-World 20 Demos\")](https://stacksimplify.com/courses/terraform-on-aws-sre/)\n\n# Azure - HashiCorp Certified: Terraform Associate - 70 Demos\n[![Image](https://stacksimplify.com/course-images/azure-hashicorp-certified-terraform-associate-highest-rated.png \"Azure - HashiCorp Certified: Terraform Associate - 70 Demos\")](https://stacksimplify.com/courses/hashicorp-terraform-associate-azure/)\n\n# Terraform on Azure with IaC DevOps and SRE | Real-World 25 Demos\n\n[![Image](https://stacksimplify.com/course-images/terraform-on-azure-with-iac-azure-devops-sre-1.png \"Terraform on Azure with IaC DevOps and SRE | Real-World 25 Demos\")](https://stacksimplify.com/courses/terraform-on-azure/)\n\n# [Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos](https://stacksimplify.com/courses/terraform-aws-eks/)\n\n[![Image](https://stacksimplify.com/course-images/terraform-on-aws-eks-kubernetes.png \"Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos 
\")](https://stacksimplify.com/courses/terraform-aws-eks/)\n\n---\n\n## My Other Courses (383,000+ Students, 20 Courses)\n\n> All courses available at [stacksimplify.com/courses](https://stacksimplify.com/courses/)\n\n### AWS Courses\n\n| Course | Students | Rating |\n|--------|----------|--------|\n| [AWS EKS Kubernetes Masterclass](https://stacksimplify.com/courses/aws-eks-masterclass/) | 70,041+ | 4.6 (5,495 ratings) |\n| [AWS VPC Transit Gateway](https://stacksimplify.com/courses/aws-vpc-transit-gateway/) | 52,243+ | 4.6 (790 ratings) |\n| [Terraform on AWS with SRE and IaC DevOps](https://stacksimplify.com/courses/terraform-on-aws-sre/) | 31,006+ | 4.6 (3,347 ratings) |\n| [Terraform on AWS EKS Kubernetes IaC SRE](https://stacksimplify.com/courses/terraform-aws-eks/) | 26,929+ | 4.5 (2,238 ratings) |\n| [HashiCorp Certified: Terraform Associate (AWS)](https://stacksimplify.com/courses/hashicorp-terraform-associate-aws/) | 16,835+ | 4.6 (1,754 ratings) |\n| [AWS CloudFormation Simplified](https://stacksimplify.com/courses/aws-cloudformation/) | 16,223+ | 4.3 (1,469 ratings) |\n| [AWS Fargate and ECS Masterclass](https://stacksimplify.com/courses/aws-fargate-ecs/) | 15,208+ | 4.4 (1,051 ratings) |\n| [AWS CodePipeline CI/CD](https://stacksimplify.com/courses/aws-codepipeline/) | 9,832+ | 4.0 (966 ratings) |\n| [AWS Elastic Beanstalk Master Class](https://stacksimplify.com/courses/aws-elastic-beanstalk/) | 7,588+ | 4.3 (373 ratings) |\n| [Ultimate DevOps Real-World Project on AWS](https://stacksimplify.com/courses/ultimate-devops-real-world-project-on-aws/) | 4,772+ | 4.72 (358 ratings) |\n\n### Azure Courses\n\n| Course | Students | Rating |\n|--------|----------|--------|\n| [Azure Kubernetes Service with Azure DevOps and Terraform](https://stacksimplify.com/courses/azure-aks-devops-terraform/) | 48,551+ | 4.6 (6,196 ratings) |\n| [Terraform on Azure with IaC DevOps SRE](https://stacksimplify.com/courses/terraform-on-azure/) | 17,918+ | 4.7 (1,911 ratings) |\n| [Azure HashiCorp Certified: Terraform Associate](https://stacksimplify.com/courses/hashicorp-terraform-associate-azure/) | 16,938+ | 4.5 (1,985 ratings) |\n| [Azure Kubernetes Service AGIC Ingress](https://stacksimplify.com/courses/azure-aks-agic/) | 2,012+ | 4.6 (112 ratings) |\n\n### GCP Courses\n\n| Course | Students | Rating |\n|--------|----------|--------|\n| [GCP Google Kubernetes Engine GKE with DevOps](https://stacksimplify.com/courses/gcp-gke-kubernetes/) | 8,769+ | 4.4 (779 ratings) |\n| [GCP Associate Cloud Engineer Certification](https://stacksimplify.com/courses/gcp-associate-cloud-engineer/) | 6,007+ | 4.6 (599 ratings) |\n| [GCP Terraform on Google Cloud](https://stacksimplify.com/courses/gcp-terraform/) | 2,600+ | 4.4 (213 ratings) |\n| [GCP GKE Terraform on Google Kubernetes Engine](https://stacksimplify.com/courses/gcp-gke-terraform/) | 2,040+ | 4.6 (155 ratings) |\n\n### DevOps and General\n\n| Course | Students | Rating |\n|--------|----------|--------|\n| [Helm Masterclass: 50 Practical Demos](https://stacksimplify.com/courses/helm-masterclass/) | 12,069+ | 4.7 (915 ratings) |\n| [Docker in a Weekend: 40 Practical Demos](https://stacksimplify.com/courses/docker-weekend/) | 3,802+ | 4.6 (361 ratings) |\n\n---\n\n## Instructor Profile\n- [Kalyan Reddy Daida - StackSimplify](https://stacksimplify.com/about/)\n\n---\n\n## Connect with Me\n- [YouTube - Cloud & DevOps Tutorials](https://www.youtube.com/@stacksimplify)\n- [LinkedIn - Kalyan Reddy](https://www.linkedin.com/in/kalyan-reddy/)\n- [GitHub - 
StackSimplify](https://github.com/stacksimplify)\n\n---\n\n"
  },
  {
    "path": "git-deploy.sh",
    "content": "#!/bin/sh\n\necho \"Add files and do local commit\"\ngit add .\ngit commit -am \"Welcome to StackSimplify\"\n\necho \"Pushing to Github Repository\"\ngit push\n"
  }
]