[
  {
    "path": "Day 01 Introduction-BaseLabCreation - Variables-Script-grep-awk-cut/README.md",
    "content": "#  Introduction-BaseLabCreation - Variables-Script-grep-awk-cut\n\n![1](https://github.com/user-attachments/assets/bb18e257-ad41-4d32-acfe-4963bb23cb8f)\n\n# DevSecOps Scripting Course - Day 01 & 02\n\n## Course Overview\nThis course is designed to help you get started with DevSecOps by covering shell scripting, cloud infrastructure, and essential security tools. You'll work with real-world tasks, using various tools and services to build a secure and functional DevSecOps environment.\n\n---\n\n## Prerequisites\n### Cloud Platforms:\n- **AWS**, **Azure**, or **GCP** – choose any one.\n\n### DevSecOps Tools:\n- **SonarQube** – for code quality and security analysis.\n- **HashiCorp Vault** – for managing secrets and passwords.\n- **Trivy** – for container image scanning.\n- **Ansible Vault** – for secure secret management.\n- **CISO** – for cybersecurity insights.\n\n### Tools Required for Scripting:\n- **JQ** – For parsing JSON data.\n- **Net-tools** – Network utilities like `ifconfig`, `nslookup`.\n- **Unzip** – To extract `.zip` files.\n\n---\n\n## Task: Create a Base Lab Environment\n\n### Objective:\nSet up a VPC, create a new key pair, deploy an instance, and access it using PuTTY.\n\n### Steps:\n1. **Create VPC and Instance**:\n   - Create a new VPC with a single EC2 instance.\n   - Generate a new key pair (PEM format).\n\n2. **Generate PPK File for PuTTY**:\n   - Open PuTTYgen and load the PEM file.\n   - Generate and save a new private key (PPK format).\n\n3. **Login via PuTTY**:\n   - Open PuTTY and connect to `ubuntu@<EC2-IP>`.\n   - Customize window settings (bold text, window size, colors).\n   - Under `Connection > SSH > Auth`, browse and load your PPK file.\n   - Save the session as \"SecOps Session\" for future use.\n\n> **Note:** In production, avoid running `sudo su -` as you may not have root access. Running root commands could result in access to sensitive operations, like deleting logs.\n\n4. 
**Install Required Tools**:\n   ```bash\n   sudo apt install jq -y && apt install net-tools -y && apt install unzip -y\n   ```\n\n---\n\n## Shell Scripting Tasks\n\n### Task 1: Using Tmux\nTo manage multiple servers or sessions, break the screen into two:\n- Use `Ctrl + b`, then `Shift + \"` to split the screen horizontally.\n- For vertical split: `Ctrl + b`, then `Shift + 5`.\n- Useful for monitoring multiple servers.\n\n### Task 2: Print Time Repeatedly\nPrint the date and time every second for 10 seconds:\n```bash\nfor i in {1..10}\ndo\n  echo $(date)\n  sleep 1\ndone\n```\n\n> **Note:** To get only the day, date, and time, modify the above script using `awk`:\n```bash\nfor i in {1..10}\ndo\n  echo $(date) | awk -F \" \" '{print $1, $2, $3, $4}'\n  sleep 1\ndone\n```\n\n### Task 3: Understanding Variables in Shell Scripting\nVariable declaration is useful for repeated values.\n1. Declaring a variable and using it:\n   ```bash\n   RG='Saikiran-SecOps'\n   echo $RG\n   echo \"${RG}\"\n   ```\n\n2. Using variables with single and double quotes:\n   ```bash\n   X=10\n   RG='Saikiran-SecOps-$X'  # Won't expand the variable\n   echo $RG  # Outputs: Saikiran-SecOps-$X\n\n   RG=\"Saikiran-SecOps-$X\"  # Will expand the variable\n   echo $RG  # Outputs: Saikiran-SecOps-10\n   ```\n\n---\n\n## Task 4: AWS CLI and Data Manipulation\n\n### Install AWS CLI:\nRun the following commands:\n```bash\nsudo apt install awscli -y\naws configure  # Configure AWS access and secret keys.\n```\n\n### S3 Bucket Example:\n1. List the contents of an S3 bucket:\n   ```bash\n   aws s3 ls\n   ```\n\n2. Use `cut` to extract specific fields:\n   ```bash\n   aws s3 ls | cut -d ' ' -f1,2,3\n   ```\n\n3. Use `awk` for more complex field manipulation:\n   ```bash\n   aws s3 ls | awk -F \" \" '{print $3,$2,$1}'\n   ```\n\n4. 
Use `grep` to find specific patterns:\n   ```bash\n   aws s3 ls | grep -E ^www[-]\n   ```\n\n---\n\n## Shell Script Example: `get_bucket.sh`\n\n```bash\n#!/bin/bash\naws s3 ls | cut -d ' ' -f 3 | grep -E ^www[-]\necho \"Hello Saikiran, welcome to DevSecOps!\"\n```\n\n### Execution:\n```bash\nchmod +x get_bucket.sh\n./get_bucket.sh\n```\n\n> **Note:** Do **not** use `chmod 777` as it grants full permissions to everyone, which is a security risk. Use `chmod 700` instead to restrict access to the owner.\n\n---\n\n## Debugging Scripts\n\nTo enable debugging in a script:\n```bash\n#!/bin/bash\nset -x  # Enable debugging\n```\n\nThis will print each command before executing it, helping you to debug.\n\n---\n\n## Conclusion\nThis README covers Day 01 of DevSecOps, focusing on basic shell scripting, AWS tools, and security best practices. You should now be familiar with setting up a basic lab, working with shell scripts, and using AWS CLI for DevSecOps tasks.\n"
  },
  {
    "path": "Day 02 Arguments-PassingSpecialparams/README.md",
    "content": "# Day 02 Arguments-PassingSpecialparams\n\n![02](https://github.com/user-attachments/assets/13165920-47f8-4843-b6d4-00af9ca7ac5f)\n\n\nWelcome to the **Arguments-PassingSpecialparams** repository! This project focuses on demonstrating the usage of parameter passing, special shell parameters, and output redirection in Bash scripting, specifically in the context of AWS VPC management.\n\n## Table of Contents\n\n- [Introduction](#introduction)\n- [Prerequisites](#prerequisites)\n- [Getting Started](#getting-started)\n- [Scripts Overview](#scripts-overview)\n  - [get_vpc.sh](#get_vpcsh)\n  - [script.sh](#scriptsh)\n- [Usage](#usage)\n  - [Running `get_vpc.sh`](#running-get_vpcsh)\n  - [Running `script.sh`](#running-scriptsh)\n- [Understanding Special Parameters](#understanding-special-parameters)\n  - [`$?`](#-exit-code)\n  - [`$@` and `$*`](#-and-)\n  - [`$#`](#-number-of-arguments)\n- [Error Handling and Output Redirection](#error-handling-and-output-redirection)\n- [Contributing](#contributing)\n- [License](#license)\n\n## Introduction\n\nThis repository contains Bash scripts designed to interact with AWS EC2 to retrieve VPC (Virtual Private Cloud) details across different regions. The scripts demonstrate:\n\n- **Passing Parameters**: How to pass and utilize arguments in Bash scripts.\n- **Special Parameters**: Utilizing `$?`, `$@`, `$*`, and `$#` to handle script behavior based on inputs and command execution status.\n- **Output Redirection**: Managing script output and errors effectively.\n\n## Prerequisites\n\nBefore using the scripts, ensure you have the following installed and configured:\n\n- **AWS CLI**: [Installation Guide](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)\n- **jq**: A lightweight and flexible command-line JSON processor. 
[Installation Guide](https://stedolan.github.io/jq/download/)\n- **Bash Shell**: Most Unix-based systems come with Bash pre-installed.\n- **AWS Credentials**: Ensure your AWS credentials are configured with the necessary permissions to describe VPCs. [Configuration Guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html)\n\n## Getting Started\n\n1. **Clone the Repository**\n\n   ```bash\n   git clone https://github.com/yourusername/Arguments-PassingSpecialparams.git\n   cd Arguments-PassingSpecialparams\n   ```\n\n2. **Make Scripts Executable**\n\n   ```bash\n   chmod +x get_vpc.sh script.sh\n   ```\n\n## Scripts Overview\n\n### `get_vpc.sh`\n\nThis script retrieves VPC IDs from a specified AWS region.\n\n**Script Content:**\n\n```bash\n#!/bin/bash\n\n# Check if at least one argument is provided\nif [ $# -gt 0 ]; then\n    REGIONS=$@\n    echo \"Fetching VPC IDs for regions: $REGIONS\"\n    for REGION in $REGIONS; do\n        aws ec2 describe-vpcs --region ${REGION} | jq \".Vpcs[].VpcId\" -r\n    done\nelse\n    echo \"You have provided $# arguments. Please provide at least one region.\"\n    exit 1\nfi\n```\n\n### `script.sh`\n\nThis script demonstrates the use of special parameters and error handling by checking the AWS CLI version before proceeding to retrieve VPC details.\n\n**Script Content:**\n\n```bash\n#!/bin/bash\n\n# Suppress AWS CLI version output\naws --version > /dev/null 2>&1\n\n# Check if the previous command was successful\nif [ $? -eq 0 ]; then\n    REGIONS=$@\n    echo \"Fetching VPC IDs for regions: $REGIONS\"\n    for REGION in $REGIONS; do\n        aws ec2 describe-vpcs --region ${REGION} | jq \".Vpcs[].VpcId\" -r\n    done\nelse \n    echo \"Incorrect AWS command. 
Please check your AWS CLI installation.\"\n    exit 1\nfi\n```\n\n## Usage\n\n### Running `get_vpc.sh`\n\nRetrieve VPC IDs from one or multiple AWS regions.\n\n**Example:**\n\n```bash\n./get_vpc.sh us-east-1 ap-south-1 us-east-2\n```\n\n**Output:**\n\n```\nvpc-0abcd1234efgh5678\nvpc-1bcde2345fghij678\n...\n```\n\n### Running `script.sh`\n\nEnsure AWS CLI is correctly installed and then retrieve VPC IDs.\n\n**Example:**\n\n```bash\n./script.sh us-east-1 us-east-2 ap-southeast-1\n```\n\n**Output:**\n\n```\nFetching VPC IDs for regions: us-east-1 us-east-2 ap-southeast-1\nvpc-0abcd1234efgh5678\nvpc-1bcde2345fghij678\n...\n```\n\n**Handling Errors:**\n\n- If AWS CLI is not installed or incorrectly configured, the script will output:\n\n  ```\n  Incorrect AWS command. Please check your AWS CLI installation.\n  ```\n\n- If no regions are provided as arguments, the script will output:\n\n  ```\n  You have provided 0 arguments. Please provide at least one region.\n  ```\n\n## Understanding Special Parameters\n\n### `$?` – Exit Code\n\n- Represents the exit status of the last executed command.\n- `0` indicates success, while any non-zero value indicates an error.\n\n**Example:**\n\n```bash\nls -al\necho $?  # Outputs 0 if successful\nls nonexistentfile\necho $?  
# Outputs a non-zero value indicating an error\n```\n\n### `$@` and `$*` – All Positional Parameters\n\n- Both represent all the arguments passed to the script.\n- Unquoted they behave identically; quoted they differ: `\"$@\"` expands each argument as a separate word, while `\"$*\"` joins all arguments into a single word separated by the first character of `IFS`.\n\n**Usage in Scripts:**\n\n```bash\nREGIONS=$@\n# or\nREGIONS=$*\n```\n\n### `$#` – Number of Arguments\n\n- Represents the number of arguments passed to the script.\n\n**Example:**\n\n```bash\necho \"Number of arguments: $#\"\n```\n\n## Error Handling and Output Redirection\n\n**Output Redirection:**\n\n- **Standard Output (`stdout`)**: Default output stream.\n- **Standard Error (`stderr`)**: Output stream for errors.\n\n**Redirecting Outputs:**\n\n- Suppress standard output:\n\n  ```bash\n  aws --version > /dev/null\n  ```\n\n- Suppress both standard output and standard error:\n\n  ```bash\n  aws --version > /dev/null 2>&1\n  ```\n\n**Using Conditional Statements:**\n\nUtilize exit codes to control script flow.\n\n```bash\naws --version > /dev/null 2>&1\nif [ $? -eq 0 ]; then\n    echo \"AWS CLI found. Proceeding with the script.\"\nelse\n    echo \"AWS CLI not found. Exiting.\"\n    exit 1\nfi\n```\n\n## License\n\nThis project is licensed under the [MIT License](LICENSE).\n\n---\n\n*Happy Scripting!*\n\n"
  },
  {
    "path": "Day 03 OutputRedirection-For-While/README.md",
    "content": "![03](https://github.com/user-attachments/assets/6be236b3-3be1-4c2d-ade5-3341265b409d)\n\n# Day 03 OutputRedirection-For-While\n\nThis project demonstrates **output redirection** and the use of **for** and **while** loops in Bash scripting, along with examples using **standard input**, **output**, and **error** redirections.\n\n## Key Concepts\n\n### Standard Streams:\n- **stdin**: Standard Input (File descriptor 0)\n- **stdout**: Standard Output (File descriptor 1)\n- **stderr**: Standard Error (File descriptor 2)\n\n### Output Redirection:\n- `>` : Redirects the output and **overwrites** the content in the file.\n- `>>` : Redirects the output and **appends** it to the file.\n- **Tee Command**: Redirects the output to a file and **displays it on the screen** simultaneously.\n\n---\n\n## Script Example: `std-script.sh`\n\nThis Bash script demonstrates both valid and invalid commands. We'll focus on how to redirect output.\n\n### Script:\n\n```bash\n#!/bin/bash\nls -al           # Valid command, prints directory listing\nSaikiran         # Invalid command, will trigger an error\ndf -h            # Valid command, prints disk space usage\nAvinash          # Invalid command, will trigger an error\nfree             # Valid command, prints memory usage\nsai              # Invalid command, will trigger an error\ncat /etc/hostname # Valid command, prints hostname\navi              # Invalid command, will trigger an error\n```\n\n### How to Execute:\n1. Save the script as `std-script.sh`.\n2. Run it using `bash std-script.sh`.\n\n---\n\n### Requirements:\n1. **Print only successful commands**:\n   ```bash\n   bash std-script.sh 2> /dev/null\n   ```\n   - Redirects any errors (stderr) to `/dev/null`, so only the output of successful commands is shown.\n\n2. 
**Print only failed commands**:\n   ```bash\n   bash std-script.sh 1> /dev/null\n   ```\n   - Redirects standard output (stdout) to `/dev/null`, so only error messages (stderr) are displayed.\n\n---\n\n### Overwriting and Appending Output:\n- To redirect both **stdout** and **stderr** to a file:\n  ```bash\n  bash std-script.sh > /tmp/error 2>&1\n  ```\n  - This will **overwrite** the file with both standard output and errors.\n\n- To **append** instead of overwriting:\n  ```bash\n  bash std-script.sh >> /tmp/error 2>&1\n  ```\n\n---\n\n### Display and Save Output:\nTo display output on the screen **and** save it to a file:\n```bash\nbash std-script.sh | tee /tmp/tee1\n```\n- If you want to **append** to the file instead of overwriting:\n  ```bash\n  bash std-script.sh 2>&1 | tee -a /tmp/tee1\n  ```\n\n---\n\n## For Loops vs While Loops\n\n### For Loops:\nUsed when the number of iterations is known. For example, printing numbers from 1 to 100.\n\n#### Script: `loops.sh`\n\n```bash\n#!/bin/bash\nfor i in {1..100}\ndo\n    echo $i\ndone\n```\n\n### While Loops:\nUsed when the number of iterations is not known and the loop continues as long as the condition is true.\n\n#### Example:\nCheck if a website is working using a **while loop**:\n```bash\nwhile true\ndo\n    curl -s https://www.google.com | grep -i google\n    sleep 1\ndone\n```\n\n---\n\n## Working with Python and Bash\n\n### Python Example:\n```python\nx = 5 * 4\nprint(x)\n```\n\n### Bash Equivalent:\n```bash\nx=$(expr 5 \\* 4)\necho $x\n```\n- In Bash, arithmetic needs a helper: the external `expr` command, or the preferred built-in arithmetic expansion `$((5 * 4))`.\n\n---\n\n## Printing Even and Odd Numbers\n\n### Even Numbers:\n```bash\n#!/bin/bash\nfor i in {1..100}; do\n    if [ $((i % 2)) -eq 0 ]; then\n        echo \"$i is an even number\"\n    fi\ndone\n```\n\n### Even and Odd Numbers:\n```bash\n#!/bin/bash\nfor i in {1..100}\ndo\n    if [ $(( i % 2 )) -ne 0 ]; then\n        echo \"$i is an odd number\"\n    else\n        echo \"$i is an even number\"\n    fi\ndone\n```\n\n---\n\n## Conclusion\n\nThis project covers the basic concepts of output redirection in Linux, the usage of for and while loops, and demonstrates both valid and invalid command execution. Whether you are handling script output or automating tasks, understanding how to redirect outputs and loop through commands is essential for DevOps and system automation.\n\nFeel free to explore the scripts, modify them, and experiment with different redirection methods and loop structures!\n\n---\n\nHappy scripting! 😊\n"
  },
  {
    "path": "Day 04 UserAutomation/README.md",
    "content": "# Day 04 UserAutomation\n\n![a-3d-render-of-a-dark-themed-cybersecurity-confere-TU2eVZcIRda9RcDkaObkyg-yt2DCIPgQIaI9w7_DYZnYw](https://github.com/user-attachments/assets/75314cc4-86a5-41bb-b47b-acb0d3765555)\n\nThis script automates the process of creating new users on a Linux system. It checks if a user already exists, creates a new user if they don't, generates a random password with a special character, and forces the user to reset their password on the first login.\n\n## Features:\n1. Checks if the provided username already exists in the system.\n2. If the user doesn’t exist, it creates the user with a randomly generated password.\n3. The password includes a special character and a random number.\n4. The user is forced to reset their password during their first login.\n5. Supports creating multiple users in one execution.\n6. Includes automated SSH configuration changes to enable password authentication.\n\n## Prerequisites:\n- You must have root or sudo privileges to run this script.\n- Ensure that `passwd` and `sed` are installed on your system.\n\n## How It Works:\n1. **Check for Existing Users:**  \n   The script checks the `/etc/passwd` file to see if the provided username already exists.\n   \n2. **Create New User:**  \n   If the user does not exist, it creates a new user with the `useradd` command and assigns a randomly generated password.\n   \n3. **Generate Random Password:**  \n   The password is created using a combination of random numbers and a randomly selected special character from a predefined set.\n   \n4. **SSH Configuration:**  \n   The script uses `sed` to modify the `/etc/ssh/sshd_config` file to enable password authentication. It also creates a backup of this file before making changes.\n   \n5. 
**Multiple Users Creation:**  \n   The script allows you to create multiple users by passing multiple arguments.\n\n## Script Example:\n\n```bash\n#!/bin/bash\nif [ $# -gt 0 ]; then\n    USER=$1\n    echo $USER\nelse\n    echo \"Please enter a valid parameter\"\nfi\n\n##ADDING-USER##\n\n#!/bin/bash\nif [ $# -gt 0 ]; then\n    USER=$1\n    echo $USER\n    EXISTING_USER=$(grep -i -w \"$USER\" /etc/passwd | cut -d \":\" -f1)\n    if [ \"${USER}\" = \"${EXISTING_USER}\" ]; then\n        echo \"The user $USER is already present on this machine. Please enter another username\"\n    else\n        echo \"Let's create a new username\"\n        sudo useradd -m $USER --shell /bin/bash\n    fi\nelse\n    echo \"Please enter a valid parameter\"\nfi\n\n##password ##\n\n#!/bin/bash\nif [ $# -gt 0 ]; then\n    USER=$1\n    echo $USER\n    EXISTING_USER=$(grep -i -w \"$USER\" /etc/passwd | cut -d \":\" -f1)\n    if [ \"${USER}\" = \"${EXISTING_USER}\" ]; then\n        echo \"The user $USER is already present on this machine. Please enter another username\"\n    else\n        echo \"Let's create a new username\"\n        sudo useradd -m $USER --shell /bin/bash\n        SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)\n        PASSWORD=\"IndianArmy@${RANDOM}${SPEC}\"\n        echo \"$USER:$PASSWORD\" | sudo chpasswd\n        echo \"The temporary password for $USER is ${PASSWORD}\"\n        sudo passwd -e $USER\n    fi\nelse\n    echo \"Please enter a valid parameter\"\nfi\n\n# sed -i \"58 s/.*PasswordAuthentication.*/PasswordAuthentication yes/g\" /etc/ssh/sshd_config\n\n##Multi User passing ##\n\n#!/bin/bash\nif [ $# -gt 0 ]; then\n    for USER in \"$@\"; do\n        echo $USER\n        EXISTING_USER=$(grep -i -w \"$USER\" /etc/passwd | cut -d \":\" -f1)\n        if [ \"${USER}\" = \"${EXISTING_USER}\" ]; then\n            echo \"The user $USER is already present on this machine. Please enter another username\"\n        else\n            echo \"Let's create a new username\"\n            sudo useradd -m $USER --shell /bin/bash\n            SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)\n            PASSWORD=\"IndianArmy@${RANDOM}${SPEC}\"\n            echo \"$USER:$PASSWORD\" | sudo chpasswd\n            echo \"The temporary password for $USER is ${PASSWORD}\"\n            sudo passwd -e $USER\n        fi\n    done\nelse\n    echo \"Please enter a valid parameter\"\nfi\n\n##regex##\n\n#regex - Regular Expressions#\n#!/bin/bash\nif [ $# -gt 0 ]; then\n    for USER in \"$@\"; do\n        echo $USER\n        if [[ $USER =~ ^[a-zA-Z]+$ ]]; then\n            EXISTING_USER=$(grep -i -w \"$USER\" /etc/passwd | cut -d ':' -f1)\n            if [ \"${USER}\" = \"${EXISTING_USER}\" ]; then\n                echo \"$USER already exists. Please create a new user\"\n            else\n                echo \"Let's create the new user $USER\"\n                sudo useradd -m $USER --shell /bin/bash\n                SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)\n                PASSWORD=\"IndianArmy@${RANDOM}${SPEC}\"\n                echo \"$USER:$PASSWORD\" | sudo chpasswd\n                echo \"The temporary password for the user is ${PASSWORD}\"\n                sudo passwd -e $USER\n            fi\n        else\n            echo \"The username must contain only alphabetic characters\"\n        fi\n    done\nelse\n    echo \"Please pass at least one argument\"\nfi\n```\n\n## SSH Configuration (Optional):\nTo enable password authentication for newly created users, the script modifies the SSH configuration using `sed`. This is important for AWS instances, where password authentication is disabled by default.\n\n```bash\n# Backup the sshd_config file\nsudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config_backup\n\n# Modify the sshd_config file to enable password authentication\nsudo sed -i \"s/.*PasswordAuthentication.*/PasswordAuthentication yes/g\" /etc/ssh/sshd_config\n\n# Restart the SSH service (the unit is named ssh or sshd depending on the distribution)\nsudo systemctl restart ssh\n```\n\n## How to Run the Script:\n1. Save the script as `user-automation.sh`.\n2. Run the script with one or more usernames as arguments:\n   ```bash\n   bash user-automation.sh username1 username2\n   ```\n   Example:\n   ```bash\n   bash user-automation.sh alice bob\n   ```\n\n## Notes:\n- Ensure that password authentication is enabled on your system if you want to use password-based login for the newly created users.\n- This script automatically forces the new user to reset their password on first login.\n"
  },
  {
    "path": "Day 04 UserAutomation/script.sh",
    "content": "#!/bin/bash\nif [ $# -gt 0 ]; then\n    USER=$1\n    echo $USER\nelse\n    echo \" Please Enter the Valid parameter \"\nfi\n\n##ADDING-USER##\n\n#!/bin/bash\nif [ $# -gt 0 ]; then\n    USER=$1\n    echo $USER\n    EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d \":\" -f1)\n    if [ \"${USER}\" = \"${EXISTING_USER}\" ]; then\n        echo \"The $USER you have entered is already present in the machine, Please Enter the Another USername\"\n    else\n        echo \" Lets Create a New New username\"\n        sudo useradd -m $USER --shell /bin/bash\n    fi\nelse\n    echo \" Please Enter the Valid parameter \"\n\nfi\n\n##password ##\n\n#!/bin/bash\nif [ $# -gt 0 ]; then\n    USER=$1\n    echo $USER\n    EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d \":\" -f1)\n    if [ \"${USER}\" = \"${EXISTING_USER}\" ]; then\n        echo \"The $USER you have entered is already present in the machine, Please Enter the Another USername\"\n    else\n        echo \" Lets Create a New New username\"\n        sudo useradd -m $USER --shell /bin/bash\n        SPEC=$(echo ' !@#$%^&*()_' | fold -w1 | shuf | head -1)\n        PASSWORD=\"IndianArmy@${RANDOM}${SPEC}\"\n        echo \"$USER:$PASSWORD\" | sudo chpasswd\n        echo \"The temporary password the $USER is ${PASSWORD}\"\n        passwd -e $USER\n    fi\nelse\n    echo \" Please Enter the Valid parameter \"\n\nfi\n\n# Sed -i “58 s/.*PasswordAuthentication.*/PasswordAuthentication yes/g” /etc/ssh/sshd_config\n\n##Multi User passing ##\n\n#!/bin/bash\n\n#!/bin/bash\nif [ $# -gt 0 ]; then\n    for USER in $@; do\n        echo $USER\n        EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d \":\" -f1)\n        if [ \"${USER}\" = \"${EXISTING_USER}\" ]; then\n            echo \"The $USER you have entered is already present in the machine, Please Enter the Another USername\"\n        else\n            echo \" Lets Create a New New username\"\n            sudo useradd -m $USER --shell 
/bin/bash\n            SPEC=$(echo ' !@#$%^&*()_' | fold -w1 | shuf | head -1)\n            PASSWORD=\"IndianArmy@${RANDOM}${SPEC}\"\n            echo \"$USER:$PASSWORD\" | sudo chpasswd\n            echo \"The temporary password the $USER is ${PASSWORD}\"\n            passwd -e $USER\n        fi\n    done\nelse\n    echo \" Please Enter the Valid parameter \"\n\nfi\n\n##regex##\n\n#regex- Regular Expressions#\n#!/bin/bash\nif [ $# -gt 0 ]; then\n    for USER in $@; do\n        echo $USER\n        if [[ $USER =~ ^[a-zA-Z]+$ ]]; then\n            EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d ':' -f1)\n            if [ \"${USER}\" = \"${EXISTING_USER}\" ]; then\n                echo \"$USER is already exisitin, Please create a New user\"\n            else\n                echo \"Lets create the New $USER\"\n                sudo useradd -m $USER --shell /bin/bash\n                SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)\n                PASSWORD=\"IndianArmy@${RANDOM}${SPEC}\"\n                echo \"$USER:$PASSWORD\" | sudo chpasswd\n                echo \"The termporary password for the user is ${PASSWORD}\"\n                passwd -e $USER\n            fi\n        else\n            echo \"The User Must Contain Alphabets\"\n        fi\n    done\nelse\n    echo \"Please pass the Argument\"\nfi\n"
  },
  {
    "path": "Day 05 RegEx-Break-Continue-CustomExitCodes/README.md",
    "content": "# Day 05 RegEx-Break-Continue-CustomExitCodes\n\n![05](https://github.com/user-attachments/assets/27fd624d-bb91-46d5-b710-3b04db991e75)\n\n\n## Features:\n1. **Regular Expressions in Shell Scripts**\n2. **Break and Continue for Iteration Control**\n3. **Custom Exit Codes**\n4. **Arrays in Shell Scripts**\n\n---\n\n## 1. **User Automation with Regex**\n\nRegular expressions are a powerful tool in shell scripts for tasks like input validation. In this repository, we demonstrate how to use regular expressions to enforce patterns in username creation, specifically requiring users to create usernames that follow a certain format (e.g., `3 lowercase letters followed by 3 numbers`).\n\n**Example:**\n```bash\nif [[ $USER =~ ^[a-z]{3}[0-9]{3}$ ]] ; then\n  echo \"Username is valid\"\nelse\n  echo \"Username is invalid\"\nfi\n```\n\n## 2. **Common Regex Patterns:**\n\n- `\\d` - Matches any digit.\n- `\\D` - Matches any non-digit character.\n- `\\s` - Matches any whitespace.\n- `\\W` - Matches any non-word character (like punctuation).\n\n**Example:**\nTo find a phone number pattern like `123-456-7890`, you can use:\n```regex\n\\d{3}-\\d{3}-\\d{4}\n```\n\n---\n\n## 3. **Iteration Control Using Break and Continue**\n\nIn shell scripting, `break` and `continue` are essential for controlling loops.\n\n- **Break**: Used to exit a loop when a condition is met.\n- **Continue**: Used to skip the current iteration of the loop and move on to the next iteration.\n\n**Example:**\n```bash\nfor i in {1..10}; do\n  if [[ $i -eq 5 ]]; then\n    break  # Stops the loop when i equals 5\n  fi\n  echo $i\ndone\n```\n\n## 4. **Custom Exit Codes**\n\nIn shell scripts, you can use custom exit codes to signal the success or failure of commands. For instance, if an AWS command runs successfully, but you encounter a regional endpoint issue, you can check the exit status to determine what happened.\n\n**Example:**\n```bash\naws ec2 describe-vpcs --region us-east-1\nif [[ $? 
-ne 0 ]]; then\n  echo \"Incorrect region, exiting\"\n  exit 1\nelse\n  echo \"Correct region\"\nfi\n```\n\n## 5. **Arrays in Shell Scripts**\n\nArrays are a useful way to handle multiple values in a shell script. You can manipulate strings or data using array operations.\n\n**Example:**\n```bash\nNAME='SaikiranPinapathruni'\necho ${#NAME}  # Outputs the length of the string\n\nfor i in {0..${#NAME}}; do\n  echo ${NAME:$i:1}  # Prints one character at a time\ndone\n```\n\n---\n\n## 6. **Practical Scenarios:**\n\n1. **Regex for Phone Numbers**:\n   - Extract phone numbers starting with a specific pattern like `1-234`.\n   \n   Example regex: `\\d-[234]\\d\\d-\\d\\d\\d-\\d\\d\\d\\d`\n\n2. **Shell Script for User Creation**:\n   - Create two users: one with lowercase letters and one with uppercase letters.\n   \n3. **Exit Code Handling**:\n   - Check whether a command executed successfully and handle errors gracefully based on the exit code.\n\n---\n\n## Conclusion\n\nThis repository provides a detailed guide on how to use regular expressions, break/continue, arrays, and exit codes in shell scripts. These concepts are essential for automating tasks and creating efficient shell scripts that handle various scenarios gracefully.\n\n---\n\n\n"
  },
  {
    "path": "Day 05 RegEx-Break-Continue-CustomExitCodes/break.sh",
    "content": "aws_regions=(us-east-1 us-east-2 hyd-india-1 eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2)\n\necho \"Running the function to list VPCs using the regions list\"\n\nfor region in \"${aws_regions[@]}\"; do\n    echo \"Getting VPCs in $region .. \"\n    vpc_list=$(aws ec2 describe-vpcs --region \"$region\" | jq -r .Vpcs[].VpcId)\n    vpc_arr=(${vpc_list[@]})\n\n    if [ ${#vpc_arr[@]} -gt 0 ]; then\n        for vpc in \"${vpc_list[@]}\"; do\n            echo \"The VPC-ID is: $vpc\"\n        done\n        echo \"##########\"\n    else\n        echo \"Invalid Region..!!\"\n        echo \"#######\"\n        echo \"# Breaking at $region #\"\n        echo \"################\"\n        break\n    fi\ndone\n"
  },
  {
    "path": "Day 05 RegEx-Break-Continue-CustomExitCodes/continue.sh",
    "content": "# CONTINUE\n\n#!/bin/bash\naws_regions=(us-east-1 us-east-2 hyd-india-1 eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2)\n\necho \"Running the function to list VPCs using the regions list\"\n\nfor region in \"${aws_regions[@]}\"; do\n    echo \"Getting VPCs in $region .. \"\n    vpc_list=$(aws ec2 describe-vpcs --region \"$region\" | jq -r .Vpcs[].VpcId)\n    vpc_arr=(${vpc_list[@]})\n\n    if [ ${#vpc_arr[@]} -gt 0 ]; then\n        for vpc in \"${vpc_list[@]}\"; do\n            echo \"The VPC-ID is: $vpc\"\n        done\n        echo \"##########\"\n    else\n        echo \"Invalid Region..!!\"\n        echo \"#######\"\n        echo \"# Breaking at $region #\"\n        echo \"################\"\n        #break\n        #exit 99\n        continue\n    fi\ndone\n\n"
  },
  {
    "path": "Day 05 RegEx-Break-Continue-CustomExitCodes/exit-code.sh",
    "content": "######EXIT CODE############\n#!/bin/bash\naws_regions=(us-east-1 us-east-2 hyd-india-1 eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2)\n\necho \"Running the function to list VPCs using the regions list\"\n\nfor region in \"${aws_regions[@]}\"; do\n    echo \"Getting VPCs in $region .. \"\n    vpc_list=$(aws ec2 describe-vpcs --region \"$region\" | jq -r .Vpcs[].VpcId)\n    vpc_arr=(${vpc_list[@]})\n\n    if [ ${#vpc_arr[@]} -gt 0 ]; then\n        for vpc in \"${vpc_list[@]}\"; do\n            echo \"The VPC-ID is: $vpc\"\n        done\n        echo \"##########\"\n    else\n        echo \"Invalid Region..!!\"\n        echo \"#######\"\n        echo \"# Breaking at $region #\"\n        echo \"################\"\n        #break\n        exit 99\n    fi\ndone\n"
  },
  {
    "path": "Day 06 Functions/README.md",
    "content": "# Day 06: Functions and Scripts\n\n## Overview\n\nIn this session, we explore the concept of functions in shell scripting and how they can be beneficial in managing code effectively. While functions might not be heavily utilized in shell scripting, they become crucial when you transition to languages like Python.\n\n## What is a Function?\n\nA **function** is a block of code that can be called whenever needed. It allows for code reuse and better organization.\n\n### Example in Python\n\n```python\ndef addition(a, b):  # Passing two parameters: a and b\n    return a + b  # Returns the sum of a and b\n\n# Calling the function\nresult_a = addition(2, 3)\nresult_b = addition(4, 5)\nresult_c = addition(10, 20)\n\nprint(result_a + result_b + result_c)  # Outputs the sum of all results\n```\n\n### Importance of Functions\n\nFunctions will only execute when they are called. For instance, in Terraform, you might use functions like:\n\n```hcl\ncount = 3 \nelement\nlength\n```\n\n### Installing Docker\n\nTo install Docker, you would typically call a function from a script like this: [Docker Installation](https://get.docker.com).\n\n## Defining Functions in Shell Scripting\n\nIn shell scripting, you can define functions in two ways:\n\n1. **Using the `function` keyword:**\n\n   ```bash\n   function hello {\n       # code\n   }\n   ```\n\n2. **Using parentheses:**\n\n   ```bash\n   hello() {\n       # code\n   }\n   ```\n\n## Checking Installed Commands\n\nYou can check if a command is installed using:\n\n```bash\ncommand -v jq\necho $?  # Returns the exit status of the last command\n\ncommand -v aq\necho $?\n```\n\nIf the `command_exist` function wasn’t used, you would need to enter these commands multiple times in your script, making functions very useful for reducing redundancy.\n\n## Running the Delete Volume Scripts\n\n1. **Create three 1 GB EBS volumes.**\n2. 
To automate this task daily, we'll use **Cron Jobs**.\n\n### Understanding Cron Jobs\n\nTo set up a Cron job, you would:\n\n```bash\ncrontab -e  # Edit the crontab file\n# Add the following line (runs every minute; adjust timing as needed):\n* * * * * sudo bash /root/deleteebs.sh us-east-1\n```\n\nEnsure that your script is saved at `/root/deleteebs.sh`.\n\n## Scheduling Adjustments\n\nIf you want the task to run every 10 minutes, use:\n\n```\n*/10 * * * * sudo bash /root/deleteebs.sh us-east-1\n```\n\n## Nginx Server Installation and Test\n\n1. Install the Nginx server on your instance.\n2. Access it and serve a simple HTML game:\n\n   ```bash\n   nano /var/www/html/index.html  # Make your changes here\n   ```\n\n3. Set up uptime monitoring with StatusCake:\n\n   - Log in with Google.\n   - Create a new uptime test with the URL and desired parameters.\n\n## Calling Multiple Functions\n\nIn your script, you can call multiple functions. At the end of your script, you might have:\n\n```bash\nvpc \"$@\"  # Quoting $@ passes each region argument through intact\n```\n\n## Interview Question Example\n\n**Question:** On one system, how can I find files larger than 10 MB?\n\n**Answer:** You could list files and check their sizes with `du`, but the `find` command is more efficient:\n\n```bash\nfind / -size +10M 2>/dev/null\n```\n\nYou can also combine size tests to search within a range, for example files between 50 MB and 60 MB:\n\n```bash\nfind / -size +50M -size -60M 2>/dev/null\n```\n\n### Explanation:\n\n- `/`: The starting directory for the search (root).\n- `-size +50M`: Finds files larger than 50 MB.\n- `-size -60M`: Finds files smaller than 60 MB.\n- `2>/dev/null`: Redirects error messages (e.g., permission denied) to `/dev/null`.\n\n## Log Rotation Script\n\nLog rotation helps manage log files by preventing them from growing indefinitely. When a log file reaches a certain size, the rotation script runs to keep things organized.\n\n---\n"
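The `command_exists` helper referenced in this README is a small wrapper around `command -v` (the same pattern the Docker convenience script uses); a minimal sketch:

```shell
#!/bin/bash
# Returns 0 if the named command is on PATH, non-zero otherwise.
command_exists() {
    command -v "$1" >/dev/null 2>&1
}

# Guard a step once instead of repeating raw `command -v` checks everywhere.
if command_exists jq; then
    echo "jq is installed"
else
    echo "jq is missing - install it before running the VPC scripts"
fi
```

Redirecting both stdout and stderr to `/dev/null` keeps the check silent, so only the function's exit status matters to callers.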
  },
  {
    "path": "Day 06 Functions/docker.sh",
    "content": "#!/bin/sh\nset -e\n# Docker Engine for Linux installation script.\n#\n# This script is intended as a convenient way to configure docker's package\n# repositories and to install Docker Engine, This script is not recommended\n# for production environments. Before running this script, make yourself familiar\n# with potential risks and limitations, and refer to the installation manual\n# at https://docs.docker.com/engine/install/ for alternative installation methods.\n#\n# The script:\n#\n# - Requires `root` or `sudo` privileges to run.\n# - Attempts to detect your Linux distribution and version and configure your\n#   package management system for you.\n# - Doesn't allow you to customize most installation parameters.\n# - Installs dependencies and recommendations without asking for confirmation.\n# - Installs the latest stable release (by default) of Docker CLI, Docker Engine,\n#   Docker Buildx, Docker Compose, containerd, and runc. When using this script\n#   to provision a machine, this may result in unexpected major version upgrades\n#   of these packages. Always test upgrades in a test environment before\n#   deploying to your production systems.\n# - Isn't designed to upgrade an existing Docker installation. When using the\n#   script to update an existing installation, dependencies may not be updated\n#   to the expected version, resulting in outdated versions.\n#\n# Source code is available at https://github.com/docker/docker-install/\n#\n# Usage\n# ==============================================================================\n#\n# To install the latest stable versions of Docker CLI, Docker Engine, and their\n# dependencies:\n#\n# 1. download the script\n#\n#   $ curl -fsSL https://get.docker.com -o install-docker.sh\n#\n# 2. verify the script's content\n#\n#   $ cat install-docker.sh\n#\n# 3. run the script with --dry-run to verify the steps it executes\n#\n#   $ sh install-docker.sh --dry-run\n#\n# 4. 
run the script either as root, or using sudo to perform the installation.\n#\n#   $ sudo sh install-docker.sh\n#\n# Command-line options\n# ==============================================================================\n#\n# --version <VERSION>\n# Use the --version option to install a specific version, for example:\n#\n#   $ sudo sh install-docker.sh --version 23.0\n#\n# --channel <stable|test>\n#\n# Use the --channel option to install from an alternative installation channel.\n# The following example installs the latest versions from the \"test\" channel,\n# which includes pre-releases (alpha, beta, rc):\n#\n#   $ sudo sh install-docker.sh --channel test\n#\n# Alternatively, use the script at https://test.docker.com, which uses the test\n# channel as default.\n#\n# --mirror <Aliyun|AzureChinaCloud>\n#\n# Use the --mirror option to install from a mirror supported by this script.\n# Available mirrors are \"Aliyun\" (https://mirrors.aliyun.com/docker-ce), and\n# \"AzureChinaCloud\" (https://mirror.azure.cn/docker-ce), for example:\n#\n#   $ sudo sh install-docker.sh --mirror AzureChinaCloud\n#\n# ==============================================================================\n\n# Git commit from https://github.com/docker/docker-install when\n# the script was uploaded (Should only be modified by upload job):\nSCRIPT_COMMIT_SHA=\"39040d838e8bcc48c23a0cc4117475dd15189976\"\n\n# strip \"v\" prefix if present\nVERSION=\"${VERSION#v}\"\n\n# The channel to install from:\n#   * stable\n#   * test\nDEFAULT_CHANNEL_VALUE=\"stable\"\nif [ -z \"$CHANNEL\" ]; then\n    CHANNEL=$DEFAULT_CHANNEL_VALUE\nfi\n\nDEFAULT_DOWNLOAD_URL=\"https://download.docker.com\"\nif [ -z \"$DOWNLOAD_URL\" ]; then\n    DOWNLOAD_URL=$DEFAULT_DOWNLOAD_URL\nfi\n\nDEFAULT_REPO_FILE=\"docker-ce.repo\"\nif [ -z \"$REPO_FILE\" ]; then\n    REPO_FILE=\"$DEFAULT_REPO_FILE\"\nfi\n\nmirror=''\nDRY_RUN=${DRY_RUN:-}\nwhile [ $# -gt 0 ]; do\n    case \"$1\" in\n    --channel)\n        CHANNEL=\"$2\"\n        shift\n 
       ;;\n    --dry-run)\n        DRY_RUN=1\n        ;;\n    --mirror)\n        mirror=\"$2\"\n        shift\n        ;;\n    --version)\n        VERSION=\"${2#v}\"\n        shift\n        ;;\n    --*)\n        echo \"Illegal option $1\"\n        ;;\n    esac\n    shift $(($# > 0 ? 1 : 0))\ndone\n\ncase \"$mirror\" in\nAliyun)\n    DOWNLOAD_URL=\"https://mirrors.aliyun.com/docker-ce\"\n    ;;\nAzureChinaCloud)\n    DOWNLOAD_URL=\"https://mirror.azure.cn/docker-ce\"\n    ;;\n\"\") ;;\n*)\n    echo >&2 \"unknown mirror '$mirror': use either 'Aliyun', or 'AzureChinaCloud'.\"\n    exit 1\n    ;;\nesac\n\ncase \"$CHANNEL\" in\nstable | test) ;;\n*)\n    echo >&2 \"unknown CHANNEL '$CHANNEL': use either stable or test.\"\n    exit 1\n    ;;\nesac\n\ncommand_exists() {\n    command -v \"$@\" >/dev/null 2>&1\n}\n\n# version_gte checks if the version specified in $VERSION is at least the given\n# SemVer (Maj.Minor[.Patch]), or CalVer (YY.MM) version.It returns 0 (success)\n# if $VERSION is either unset (=latest) or newer or equal than the specified\n# version, or returns 1 (fail) otherwise.\n#\n# examples:\n#\n# VERSION=23.0\n# version_gte 23.0  // 0 (success)\n# version_gte 20.10 // 0 (success)\n# version_gte 19.03 // 0 (success)\n# version_gte 26.1  // 1 (fail)\nversion_gte() {\n    if [ -z \"$VERSION\" ]; then\n        return 0\n    fi\n    version_compare \"$VERSION\" \"$1\"\n}\n\n# version_compare compares two version strings (either SemVer (Major.Minor.Path),\n# or CalVer (YY.MM) version strings. It returns 0 (success) if version A is newer\n# or equal than version B, or 1 (fail) otherwise. 
Patch releases and pre-release\n# (-alpha/-beta) are not taken into account\n#\n# examples:\n#\n# version_compare 23.0.0 20.10 // 0 (success)\n# version_compare 23.0 20.10   // 0 (success)\n# version_compare 20.10 19.03  // 0 (success)\n# version_compare 20.10 20.10  // 0 (success)\n# version_compare 19.03 20.10  // 1 (fail)\nversion_compare() (\n    set +x\n\n    yy_a=\"$(echo \"$1\" | cut -d'.' -f1)\"\n    yy_b=\"$(echo \"$2\" | cut -d'.' -f1)\"\n    if [ \"$yy_a\" -lt \"$yy_b\" ]; then\n        return 1\n    fi\n    if [ \"$yy_a\" -gt \"$yy_b\" ]; then\n        return 0\n    fi\n    mm_a=\"$(echo \"$1\" | cut -d'.' -f2)\"\n    mm_b=\"$(echo \"$2\" | cut -d'.' -f2)\"\n\n    # trim leading zeros to accommodate CalVer\n    mm_a=\"${mm_a#0}\"\n    mm_b=\"${mm_b#0}\"\n\n    if [ \"${mm_a:-0}\" -lt \"${mm_b:-0}\" ]; then\n        return 1\n    fi\n\n    return 0\n)\n\nis_dry_run() {\n    if [ -z \"$DRY_RUN\" ]; then\n        return 1\n    else\n        return 0\n    fi\n}\n\nis_wsl() {\n    case \"$(uname -r)\" in\n    *microsoft*) true ;; # WSL 2\n    *Microsoft*) true ;; # WSL 1\n    *) false ;;\n    esac\n}\n\nis_darwin() {\n    case \"$(uname -s)\" in\n    *darwin*) true ;;\n    *Darwin*) true ;;\n    *) false ;;\n    esac\n}\n\ndeprecation_notice() {\n    distro=$1\n    distro_version=$2\n    echo\n    printf \"\\033[91;1mDEPRECATION WARNING\\033[0m\\n\"\n    printf \"    This Linux distribution (\\033[1m%s %s\\033[0m) reached end-of-life and is no longer supported by this script.\\n\" \"$distro\" \"$distro_version\"\n    echo \"    No updates or security fixes will be released for this distribution, and users are recommended\"\n    echo \"    to upgrade to a currently maintained version of $distro.\"\n    echo\n    printf \"Press \\033[1mCtrl+C\\033[0m now to abort this script, or wait for the installation to continue.\"\n    echo\n    sleep 10\n}\n\nget_distribution() {\n    lsb_dist=\"\"\n    # Every system that we officially support has /etc/os-release\n    
if [ -r /etc/os-release ]; then\n        lsb_dist=\"$(. /etc/os-release && echo \"$ID\")\"\n    fi\n    # Returning an empty string here should be alright since the\n    # case statements don't act unless you provide an actual value\n    echo \"$lsb_dist\"\n}\n\necho_docker_as_nonroot() {\n    if is_dry_run; then\n        return\n    fi\n    if command_exists docker && [ -e /var/run/docker.sock ]; then\n        (\n            set -x\n            $sh_c 'docker version'\n        ) || true\n    fi\n\n    # intentionally mixed spaces and tabs here -- tabs are stripped by \"<<-EOF\", spaces are kept in the output\n    echo\n    echo \"================================================================================\"\n    echo\n    if version_gte \"20.10\"; then\n        echo \"To run Docker as a non-privileged user, consider setting up the\"\n        echo \"Docker daemon in rootless mode for your user:\"\n        echo\n        echo \"    dockerd-rootless-setuptool.sh install\"\n        echo\n        echo \"Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.\"\n        echo\n    fi\n    echo\n    echo \"To run the Docker daemon as a fully privileged service, but granting non-root\"\n    echo \"users access, refer to https://docs.docker.com/go/daemon-access/\"\n    echo\n    echo \"WARNING: Access to the remote API on a privileged Docker daemon is equivalent\"\n    echo \"         to root access on the host. 
Refer to the 'Docker daemon attack surface'\"\n    echo \"         documentation for details: https://docs.docker.com/go/attack-surface/\"\n    echo\n    echo \"================================================================================\"\n    echo\n}\n\n# Check if this is a forked Linux distro\ncheck_forked() {\n\n    # Check for lsb_release command existence, it usually exists in forked distros\n    if command_exists lsb_release; then\n        # Check if the `-u` option is supported\n        set +e\n        lsb_release -a -u >/dev/null 2>&1\n        lsb_release_exit_code=$?\n        set -e\n\n        # Check if the command has exited successfully, it means we're in a forked distro\n        if [ \"$lsb_release_exit_code\" = \"0\" ]; then\n            # Print info about current distro\n            cat <<-EOF\n\t\t\tYou're using '$lsb_dist' version '$dist_version'.\n\t\t\tEOF\n\n            # Get the upstream release info\n            lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[:space:]')\n            dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[:space:]')\n\n            # Print info about upstream distro\n            cat <<-EOF\n\t\t\tUpstream release is '$lsb_dist' version '$dist_version'.\n\t\t\tEOF\n        else\n            if [ -r /etc/debian_version ] && [ \"$lsb_dist\" != \"ubuntu\" ] && [ \"$lsb_dist\" != \"raspbian\" ]; then\n                if [ \"$lsb_dist\" = \"osmc\" ]; then\n                    # OSMC runs Raspbian\n                    lsb_dist=raspbian\n                else\n                    # We're Debian and don't even know it!\n                    lsb_dist=debian\n                fi\n                dist_version=\"$(sed 's/\\/.*//' /etc/debian_version | sed 's/\\..*//')\"\n                case \"$dist_version\" in\n                12)\n                    dist_version=\"bookworm\"\n                    ;;\n    
            11)\n                    dist_version=\"bullseye\"\n                    ;;\n                10)\n                    dist_version=\"buster\"\n                    ;;\n                9)\n                    dist_version=\"stretch\"\n                    ;;\n                8)\n                    dist_version=\"jessie\"\n                    ;;\n                esac\n            fi\n        fi\n    fi\n}\n\ndo_install() {\n    echo \"# Executing docker install script, commit: $SCRIPT_COMMIT_SHA\"\n\n    if command_exists docker; then\n        cat >&2 <<-'EOF'\n\t\t\tWarning: the \"docker\" command appears to already exist on this system.\n\n\t\t\tIf you already have Docker installed, this script can cause trouble, which is\n\t\t\twhy we're displaying this warning and provide the opportunity to cancel the\n\t\t\tinstallation.\n\n\t\t\tIf you installed the current Docker package using this script and are using it\n\t\t\tagain to update Docker, you can safely ignore this message.\n\n\t\t\tYou may press Ctrl+C now to abort this script.\n\t\tEOF\n        (\n            set -x\n            sleep 20\n        )\n    fi\n\n    user=\"$(id -un 2>/dev/null || true)\"\n\n    sh_c='sh -c'\n    if [ \"$user\" != 'root' ]; then\n        if command_exists sudo; then\n            sh_c='sudo -E sh -c'\n        elif command_exists su; then\n            sh_c='su -c'\n        else\n            cat >&2 <<-'EOF'\n\t\t\tError: this installer needs the ability to run commands as root.\n\t\t\tWe are unable to find either \"sudo\" or \"su\" available to make this happen.\n\t\t\tEOF\n            exit 1\n        fi\n    fi\n\n    if is_dry_run; then\n        sh_c=\"echo\"\n    fi\n\n    # perform some very rudimentary platform detection\n    lsb_dist=$(get_distribution)\n    lsb_dist=\"$(echo \"$lsb_dist\" | tr '[:upper:]' '[:lower:]')\"\n\n    if is_wsl; then\n        echo\n        echo \"WSL DETECTED: We recommend using Docker Desktop for Windows.\"\n        echo \"Please get Docker 
Desktop from https://www.docker.com/products/docker-desktop/\"\n        echo\n        cat >&2 <<-'EOF'\n\n\t\t\tYou may press Ctrl+C now to abort this script.\n\t\tEOF\n        (\n            set -x\n            sleep 20\n        )\n    fi\n\n    case \"$lsb_dist\" in\n\n    ubuntu)\n        if command_exists lsb_release; then\n            dist_version=\"$(lsb_release --codename | cut -f2)\"\n        fi\n        if [ -z \"$dist_version\" ] && [ -r /etc/lsb-release ]; then\n            dist_version=\"$(. /etc/lsb-release && echo \"$DISTRIB_CODENAME\")\"\n        fi\n        ;;\n\n    debian | raspbian)\n        dist_version=\"$(sed 's/\\/.*//' /etc/debian_version | sed 's/\\..*//')\"\n        case \"$dist_version\" in\n        12)\n            dist_version=\"bookworm\"\n            ;;\n        11)\n            dist_version=\"bullseye\"\n            ;;\n        10)\n            dist_version=\"buster\"\n            ;;\n        9)\n            dist_version=\"stretch\"\n            ;;\n        8)\n            dist_version=\"jessie\"\n            ;;\n        esac\n        ;;\n\n    centos | rhel)\n        if [ -z \"$dist_version\" ] && [ -r /etc/os-release ]; then\n            dist_version=\"$(. /etc/os-release && echo \"$VERSION_ID\")\"\n        fi\n        ;;\n\n    *)\n        if command_exists lsb_release; then\n            dist_version=\"$(lsb_release --release | cut -f2)\"\n        fi\n        if [ -z \"$dist_version\" ] && [ -r /etc/os-release ]; then\n            dist_version=\"$(. 
/etc/os-release && echo \"$VERSION_ID\")\"\n        fi\n        ;;\n\n    esac\n\n    # Check if this is a forked Linux distro\n    check_forked\n\n    # Print deprecation warnings for distro versions that recently reached EOL,\n    # but may still be commonly used (especially LTS versions).\n    case \"$lsb_dist.$dist_version\" in\n    centos.8 | centos.7 | rhel.7)\n        deprecation_notice \"$lsb_dist\" \"$dist_version\"\n        ;;\n    debian.buster | debian.stretch | debian.jessie)\n        deprecation_notice \"$lsb_dist\" \"$dist_version\"\n        ;;\n    raspbian.buster | raspbian.stretch | raspbian.jessie)\n        deprecation_notice \"$lsb_dist\" \"$dist_version\"\n        ;;\n    ubuntu.bionic | ubuntu.xenial | ubuntu.trusty)\n        deprecation_notice \"$lsb_dist\" \"$dist_version\"\n        ;;\n    ubuntu.mantic | ubuntu.lunar | ubuntu.kinetic | ubuntu.impish | ubuntu.hirsute | ubuntu.groovy | ubuntu.eoan | ubuntu.disco | ubuntu.cosmic)\n        deprecation_notice \"$lsb_dist\" \"$dist_version\"\n        ;;\n    fedora.*)\n        if [ \"$dist_version\" -lt 39 ]; then\n            deprecation_notice \"$lsb_dist\" \"$dist_version\"\n        fi\n        ;;\n    esac\n\n    # Run setup for each distro accordingly\n    case \"$lsb_dist\" in\n    ubuntu | debian | raspbian)\n        pre_reqs=\"ca-certificates curl\"\n        apt_repo=\"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] $DOWNLOAD_URL/linux/$lsb_dist $dist_version $CHANNEL\"\n        (\n            if ! 
is_dry_run; then\n                set -x\n            fi\n            $sh_c 'apt-get -qq update >/dev/null'\n            $sh_c \"DEBIAN_FRONTEND=noninteractive apt-get -y -qq install $pre_reqs >/dev/null\"\n            $sh_c 'install -m 0755 -d /etc/apt/keyrings'\n            $sh_c \"curl -fsSL \\\"$DOWNLOAD_URL/linux/$lsb_dist/gpg\\\" -o /etc/apt/keyrings/docker.asc\"\n            $sh_c \"chmod a+r /etc/apt/keyrings/docker.asc\"\n            $sh_c \"echo \\\"$apt_repo\\\" > /etc/apt/sources.list.d/docker.list\"\n            $sh_c 'apt-get -qq update >/dev/null'\n        )\n        pkg_version=\"\"\n        if [ -n \"$VERSION\" ]; then\n            if is_dry_run; then\n                echo \"# WARNING: VERSION pinning is not supported in DRY_RUN\"\n            else\n                # Will work for incomplete versions IE (17.12), but may not actually grab the \"latest\" if in the test channel\n                pkg_pattern=\"$(echo \"$VERSION\" | sed 's/-ce-/~ce~.*/g' | sed 's/-/.*/g')\"\n                search_command=\"apt-cache madison docker-ce | grep '$pkg_pattern' | head -1 | awk '{\\$1=\\$1};1' | cut -d' ' -f 3\"\n                pkg_version=\"$($sh_c \"$search_command\")\"\n                echo \"INFO: Searching repository for VERSION '$VERSION'\"\n                echo \"INFO: $search_command\"\n                if [ -z \"$pkg_version\" ]; then\n                    echo\n                    echo \"ERROR: '$VERSION' not found amongst apt-cache madison results\"\n                    echo\n                    exit 1\n                fi\n                if version_gte \"18.09\"; then\n                    search_command=\"apt-cache madison docker-ce-cli | grep '$pkg_pattern' | head -1 | awk '{\\$1=\\$1};1' | cut -d' ' -f 3\"\n                    echo \"INFO: $search_command\"\n                    cli_pkg_version=\"=$($sh_c \"$search_command\")\"\n                fi\n                pkg_version=\"=$pkg_version\"\n            fi\n        fi\n        (\n            
pkgs=\"docker-ce${pkg_version%=}\"\n            if version_gte \"18.09\"; then\n                # older versions didn't ship the cli and containerd as separate packages\n                pkgs=\"$pkgs docker-ce-cli${cli_pkg_version%=} containerd.io\"\n            fi\n            if version_gte \"20.10\"; then\n                pkgs=\"$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version\"\n            fi\n            if version_gte \"23.0\"; then\n                pkgs=\"$pkgs docker-buildx-plugin\"\n            fi\n            if ! is_dry_run; then\n                set -x\n            fi\n            $sh_c \"DEBIAN_FRONTEND=noninteractive apt-get -y -qq install $pkgs >/dev/null\"\n        )\n        echo_docker_as_nonroot\n        exit 0\n        ;;\n    centos | fedora | rhel)\n        repo_file_url=\"$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE\"\n        (\n            if ! is_dry_run; then\n                set -x\n            fi\n            if command_exists dnf5; then\n                # $sh_c \"dnf -y -q --setopt=install_weak_deps=False install dnf-plugins-core\"\n                # $sh_c\t\"dnf5 config-manager addrepo --save-filename=docker-ce.repo --from-repofile='$repo_file_url'\"\n\n                $sh_c \"dnf -y -q --setopt=install_weak_deps=False install curl dnf-plugins-core\"\n                # FIXME(thaJeztah); strip empty lines as workaround for https://github.com/rpm-software-management/dnf5/issues/1603\n                TMP_REPO_FILE=\"$(mktemp --dry-run)\"\n                $sh_c \"curl -fsSL '$repo_file_url' | tr -s '\\n' > '${TMP_REPO_FILE}'\"\n                $sh_c \"dnf5 config-manager addrepo --save-filename=docker-ce.repo --overwrite --from-repofile='${TMP_REPO_FILE}'\"\n                $sh_c \"rm -f '${TMP_REPO_FILE}'\"\n\n                if [ \"$CHANNEL\" != \"stable\" ]; then\n                    $sh_c \"dnf5 config-manager setopt \\\"docker-ce-*.enabled=0\\\"\"\n                    $sh_c \"dnf5 config-manager setopt 
\\\"docker-ce-$CHANNEL.enabled=1\\\"\"\n                fi\n                $sh_c \"dnf makecache\"\n            elif command_exists dnf; then\n                $sh_c \"dnf -y -q --setopt=install_weak_deps=False install dnf-plugins-core\"\n                $sh_c \"dnf config-manager --add-repo $repo_file_url\"\n\n                if [ \"$CHANNEL\" != \"stable\" ]; then\n                    $sh_c \"dnf config-manager --set-disabled \\\"docker-ce-*\\\"\"\n                    $sh_c \"dnf config-manager --set-enabled \\\"docker-ce-$CHANNEL\\\"\"\n                fi\n                $sh_c \"dnf makecache\"\n            else\n                $sh_c \"yum -y -q install yum-utils\"\n                $sh_c \"yum config-manager --add-repo $repo_file_url\"\n\n                if [ \"$CHANNEL\" != \"stable\" ]; then\n                    $sh_c \"yum config-manager --disable \\\"docker-ce-*\\\"\"\n                    $sh_c \"yum config-manager --enable \\\"docker-ce-$CHANNEL\\\"\"\n                fi\n                $sh_c \"yum makecache\"\n            fi\n        )\n        pkg_version=\"\"\n        if command_exists dnf; then\n            pkg_manager=\"dnf\"\n            pkg_manager_flags=\"-y -q --best\"\n        else\n            pkg_manager=\"yum\"\n            pkg_manager_flags=\"-y -q\"\n        fi\n        if [ -n \"$VERSION\" ]; then\n            if is_dry_run; then\n                echo \"# WARNING: VERSION pinning is not supported in DRY_RUN\"\n            else\n                if [ \"$lsb_dist\" = \"fedora\" ]; then\n                    pkg_suffix=\"fc$dist_version\"\n                else\n                    pkg_suffix=\"el\"\n                fi\n                pkg_pattern=\"$(echo \"$VERSION\" | sed 's/-ce-/\\\\\\\\.ce.*/g' | sed 's/-/.*/g').*$pkg_suffix\"\n                search_command=\"$pkg_manager list --showduplicates docker-ce | grep '$pkg_pattern' | tail -1 | awk '{print \\$2}'\"\n                pkg_version=\"$($sh_c \"$search_command\")\"\n                
echo \"INFO: Searching repository for VERSION '$VERSION'\"\n                echo \"INFO: $search_command\"\n                if [ -z \"$pkg_version\" ]; then\n                    echo\n                    echo \"ERROR: '$VERSION' not found amongst $pkg_manager list results\"\n                    echo\n                    exit 1\n                fi\n                if version_gte \"18.09\"; then\n                    # older versions don't support a cli package\n                    search_command=\"$pkg_manager list --showduplicates docker-ce-cli | grep '$pkg_pattern' | tail -1 | awk '{print \\$2}'\"\n                    cli_pkg_version=\"$($sh_c \"$search_command\" | cut -d':' -f 2)\"\n                fi\n                # Cut out the epoch and prefix with a '-'\n                pkg_version=\"-$(echo \"$pkg_version\" | cut -d':' -f 2)\"\n            fi\n        fi\n        (\n            pkgs=\"docker-ce$pkg_version\"\n            if version_gte \"18.09\"; then\n                # older versions didn't ship the cli and containerd as separate packages\n                if [ -n \"$cli_pkg_version\" ]; then\n                    pkgs=\"$pkgs docker-ce-cli-$cli_pkg_version containerd.io\"\n                else\n                    pkgs=\"$pkgs docker-ce-cli containerd.io\"\n                fi\n            fi\n            if version_gte \"20.10\"; then\n                pkgs=\"$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version\"\n            fi\n            if version_gte \"23.0\"; then\n                pkgs=\"$pkgs docker-buildx-plugin\"\n            fi\n            if ! 
is_dry_run; then\n                set -x\n            fi\n            $sh_c \"$pkg_manager $pkg_manager_flags install $pkgs\"\n        )\n        echo_docker_as_nonroot\n        exit 0\n        ;;\n    sles)\n        if [ \"$(uname -m)\" != \"s390x\" ]; then\n            echo \"Packages for SLES are currently only available for s390x\"\n            exit 1\n        fi\n        repo_file_url=\"$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE\"\n        pre_reqs=\"ca-certificates curl libseccomp2 awk\"\n        (\n            if ! is_dry_run; then\n                set -x\n            fi\n            $sh_c \"zypper install -y $pre_reqs\"\n            $sh_c \"zypper addrepo $repo_file_url\"\n            if ! is_dry_run; then\n                cat >&2 <<-'EOF'\n\t\t\t\t\t\tWARNING!!\n\t\t\t\t\t\topenSUSE repository (https://download.opensuse.org/repositories/security:/SELinux) will be enabled now.\n\t\t\t\t\t\tDo you wish to continue?\n\t\t\t\t\t\tYou may press Ctrl+C now to abort this script.\n\t\t\t\t\t\tEOF\n                (\n                    set -x\n                    sleep 30\n                )\n            fi\n            opensuse_repo=\"https://download.opensuse.org/repositories/security:/SELinux/openSUSE_Factory/security:SELinux.repo\"\n            $sh_c \"zypper addrepo $opensuse_repo\"\n            $sh_c \"zypper --gpg-auto-import-keys refresh\"\n            $sh_c \"zypper lr -d\"\n        )\n        pkg_version=\"\"\n        if [ -n \"$VERSION\" ]; then\n            if is_dry_run; then\n                echo \"# WARNING: VERSION pinning is not supported in DRY_RUN\"\n            else\n                pkg_pattern=\"$(echo \"$VERSION\" | sed 's/-ce-/\\\\\\\\.ce.*/g' | sed 's/-/.*/g')\"\n                search_command=\"zypper search -s --match-exact 'docker-ce' | grep '$pkg_pattern' | tail -1 | awk '{print \\$6}'\"\n                pkg_version=\"$($sh_c \"$search_command\")\"\n                echo \"INFO: Searching repository for VERSION '$VERSION'\"\n                
echo \"INFO: $search_command\"\n                if [ -z \"$pkg_version\" ]; then\n                    echo\n                    echo \"ERROR: '$VERSION' not found amongst zypper list results\"\n                    echo\n                    exit 1\n                fi\n                search_command=\"zypper search -s --match-exact 'docker-ce-cli' | grep '$pkg_pattern' | tail -1 | awk '{print \\$6}'\"\n                # It's okay for cli_pkg_version to be blank, since older versions don't support a cli package\n                cli_pkg_version=\"$($sh_c \"$search_command\")\"\n                pkg_version=\"-$pkg_version\"\n            fi\n        fi\n        (\n            pkgs=\"docker-ce$pkg_version\"\n            if version_gte \"18.09\"; then\n                if [ -n \"$cli_pkg_version\" ]; then\n                    # older versions didn't ship the cli and containerd as separate packages\n                    pkgs=\"$pkgs docker-ce-cli-$cli_pkg_version containerd.io\"\n                else\n                    pkgs=\"$pkgs docker-ce-cli containerd.io\"\n                fi\n            fi\n            if version_gte \"20.10\"; then\n                pkgs=\"$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version\"\n            fi\n            if version_gte \"23.0\"; then\n                pkgs=\"$pkgs docker-buildx-plugin\"\n            fi\n            if ! 
is_dry_run; then\n                set -x\n            fi\n            $sh_c \"zypper -q install -y $pkgs\"\n        )\n        echo_docker_as_nonroot\n        exit 0\n        ;;\n    *)\n        if [ -z \"$lsb_dist\" ]; then\n            if is_darwin; then\n                echo\n                echo \"ERROR: Unsupported operating system 'macOS'\"\n                echo \"Please get Docker Desktop from https://www.docker.com/products/docker-desktop\"\n                echo\n                exit 1\n            fi\n        fi\n        echo\n        echo \"ERROR: Unsupported distribution '$lsb_dist'\"\n        echo\n        exit 1\n        ;;\n    esac\n    exit 1\n}\n\n# wrapped up in a function so that we have some protection against only getting\n# half the file during \"curl | sh\"\ndo_install\n"
  },
  {
    "path": "Day 06 Functions/ebs.sh",
    "content": "#!/bin/bash\n\n# Region may be passed as the first argument (defaults to us-east-1),\n# matching the cron entry: deleteebs.sh us-east-1\nREGION=\"${1:-us-east-1}\"\n\ndelete_vols() {\n    # Fetch all volume IDs in the region\n    vols=$(aws ec2 describe-volumes --region \"$REGION\" | jq -r \".Volumes[].VolumeId\")\n\n    for vol in $vols; do\n        # Fetch volume details\n        volume_info=$(aws ec2 describe-volumes --volume-ids \"$vol\" --region \"$REGION\")\n        size=$(echo \"$volume_info\" | jq \".Volumes[].Size\")\n        state=$(echo \"$volume_info\" | jq -r \".Volumes[].State\")\n\n        # Skip attached volumes and anything larger than 5 GB\n        if [ \"$state\" == \"in-use\" ]; then\n            echo \"$vol is attached to an instance. Skipping deletion.\"\n        elif [ \"$size\" -gt 5 ]; then\n            echo \"$vol is larger than 5GB. Skipping deletion.\"\n        else\n            echo \"Deleting Volume $vol\"\n            aws ec2 delete-volume --volume-id \"$vol\" --region \"$REGION\"\n        fi\n    done\n}\n\n# Call the function\ndelete_vols\n"
  },
  {
    "path": "Day 06 Functions/log-rotation.sh",
    "content": "#!/bin/bash\n\n# Configuration\nLOG_FILE=\"/var/log/syslog\"          # Path to your log file\nMAX_SIZE=100000000                  # Maximum size in bytes (100 MB)\nBACKUP_DIR=\"/var/log/myapp/backups\" # Directory to store rotated logs\nTIMESTAMP=$(date +\"%Y%m%d_%H%M%S\")  # Timestamp for backup filename\n\n# Create backup directory if it doesn't exist\nmkdir -p \"$BACKUP_DIR\"\n\n# Function to rotate log files\nrotate_logs() {\n    if [ -f \"$LOG_FILE\" ]; then\n        echo \"Rotating log file: $LOG_FILE\"\n        mv \"$LOG_FILE\" \"$BACKUP_DIR/myapp_$TIMESTAMP.log\" # Rename the log file with a timestamp\n        touch \"$LOG_FILE\"                                 # Create a new empty log file\n        echo \"Log file rotated and stored as $BACKUP_DIR/myapp_$TIMESTAMP.log\"\n    else\n        echo \"Log file $LOG_FILE does not exist.\"\n    fi\n}\n\n# Check if the log file size exceeds the maximum size\nif [ -f \"$LOG_FILE\" ]; then\n    FILE_SIZE=$(stat -c%s \"$LOG_FILE\") # Get the size of the log file in bytes\n    if [ \"$FILE_SIZE\" -gt \"$MAX_SIZE\" ]; then\n        rotate_logs\n    else\n        echo \"Log file size is under control: ${FILE_SIZE} bytes\"\n    fi\nelse\n    echo \"Log file does not exist. No action taken.\"\nfi\n"
  },
  {
    "path": "Day 06 Functions/multi-function.sh",
    "content": "#!/bin/bash\n# subnets and sg read the VPC and REGION variables set by the loops in vpcs().\nfunction subnets {\n    echo \"************************************************************\"\n    echo \"**Getting SUBNETS Info VPC $VPC in region $REGION**\"\n    echo \"************************************************************\"\n    aws ec2 describe-subnets --filters \"Name=vpc-id,Values=$VPC\" --region \"$REGION\" | jq \".Subnets[].SubnetId\"\n    echo \"---------------------------------------------\"\n}\n\nfunction sg {\n    echo \"********************************************************************\"\n    echo \"**Getting Security Group Info VPC $VPC in region $REGION**\"\n    echo \"********************************************************************\"\n    aws ec2 describe-security-groups --filters \"Name=vpc-id,Values=$VPC\" --region \"$REGION\" | jq \".SecurityGroups[].GroupName\"\n    echo \"---------------------------------------------\"\n}\n\nvpcs() {\n    for REGION in \"$@\"; do\n        echo \"Getting VPC List For Region $REGION...\"\n        vpcs=$(aws ec2 describe-vpcs --region \"$REGION\" | jq -r \".Vpcs[].VpcId\")\n        echo \"$vpcs\"\n        echo \"--------------------------------------------------\"\n        for VPC in $vpcs; do\n            subnets\n            # sg\n        done\n        # for VPC in $vpcs; do\n        #     sg\n        # done\n    done\n}\n\nvpcs \"$@\"\n"
  },
  {
    "path": "Day 07 Git-1/README.md",
    "content": "# Day 07 GIT Azure Terraform JIRA\n\n![a-3d-scene-with-a-terraform-logo-on-one-side-and-a-UJgTFv-TSs-3jQkKJsSVGQ-TP18QzX3TRGEJlCl2aGlmA](https://github.com/user-attachments/assets/df80ecf8-a04e-45b1-9540-0759a6ea8fa2)\n\n\n## Overview\n\nThis project demonstrates using Git for version control while developing infrastructure with Terraform on Azure. We'll cover setting up Git, Terraform, and pushing infrastructure code to a remote GitHub repository.\n\n## Table of Contents\n1. [Git and Remote Repositories](#git-and-remote-repositories)\n2. [Setting Up Environment](#setting-up-environment)\n3. [Azure Service Principal](#azure-service-principal)\n4. [Terraform Project](#terraform-project)\n5. [Managing State and GitHub](#managing-state-and-github)\n6. [Branching Strategy](#branching-strategy)\n\n## Git and Remote Repositories\n\nGit is a tool that helps track changes in code and push it to a remote repository such as GitHub, GitLab, Bitbucket, or Azure DevOps. In a collaborative environment, all team members work on the same repository to manage changes effectively.\n\nFor this project, we are using Terraform to create infrastructure on Azure, and Git to version control the Terraform code.\n\n## Setting Up Environment\n\n### Step 1: Install Git and Terraform\n- **Git Installation**:\n  - Download Git and check the installation via PowerShell: \n    ```sh\n    git --version\n    ```\n\n- **Terraform Installation**:\n  - Create a folder named `software` in C drive.\n  - Download Terraform binary, save it in the folder, extract it, and add its path to the system environment variables:\n    ```sh\n    sysdm.cpl > Advanced > Environment Variables > Path > Edit > New (paste path)\n    ```\n\n### Step 2: Create Project Folder\n- Create a folder named `Azure-Tera-Git`.\n- Inside, create a file called `Credentials` to store credentials.\n\n## Azure Service Principal\n\nTo authenticate between Azure and Terraform:\n\n1. 
**Azure EntraID** > **App Registration** > **New Registration**.\n   - Register an app named `DevSecOps-Saikiran` (Service Principal).\n   - Collect `ClientID` and `TenantID`.\n\n2. Go to **Certificates & Secrets** and create a new client secret.\n\n3. Navigate to **Subscriptions**:\n   - Create a subscription and copy the `SubscriptionID`.\n   - Assign roles:\n     - **IAM** > **Role Assignment** > **Privilege Admin Roles** > **Contributor** > **Select Members**.\n\n## Terraform Project\n\n### Resources to Create\n- Resource Groups (RG)\n- Virtual Network & Subnets\n- Network Security Group (NSG) and Rules\n- Random Passwords\n- Save Passwords in Key Vault\n- Deploy Virtual Machine using passwords from the Key Vault\n\n### Code Structure\n- **provider.tf**: Configure Azure provider for Terraform.\n  ```hcl\n  provider \"azurerm\" {\n    features {}\n  }\n  ```\n\n### Commands\n- **Initialize Terraform**:\n  ```sh\n  terraform init\n  ```\n  (This downloads the Azure provider.)\n\n- **Deploy Resources**:\n  - Create Resource Groups, Virtual Networks, etc., using the keyword `resource`.\n  - The `resource` block is used for all resources, including security groups, VPCs, etc.\n\n- **Manage State File**:\n  - Keep track of the infrastructure state.\n  - Store the state file in an Azure Storage account to maintain consistency:\n    - **Storage Accounts** > **Containers** > Create container (`tfstate`).\n\n- **Apply Configuration**:\n  ```sh\n  terraform init; terraform fmt; terraform validate; terraform plan; terraform apply\n  ```\n\n## Managing State and GitHub\n\n### Initialize Git Repository\n- Create a GitHub repository as **private**.\n- Set up SSH keys for authentication:\n  ```sh\n  ssh-keygen\n  ```\n  Copy the `.pub` key and store it in GitHub.\n\n### Version Control Steps\n1. **Initialize Git**:\n   ```sh\n   git init\n   ```\n2. **Create `.gitignore`** to exclude unnecessary files.\n3. **Commit Changes**:\n   ```sh\n   git add . 
&& git commit -m \"terraform Azure Base Code\"\n   ```\n4. **Push to Remote Repository**:\n   ```sh\n   git branch -m master development\n   git push origin development\n   ```\n\n### Virtual Network Deployment\n- Add code for virtual networks, apply changes, and push the updated code to GitHub.\n\n## Branching Strategy\n\n### Create Branches\n- **Production Branch**:\n  ```sh\n  git branch -b production\n  git push origin production\n  ```\n- **Feature Branch for Updates**:\n  - Create new features in separate branches:\n    ```sh\n    git checkout -b feature/subnet\n    ```\n  - Develop, test, and then create a Pull Request (PR) for merging changes into the **development** or **production** branch.\n\n### Merging with Pull Request\n- Create a PR in GitHub to merge changes from development to production.\n- Add comments and request approval from reviewers.\n- Once approved, merge the code.\n\n### Create JIRA Branch\n- Create a branch based on a JIRA ticket for tracking:\n  ```sh\n  git checkout -b JIRA-123\n  ```\n- Implement Azure Storage account code, commit, and push to the JIRA branch.\n- Create a PR to merge the feature, add relevant comments, and ensure code review.\n\n"
  },
  {
    "path": "Day 08 Git-2/README.md",
    "content": "# Day 08 Git-2\n"
  },
  {
    "path": "Day 09 Git-3/README.md",
    "content": "![an-eye-catching-illustration-of-a-git-merge-and-gi-mich74xdR-iNzhh-DPdCaw-dDLWCUYQQtKBuum9wR-h7w](https://github.com/user-attachments/assets/affbf339-6c43-4fa4-a9e5-a3edf2961a33)\n\n\n# Git Basics: Rebase, Reset, Stash, and Git Secrets\n\nThis repository provides practical examples and explanations on fundamental Git operations such as `rebase`, `reset`, `stash`, and securing sensitive information with `git-secrets`.\n\n## Table of Contents\n\n- [Rebase](#rebase)\n- [Reset](#reset)\n- [Stash](#stash)\n- [Git Secrets](#git-secrets)\n\n---\n\n## Rebase\n\n### What is Git Rebase?\n\nRebasing in Git is used to take the changes from one branch (usually a development branch) and apply them on top of another branch (typically the master branch). This results in a linear commit history, providing a cleaner log. However, it rewrites commit history, which can cause issues in a collaborative environment.\n\n### Example:\n\n1. Create the master branch and commit changes:\n   ```bash\n   mkdir rebase-example && cd rebase-example\n   git init\n\n   I=1\n   while [ $I -lt 6 ]\n   do\n       echo \"Master $I time\" > MasterFile$I\n       git add . && git commit -m \"Master Commit $I\"\n       I=$((I+1))\n   done\n   ```\n\n2. Create the development branch and add commits:\n   ```bash\n   git checkout -b development\n   I=1\n   while [ $I -lt 6 ]\n   do\n       echo \"Development $I time\" > DevFile$I\n       git add . && git commit -m \"Development Commit $I\"\n       I=$((I+1))\n   done\n   ```\n\n3. Now, rebase the `development` branch onto `master`:\n   ```bash\n   git checkout development\n   git rebase master\n   git log --oneline\n   ```\n\n### Golden Rule of Rebase:\n\nAccording to Google’s and Bitbucket's guidelines, **never rebase commits that you’ve already pushed to a shared repository**. This can cause confusion for your collaborators as it rewrites the commit history.\n\n---\n\n## Reset\n\n### Types of Git Reset:\n\n1. 
**Soft Reset**: Moves HEAD back to an earlier commit while keeping your files and staged changes intact.\n2. **Hard Reset**: Moves HEAD back and discards the index and working-tree changes, reverting the files themselves to the earlier commit.\n\n### Example:\n\n1. Create 20 commits in a repository:\n   ```bash\n   mkdir reset-example && cd reset-example\n   git init\n\n   I=1\n   while [ $I -lt 21 ]\n   do\n       echo \"Commit $I content\" > File$I\n       git add . && git commit -m \"Commit $I\"\n       I=$((I+1))\n   done\n   ```\n\n2. Perform a hard reset to an earlier commit:\n   ```bash\n   git reset --hard <commit-id>\n   git log --oneline\n   ls -al\n   ```\n\n3. Perform a soft reset:\n   ```bash\n   git reset --soft <commit-id>\n   ls -al  # Files will remain intact\n   ```\n\n4. If changes were pushed to the remote repository, use the following command to force-push after a reset:\n   ```bash\n   git push origin master --force\n   ```\n\n---\n\n## Stash\n\n### What is Git Stash?\n\nGit stash is used to temporarily save your uncommitted changes so that you can work on something else. Later, you can retrieve those changes using `git stash pop`.\n\n### Example:\n\n1. Modify `app.py`:\n   ```bash\n   nano app.py\n   # Add some code, like:\n   print(\"Hello Saikiran\")\n   ```\n\n2. If you need to switch to another task quickly without committing:\n   ```bash\n   git stash\n   ```\n\n3. To retrieve the stashed changes:\n   ```bash\n   git stash pop\n   ```\n\nIn interviews, mention that `stash` is primarily used for temporarily saving work without committing.\n\n---\n\n## Git Secrets\n\n### Protect Sensitive Information\n\nDevelopers or DevOps engineers sometimes mistakenly commit sensitive information (API keys, PEM files, etc.) into repositories. To prevent this, we can use `git-secrets`.\n\n### Example:\n\n1. Install `git-secrets`:\n   ```bash\n   git clone https://github.com/awslabs/git-secrets.git\n   cd git-secrets\n   sudo apt install make -y\n   sudo make install\n   git secrets --install\n   ```\n\n2. 
Register AWS patterns:\n   ```bash\n   git secrets --register-aws\n   ```\n\n3. Create a file containing sensitive information and attempt to commit it:\n   ```bash\n   nano keys\n   # Add some AWS access keys\n   git add . && git commit -m \"AWS keys\"\n   ```\n\n4. `git-secrets` will block this commit if sensitive information is detected.\n\n---\n\n## Conclusion\n\nThis repository covers essential Git operations:\n- **Rebase** for cleaner history but with caution.\n- **Reset** for undoing commits.\n- **Stash** for temporarily saving work.\n- **Git Secrets** for protecting sensitive information.\n\nThese concepts are critical for anyone working with version control and especially useful in DevOps and development workflows.\n"
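The soft-versus-hard reset behaviour described above can be verified end to end in a throwaway repository. A minimal sketch, assuming `git` is installed; it works entirely inside its own temp directory:

```shell
#!/usr/bin/env bash
set -e
# Work in a throwaway repo so nothing real is touched
tmp=$(mktemp -d) && cd "$tmp"
git init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

for I in 1 2 3; do
    echo "Commit $I content" > "File$I"
    git add . && git commit -qm "Commit $I"
done

git reset --soft HEAD~1   # history rewinds; File3 stays on disk (still staged)
test -f File3 && echo "soft reset: File3 kept"

git reset --hard HEAD~1   # history rewinds again; File2 and the staged File3 are discarded
test ! -f File2 && echo "hard reset: File2 gone"

git log --oneline         # only the first commit remains
```

The same commands map directly onto the `<commit-id>` examples above; `HEAD~1` is just a convenient way to name "one commit back".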
  },
  {
    "path": "Day 10 AWS-Terraform-Part-1/README.md",
    "content": "![a-3d-render-of-a-youtube-thumbnail-with-the-text-d-6vFmUIlxRQ2-ERpv-XkPmg-98wY6FuxTTeyHEHWaD8X5w](https://github.com/user-attachments/assets/5ff94fd5-09ee-4fc9-87df-e16f87bab83c)\n\n\n# Terraform Day 01 Provider Block - Resource Block - S3 backend - Data Source - Remote Data Source Backend \n\n# Code used in video https://github.com/saikiranpi/Terraformsingleinstance.git\n\n# Infrastructure as Code (IaC) with Terraform and Cloud Native Tools (CNT)\n\n## Overview\n\nIn this repository, we explore Infrastructure as Code (IaC) using both Cloud Native Tools (CNT) and Terraform. We'll compare AWS CloudFormation (CFT), Azure Resource Manager (ARM), and GCP Deployment Manager with Terraform. Additionally, we'll cover practical Terraform code examples for AWS, including how to manage infrastructure with modules, data sources, and remote state management.\n\n### Tools Overview:\n\n1. **AWS**: CloudFormation (CFT)\n2. **Azure**: Azure Resource Manager (ARM)\n3. **GCP**: Deployment Manager\n\n### Key Differences between CNT (CFT, ARM) & Terraform:\n\n| Feature                          | CFT & ARM                           | Terraform                      |\n|-----------------------------------|--------------------------------------|---------------------------------|\n| Language                          | JSON or YAML (All configs in one file) | HashiCorp Configuration Language (HCL) |\n| Complexity                        | Learning JSON/YAML is difficult       | HCL is simpler and modular     |\n| Cloud Compatibility               | AWS (CFT), Azure (ARM) only          | Multi-cloud (AWS, Azure, GCP)  |\n| Module Support                    | No                                  | Yes, with reusable modules     |\n| Workspace Support                 | No                                  | Yes, supports multiple workspaces |\n| Dry-Run Capability                | Limited                             | `terraform plan` for effective dry-run |\n| Importing 
Resources               | Complex in AWS, not available in ARM | Simple with `terraform import` |\n\n---\n\n## Terraform and Other HashiCorp Tools:\n\nTerraform is a HashiCorp tool that is cloud-agnostic, which means you can use the same logic to deploy resources across multiple clouds, including AWS, Azure, and GCP. Alongside Terraform, HashiCorp also provides:\n\n- **Packer**: For image automation\n- **Consul**: For service discovery and cluster management\n- **Vault**: For secure secrets management\n- **Nomad**: For workload orchestration (alternative to Kubernetes)\n\n---\n\n## Getting Started with Terraform\n\n### 1. Main Configuration (`main.tf`):\nThis is the main file where we define which cloud provider we will be deploying resources to, in this case, AWS.\n\n```hcl\nprovider \"aws\" {\n  region = \"us-west-2\"\n}\n\n# Other resource definitions will follow...\n```\n\nYou don't need to hard-code your AWS credentials in the code; instead, you can configure them using the `aws configure` command after installing the AWS CLI.\n\n---\n\n### 2. Create Your First VPC (`vpc.tf`):\n\nIn Terraform, any service created is referred to as a **resource**.\n\n```hcl\nresource \"aws_vpc\" \"my_vpc\" {\n  cidr_block = \"10.0.0.0/16\"\n  tags = {\n    Name = \"My-VPC\"\n  }\n}\n\nresource \"aws_internet_gateway\" \"igw\" {\n  vpc_id = aws_vpc.my_vpc.id\n  tags = {\n    Name = \"My-Internet-Gateway\"\n  }\n}\n```\n\n### 3. Using Data Sources:\n\nData sources are used to fetch information from existing resources in your cloud environment. For example, we can fetch an existing VPC using its tag name:\n\n```hcl\ndata \"aws_vpc\" \"Test-Vpc\" {\n  filter {\n    name   = \"tag:Name\"\n    values = [\"Test-Vpc\"]\n  }\n}\n\nresource \"aws_internet_gateway\" \"igw\" {\n  vpc_id = data.aws_vpc.Test-Vpc.id\n}\n```\n\n### 4. Remote State Management:\n\nAfter deploying your resources, Terraform generates a state file. 
This state file can be reused to deploy the same infrastructure in another project. We can manage this using Terraform's remote state:\n\n```hcl\nterraform {\n  backend \"s3\" {\n    bucket = \"my-terraform-state-bucket\"\n    key    = \"project1/terraform.tfstate\"\n    region = \"us-west-2\"\n  }\n}\n```\n\nAfter setting this up, initialize the backend:\n\n```bash\nterraform init\n```\n\n---\n\n### Sample Workflow:\n\n1. **Write Terraform Config**: Create resource files (`vpc.tf`, `ec2.tf`).\n2. **Initialize**: Run `terraform init` to set up the environment.\n3. **Plan**: Run `terraform plan` to perform a dry-run and check for any potential issues.\n4. **Apply**: Run `terraform apply` to provision the resources.\n5. **State Management**: Use remote state for managing large infrastructures and multiple environments.\n\n### Additional Resources:\n\n- **AWS Resources**: VPC, Internet Gateway, Subnets, Security Groups, EC2 instances.\n- **Data Sources**: Reuse and reference existing resources.\n- **Remote State**: Manage infrastructure state across projects.\n\n---\n\n## Conclusion\n\nTerraform offers greater flexibility and multi-cloud support compared to cloud-native tools like CloudFormation (CFT) and Azure Resource Manager (ARM). It simplifies resource management through modules, reusable code, and a powerful state management system. This repository contains code examples and best practices for managing your cloud infrastructure using Terraform.\n"
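The title mentions a remote data source backend, which the examples above stop short of showing. As a hedged sketch, a second project can read the first project's outputs from the S3 backend via the `terraform_remote_state` data source (bucket and key names reused from the backend example; the `vpc_id` output is assumed to be declared in the source project):

```hcl
# Read outputs from the state file written by the backend configured above
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state-bucket"
    key    = "project1/terraform.tfstate"
    region = "us-west-2"
  }
}

# Use the shared VPC ID; assumes the source project declared: output "vpc_id" { ... }
resource "aws_subnet" "app" {
  vpc_id     = data.terraform_remote_state.network.outputs.vpc_id
  cidr_block = "10.0.1.0/24"
}
```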
  },
  {
    "path": "Day 11 AWS-Terraform-Part-2/README.md",
    "content": "![Untitled design](https://github.com/user-attachments/assets/d7d9ad96-e14e-40d8-ac6f-93004fb69da0)\n\n\n\n# Terraform Day 02 - Dependencies, Variables,  TFVars and Create Before Destroy\n\nToday, we'll dive into **dependencies in Terraform** and cover two main topics:  \n1. **Implicit and Explicit Dependencies**  \n2. **Variables and TFVars**\n\n## Dependencies in Terraform\n\nTerraform automatically handles resource dependencies in two ways:\n\n### 1. Implicit Dependencies\nAn **implicit dependency** occurs when one resource refers to the attribute of another resource. For example, when creating a VPC and then an Internet Gateway, the Internet Gateway doesn't inherently know that it must wait for the VPC to be created. However, when you reference the VPC ID in the Internet Gateway resource, Terraform understands that the VPC must be created first.\n\n- **Example:**  \n  When you declare a VPC, its ID is generated only after it is created. Any resource, like a subnet or Internet Gateway, that references this VPC ID creates an implicit dependency.\n\n### 2. Explicit Dependencies\nSometimes, implicit dependencies aren’t enough. For example, if we want the **S3 bucket** to be created only after the VPC is created, we need to use explicit dependencies. This is done using the `depends_on` argument in Terraform.\n\n- **Example:**  \n  A **NAT Gateway** should only be created after a **Route Table** has been established. If the NAT Gateway is created before the route table, it won’t function as expected. This is where **explicit dependencies** come into play using `depends_on`.\n\n### Task Example: VPC, Internet Gateway, and S3 Bucket\n- First, we’ll create a **VPC** and an **S3 bucket**. 
Since there's no direct dependency between the VPC and the S3 bucket, Terraform may create the S3 bucket first.\n- To enforce order, we’ll explore how to use `depends_on` to make sure that resources like the **NAT Gateway** and **S3 bucket** are created in the correct sequence.\n\n### Create S3 Buckets\n1. Create an `s3.tf` file.\n2. In it, define three S3 buckets.\n3. Observe that the S3 buckets and VPC will deploy in parallel because there is no dependency between them.\n\nTo ensure that the S3 bucket is created **after** the VPC, we’ll add explicit dependencies using the `depends_on` argument.\n\n---\n\n## Variables and TFVars\n\n### Variables\nVariables allow us to easily change values without editing the code directly. This makes managing infrastructure more flexible and reusable.\n\n### TFVars\nTerraform variable values can be stored in separate `.tfvars` files, helping to:\n- Keep the code clean.\n- Manage sensitive data or multiple environments efficiently.\n\n### Removing Lock Files\nRemember to clean up stale `.terraform.tfstate.lock.info` files (left behind by an interrupted run) before redeploying to avoid locking issues.\n\n---\n\n## Create Before Destroy\n\nWhen replacing resources, Terraform often follows the **create before destroy** pattern. This ensures minimal downtime by creating a replacement resource before destroying the original.\n\n- **Example:**  \n  When updating a resource like a **Key Pair** or upgrading a component, Terraform will first create the new key, then destroy the old one after the new one is functional.\n\n### Task: Example Deployment\n1. Write the resource configuration.\n2. Run `terraform plan` and observe the changes. (Copy the output to a Notepad for reference.)\n3. Deploy with `terraform apply`.\n4. Add an additional name to the S3 bucket and reapply the changes to see how Terraform manages updates.\n\n---\n\n## Prevent Destroy\n\nUse `prevent_destroy` to safeguard critical resources. 
This is especially useful for resources like databases or sensitive buckets where destruction could cause significant issues.\n\n---\n\nBy the end of this session, you’ll have a deeper understanding of how Terraform handles dependencies, the flexibility of variables, and the best practices for managing infrastructure deployment and updates.\n\n---\n\n"
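The three ideas above — explicit `depends_on`, create-before-destroy, and prevent-destroy — can be sketched in a single resource. Names here are illustrative, not from the original code:

```hcl
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket" # illustrative name

  # Explicit dependency: wait for the VPC even though nothing here references it
  depends_on = [aws_vpc.my_vpc]

  lifecycle {
    create_before_destroy = true # build the replacement before removing the original
    prevent_destroy       = true # terraform destroy errors out instead of deleting this
  }
}
```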
  },
  {
    "path": "Day 12 AWS-Terraform-Part-3/README.md",
    "content": "\n![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-cTa2tZAgR1ShW2UwqRQdcQ-fbR1bkc9RlC23TynNHoRhA](https://github.com/user-attachments/assets/13e11914-f6c0-409a-9c9c-a9ce08f926be)\n\n\n# Terraform Workspaces for Multi-Environment Infrastructure\n\nThis repository demonstrates how to set up and manage multiple identical environments (Dev, UAT, and Prod) using Terraform Workspaces. Each environment will have 3 servers with unique naming conventions. The state management for each environment is handled separately using Terraform's state backend in S3 with DynamoDB for state locking.\n\n## Prerequisites\n\n- Terraform installed on your local machine.\n- AWS CLI configured with proper permissions.\n- S3 bucket for state backend.\n- DynamoDB table for state file locking.\n\n## Infrastructure Overview\n\nYou will be deploying three environments:\n- **Dev**: 3 Servers\n- **UAT**: 3 Servers\n- **Prod**: 3 Servers\n\nEach environment will have its own Terraform `.tfvars` file to manage configuration differences like naming conventions.\n\n## Step-by-Step Guide\n\n### 1. Clone the Base Infrastructure\n\nClone the base Terraform infrastructure and make the necessary changes to create multiple environments.\n\n### 2. Setup State Backend\n\nCreate an S3 bucket to store Terraform state files and configure it as a backend in your `main.tf`. Ensure that the bucket is set up before proceeding.\n\n### 3. Create Environment-Specific `.tfvars` Files\n\n- Rename the existing `terraform.tfvars` to `dev.tfvars`.\n- Create `uat.tfvars` and `prod.tfvars` with environment-specific changes (like naming conventions for servers).\n\n### 4. Initialize and Validate Terraform\n\n```bash\nterraform init\nterraform validate\nterraform fmt\n```\n\n### 5. 
Apply Terraform Configuration\n\nDeploy the infrastructure for each environment using the appropriate `.tfvars` file.\n\n#### For Dev Environment:\n```bash\nterraform workspace new dev\nterraform apply -var-file=dev.tfvars\n```\n\n#### For UAT Environment:\n```bash\nterraform workspace new uat\nterraform apply -var-file=uat.tfvars\n```\n\n#### For Prod Environment:\n```bash\nterraform workspace new prod\nterraform apply -var-file=prod.tfvars\n```\n\n### 6. Managing State Files for Different Environments\n\nEach environment requires a separate state file. If you use the same state backend without separating the state files, Terraform will attempt to apply changes across environments.\n\nTo manage state files for different environments, use Terraform workspaces:\n\n```bash\nterraform workspace new dev\nterraform workspace new uat\nterraform workspace new prod\n```\n\nEach workspace will create a separate folder in the S3 bucket to store the respective environment’s state file.\n\n### 7. Adding EC2 Instances\n\nModify the `ec2.tf` file to add the EC2 instance configurations:\n- Use different AMI IDs for each environment.\n- Example of setting the server name:\n  ```hcl\n  server_name = \"${var.env}-Server-1\"\n  ```\n\n### 8. User Data Configuration\n\nAdd user data to the EC2 instances to update the web server’s index page:\n```bash\n#!/bin/bash\necho \"Hello from ${var.env}\" > /var/www/html/index.nginx-debian.html\n```\n\n### 9. Switch Between Workspaces\n\nTo switch between environments, use the `terraform workspace` commands:\n\n```bash\nterraform workspace select dev\nterraform plan -var-file=dev.tfvars\nterraform apply -var-file=dev.tfvars\n```\n\nRepeat the process for UAT and Prod environments by selecting their respective workspaces.\n\n### 10. Check Public IPs of All Servers\n\nAfter deployment, verify the public IP addresses of the servers in each environment.\n\n### 11. 
Clean Up (Destroy Infrastructure)\n\nTo destroy resources from each environment:\n```bash\nterraform workspace select prod\nterraform destroy -var-file=prod.tfvars\n\nterraform workspace select dev\nterraform destroy -var-file=dev.tfvars\n\nterraform workspace select uat\nterraform destroy -var-file=uat.tfvars\n```\n\n### 12. Delete Workspaces\n\nOnce the environments are destroyed, delete the workspaces:\n```bash\nterraform workspace delete dev\nterraform workspace delete uat\nterraform workspace delete prod\n```\n\n### 13. DynamoDB for State Locking\n\nTo avoid state file conflicts, implement state locking using DynamoDB.\n\n1. Create a `dynamodb.tf` file:\n    ```hcl\n    resource \"aws_dynamodb_table\" \"terraform_locks\" {\n      name         = \"terraform-state-lock\"\n      billing_mode = \"PAY_PER_REQUEST\"\n      hash_key     = \"LockID\"\n  \n      attribute {\n        name = \"LockID\"\n        type = \"S\"\n      }\n    }\n    ```\n\n2. Apply the DynamoDB configuration:\n    ```bash\n    terraform apply\n    ```\n\n3. Add the DynamoDB state locking configuration to your backend in `main.tf`:\n    ```hcl\n    backend \"s3\" {\n      bucket         = \"your-s3-bucket\"\n      key            = \"path/to/terraform.tfstate\"\n      region         = \"us-west-2\"\n      dynamodb_table = \"terraform-state-lock\"\n    }\n    ```\n\n### 14. Excluding DynamoDB from Terraform State\n\nIf you wish to manage DynamoDB outside of Terraform to prevent it from being destroyed, remove it from the state file:\n\n```bash\nterraform state rm aws_dynamodb_table.terraform_locks\n```\n\n### 15. Push Code to GitHub\n\nOnce all the files are ready, push them to your GitHub repository:\n\n```bash\ngit init\ngit add .\ngit commit -m \"Initial commit for Terraform multi-environment setup\"\ngit remote add origin https://github.com/your-username/terraform-multi-env.git\ngit push -u origin main\n```\n\n### 16. Deploying the Infrastructure from GitHub\n\n1. 
Clone the repository onto your local machine or remote instance:\n    ```bash\n    git clone https://github.com/your-username/terraform-multi-env.git\n    ```\n2. Run the Terraform commands to deploy the infrastructure:\n    ```bash\n    terraform init\n    terraform plan -var-file=dev.tfvars\n    terraform apply -var-file=dev.tfvars\n    ```\n\n---\n\n## Conclusion\n\nThis project demonstrates how to manage multiple identical environments (Dev, UAT, Prod) using Terraform Workspaces, S3 for state management, and DynamoDB for state locking. Be sure to separate your environments' state files to avoid conflicts and manage infrastructure more effectively.\n\nFeel free to explore, modify, and extend this setup for your own infrastructure needs.\n\n--- \n\n"
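Beyond passing the environment name through `.tfvars`, the current workspace name is also available in code as `terraform.workspace`, which keeps naming consistent across environments. A sketch — the `var.ami_id` variable is assumed:

```hcl
locals {
  env = terraform.workspace # "dev", "uat", or "prod"
}

resource "aws_instance" "web" {
  count         = 3
  ami           = var.ami_id # assumed per-environment variable
  instance_type = "t3.micro"

  tags = {
    Name = "${local.env}-Server-${count.index + 1}" # e.g. dev-Server-1
  }
}
```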
  },
  {
    "path": "Day 13 AWS-Terraform-Part-4/README.md",
    "content": "\n![Untitled design](https://github.com/user-attachments/assets/58f96a76-cbc0-4ba5-ae0c-41e6f85c9b2b)\n\n\n# Terraform Day 5: Enabling TF_LOG and Working with Sensitive Information\n\n## Overview\n\nIn this session, we explore how to enable logging in Terraform using environment variables, how to handle sensitive information such as passwords, and how to integrate AWS Secrets Manager for securely storing sensitive data. We also demonstrate deploying an RDS MySQL instance with Terraform.\n\n## Topics Covered\n\n1. **Enabling TF_LOG for Debugging**\n2. **Working with Sensitive Information**\n3. **Using AWS Secrets Manager with Terraform**\n4. **Deploying RDS MySQL Instance**\n\n## Enabling TF_LOG\n\nTerraform provides the `TF_LOG` environment variable for controlling log verbosity. You can choose from different levels like `TRACE`, `DEBUG`, `INFO`, `WARN`, and `ERROR`.\n\n### Steps to Enable TF_LOG\n\n1. **Set TF_LOG for detailed trace logs:**\n\n    ```powershell\n    $env:TF_LOG = \"TRACE\"\n    terraform destroy\n    ```\n\n2. **Set TF_LOG for error-level logging:**\n\n    ```powershell\n    $env:TF_LOG = \"ERROR\"\n    terraform destroy\n    ```\n\n3. **Write logs to a file:**\n\n    ```powershell\n    $env:TF_LOG = \"TRACE\"\n    $env:TF_LOG_PATH = \"./logs/terraform.log\"\n    terraform destroy\n    ```\n\n## Handling Sensitive Information\n\nWhen working with sensitive data like usernames and passwords, it is important to avoid hardcoding them in the Terraform scripts. Instead, use variables marked as `sensitive`.\n\n### Example\n\nIn your `variables.tf`:\n\n```hcl\nvariable \"username\" {\n  type      = string\n  sensitive = true\n}\n\nvariable \"password\" {\n  type      = string\n  sensitive = true\n}\n```\n\n### Storing Passwords Securely with AWS Secrets Manager\n\nTo securely store and retrieve sensitive information like passwords, you can use AWS Secrets Manager.\n\n1. 
**Generate a random password:**\n\n    ```hcl\n    resource \"random_password\" \"master\" {\n      length           = 16\n      special          = true\n      override_special = \"_!%^\"\n    }\n    ```\n\n2. **Store the password in AWS Secrets Manager:**\n\n    ```hcl\n    resource \"aws_secretsmanager_secret\" \"password\" {\n      name = \"test-db-password\"\n    }\n\n    resource \"aws_secretsmanager_secret_version\" \"password\" {\n      secret_id     = aws_secretsmanager_secret.password.id\n      secret_string = random_password.master.result\n    }\n    ```\n\n3. **Retrieve the password when deploying RDS:**\n\n    ```hcl\n    data \"aws_secretsmanager_secret_version\" \"password\" {\n      secret_id = aws_secretsmanager_secret.password.id\n    }\n\n    resource \"aws_db_instance\" \"default\" {\n      identifier           = \"testdb\"\n      allocated_storage    = 10\n      storage_type         = \"gp2\"\n      engine               = \"mysql\"\n      engine_version       = \"5.7\"\n      instance_class       = \"db.t2.medium\"\n      username             = \"dbadmin\"\n      password             = data.aws_secretsmanager_secret_version.password.secret_string\n      publicly_accessible  = true\n      db_subnet_group_name = aws_db_subnet_group.default.id\n    }\n    ```\n\n## Deploying RDS MySQL Instance\n\n### Steps:\n\n1. **Create a subnet group:**\n\n    ```hcl\n    resource \"aws_db_subnet_group\" \"default\" {\n      name       = \"main\"\n      subnet_ids = [\n        aws_subnet.subnet1-public.id,\n        aws_subnet.subnet2-public.id,\n      ]\n      tags = {\n        Name = \"My DB subnet group\"\n      }\n    }\n    ```\n\n2. 
**Deploy the RDS instance:**\n\n    ```hcl\n    resource \"aws_db_instance\" \"default\" {\n      identifier         = \"testdb\"\n      allocated_storage  = 10\n      engine             = \"mysql\"\n      engine_version     = \"5.7\"\n      instance_class     = \"db.t2.medium\"\n      name               = \"mydb\"\n      username           = \"dbadmin\"\n      password           = data.aws_secretsmanager_secret_version.password.secret_string\n      publicly_accessible = true\n      db_subnet_group_name = aws_db_subnet_group.default.id\n    }\n    ```\n\n### Connecting to RDS via MySQL Workbench:\n\n1. In AWS Console, go to **RDS > Databases > testdb** and copy the **endpoint**.\n2. In **MySQL Workbench**, use:\n   - Hostname: `<copied endpoint>`\n   - Username: `dbadmin`\n   - Password: Fetch from **AWS Secrets Manager**.\n\n### Destroy the Infrastructure\n\nAfter testing, remember to clean up:\n\n```bash\nterraform destroy\n```\n\n## Interview Tip: Handling Sensitive Information\n\nWhen asked how to handle sensitive information in Terraform, you can explain that Terraform can integrate with AWS Secrets Manager to securely store and retrieve sensitive data. Sensitive variables should be defined in Terraform to avoid exposing sensitive information directly in the code.\n\n---\n\nThis README provides an overview of how to enable logging, securely manage sensitive information, and deploy an RDS MySQL instance using Terraform.\n"
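One more hedge on sensitive data: if you expose the generated password as a Terraform output (for example, to wire it into another module), mark the output `sensitive` so plan and apply redact it:

```hcl
output "db_password" {
  value     = data.aws_secretsmanager_secret_version.password.secret_string
  sensitive = true # shown as (sensitive value) in plan/apply output
}
```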
  },
  {
    "path": "Day 14 AWS-Terraform-Functions-1/README.md",
    "content": "\n# Terraform Functions Part: 1\n\n![Thumb](https://github.com/user-attachments/assets/69bc2680-9ffe-4852-a7f0-f2b9ed8496c5)\n\n\nThis repository demonstrates the efficient use of Terraform functions to manage infrastructure as code without duplicating resources. The focus is on creating modular, scalable, and maintainable Terraform configurations.\n\n## Overview\n\nIn this project, we will utilize Terraform functions and techniques to create a cloud infrastructure with multiple instances and subnets efficiently. We aim to minimize duplication in our code by using various Terraform functionalities such as `count`, `for_each`, `locals`, and dynamic blocks.\n\n### Key Objectives\n\n- Clone the repository.\n- Streamline Terraform configuration files by removing unnecessary variables and resources.\n- Implement best practices for variable management and resource creation.\n\n## Repository Structure\n\n- **main.tf**: Main configuration file containing resource definitions.\n- **variables.tf**: File for variable definitions.\n- **terraform.tfvars**: File for variable values.\n- **locals.tf**: File for local variables.\n- **subnet.tf**: File dedicated to managing subnet resources.\n- **routing_table.tf**: File for route table configurations.\n- **sg.tf**: File for security group configurations.\n\n## Step-by-Step Tasks\n\n### 1. Clone Repository\n\nStart by cloning the repository to your local environment.\n\n### 2. 
Clean Up Terraform Files\n\n#### variables.tf\n- **Remove**:\n  - Access Key and Secret Key\n  - AMI\n  - Internet Gateway (IGW)\n  - All CIDR and Subnet entries\n- **Keep**:\n  - Availability Zones (AZs)\n  - Environment (ENV)\n- **Define Variables**:\n  - Create a variable for `Public_cidr_block` to manage the creation of 6 subnets (3 private and 3 public).\n  - Define `Private_cidr_block`.\n\n#### terraform.tfvars\n- Copy all relevant variables from `variables.tf` and paste them into `terraform.tfvars`.\n- **Remove** routing table configurations to let them inherit the VPC name.\n\n### 3. Modify main.tf\n\n- **Remove** Access Key and Secret Key entries.\n- **Paste** remote backend configuration.\n- **Update VPC Tags**: Instead of passing values for each tag, utilize `locals` for common tag values.\n\n### 4. Create locals.tf\n\n- Define local variables for common tag values.\n- Access local variables in the VPC configuration using the appropriate syntax.\n\n### 5. Update Subnet Configurations\n\n#### Public Subnets\n- Remove additional public subnets (subnet 2 and 3).\n- Use `count = 3` to create the necessary number of public subnets.\n- Utilize the `element` function to reference specific CIDR blocks based on the count index.\n\n#### Private Subnets\n- Rename resources to reflect they are private.\n- Adjust tags accordingly.\n\n### 6. Route Tables Configuration\n\n- Define separate route tables for public and private subnets.\n- **Comment Out** route table associations temporarily.\n- Use `terraform plan` to preview subnet configurations.\n\n### 7. Organize Subnets into subnet.tf\n\n- Move all subnet resources to `subnet.tf`.\n- Use `count.index + 1` to manage subnet indexing dynamically.\n\n### 8. Create routing_table.tf\n\n- Move all route table blocks to this file.\n- Address subnet ID issues by ensuring the correct variable references.\n- Introduce Splat syntax for managing multiple subnet associations.\n\n### 9. 
Dynamic Security Group Management\n\n#### sg.tf\n- Copy necessary configurations from `main.tf` into `sg.tf`.\n- Add ports 443 and 22 to the security group.\n- Implement dynamic ingress rules by creating an `ingress_value` variable (this is the name used in `sg.tf` and `variables.tf`).\n- Populate this variable with values for multiple ports, as in `terraform.tfvars`: `[\"80\", \"8080\", \"443\", \"8443\", \"22\", \"3306\", \"1900\", \"1443\"]`.\n\n### 10. Finalization\n\n- Run `terraform fmt` to format the configuration files.\n- Execute `terraform plan` and `terraform apply` to validate and deploy the infrastructure.\n- Check inbound and outbound rules to ensure proper configuration.\n\n## Conclusion\n\nBy following these steps and utilizing Terraform functions, we can efficiently manage our cloud infrastructure with minimal duplication and improved scalability. This project serves as a template for creating robust Terraform configurations.\n\n---\n"
  },
  {
    "path": "Day 14 AWS-Terraform-Functions-1/RTA.tf",
    "content": "resource \"aws_route_table_association\" \"public-subnets\" {\n  #   count          = 3\n  count          = length(var.public_cird_block)\n  subnet_id      = element(aws_subnet.public-subnet.*.id, count.index)\n  route_table_id = aws_route_table.public-route-table.id\n}\n\n\nresource \"aws_route_table_association\" \"private-subnets\" {\n  #   count          = 3\n  count          = length(var.private_cird_block)\n  subnet_id      = element(aws_subnet.private-subnet.*.id, count.index)\n  route_table_id = aws_route_table.private-route-table.id\n}\n\n"
  },
  {
    "path": "Day 14 AWS-Terraform-Functions-1/locals.tf",
    "content": "locals {\n  Owner      = \"Prod-Team\"\n  costcenter = \"Hyd-8080\"\n  TeamDL     = \"Saikiran.pinapathruni18@gmail.com\"\n}\n"
  },
  {
    "path": "Day 14 AWS-Terraform-Functions-1/main.tf",
"content": "# This Terraform code deploys basic VPC infra.\nprovider \"aws\" {\n  region = var.aws_region\n}\n\nterraform {\n  backend \"s3\" {\n    bucket = \"workspacesbucket01\"\n    key    = \"function.tfstate\"\n    region = \"us-east-1\"\n  }\n}\n\n\nresource \"aws_vpc\" \"default\" {\n  cidr_block           = var.vpc_cidr\n  enable_dns_hostnames = true\n\n  tags = {\n    Name        = \"${var.vpc_name}\"\n    Owner       = local.Owner\n    costcenter  = local.costcenter\n    TeamDL      = local.TeamDL\n    environment = \"${var.environment}\"\n  }\n}\n\nresource \"aws_internet_gateway\" \"default\" {\n  vpc_id = aws_vpc.default.id\n  tags = {\n    Name = \"${var.vpc_name}-IGW\"\n  }\n}\n\nresource \"aws_route_table\" \"public-route-table\" {\n  vpc_id = aws_vpc.default.id\n\n  route {\n    cidr_block = \"0.0.0.0/0\"\n    gateway_id = aws_internet_gateway.default.id\n  }\n\n  tags = {\n    Name        = \"${var.vpc_name}-Public-RT\"\n    Owner       = local.Owner\n    costcenter  = local.costcenter\n    TeamDL      = local.TeamDL\n    environment = \"${var.environment}\"\n\n  }\n}\n\n\nresource \"aws_route_table\" \"private-route-table\" {\n  vpc_id = aws_vpc.default.id\n\n  # No 0.0.0.0/0 route to the internet gateway here: a default route to the\n  # IGW would effectively make the private subnets public. Private subnets\n  # should reach the internet through a NAT gateway instead (added in a later lesson).\n\n  tags = {\n    Name        = \"${var.vpc_name}-private-RT\"\n    Owner       = local.Owner\n    costcenter  = local.costcenter\n    TeamDL      = local.TeamDL\n    environment = \"${var.environment}\"\n\n  }\n}\n\n\n\n\n# data \"aws_ami\" \"my_ami\" {\n#      most_recent      = true\n#      #name_regex       = \"^sai\"\n#      owners           = [\"232323232323232323\"]\n# }\n\n\n# resource \"aws_instance\" \"web-1\" {\n#     ami = \"${data.aws_ami.my_ami.id}\"\n#     #ami = \"ami-0d857ff0f5fc4e03b\"\n#     availability_zone = \"us-east-1a\"\n#     instance_type = \"t2.micro\"\n#     key_name = \"LaptopKey\"\n#     subnet_id = \"${aws_subnet.subnet1-public.id}\"\n#     
vpc_security_group_ids = [\"${aws_security_group.allow_all.id}\"]\n#     associate_public_ip_address = true\n#     tags = {\n#         Name = \"Server-1\"\n#         Env = \"Prod\"\n#         Owner = \"sai\"\n# \tCostCenter = \"ABCD\"\n#     }\n#      user_data = <<-EOF\n#      #!/bin/bash\n#      \tsudo apt-get update\n#      \tsudo apt-get install -y nginx\n#      \techo \"<h1>${var.env}-Server-1</h1>\" | sudo tee /var/www/html/index.html\n#      \tsudo systemctl start nginx\n#      \tsudo systemctl enable nginx\n#      EOF\n\n# }\n\n# resource \"aws_dynamodb_table\" \"state_locking\" {\n#   hash_key = \"LockID\"\n#   name     = \"dynamodb-state-locking\"\n#   attribute {\n#     name = \"LockID\"\n#     type = \"S\"\n#   }\n#   billing_mode = \"PAY_PER_REQUEST\"\n# }\n\n# output \"ami_id\" {\n#   value = data.aws_ami.my_ami.id\n# }\n"
  },
  {
    "path": "Day 14 AWS-Terraform-Functions-1/sg.tf",
"content": "resource \"aws_security_group\" \"allow_all\" {\n  name        = \"${var.vpc_name}-allow-all\"\n  description = \"Allow all Inbound traffic\"\n  vpc_id      = aws_vpc.default.id\n\n  # Ingress rule block iterating dynamically over var.ingress_value\n  dynamic \"ingress\" {\n    for_each = var.ingress_value\n    content {\n      from_port   = ingress.value\n      to_port     = ingress.value\n      protocol    = \"tcp\"\n      cidr_blocks = [\"0.0.0.0/0\"] # open to any IP; fine for a lab, too permissive for production\n    }\n  }\n\n  # Egress rule block\n  egress {\n    from_port   = 0\n    to_port     = 0\n    protocol    = \"-1\"\n    cidr_blocks = [\"0.0.0.0/0\"] # Allow outbound traffic to any IP\n  }\n\n  # Tags block\n  tags = {\n    Name        = \"${var.vpc_name}-allow-all\"\n    Owner       = local.Owner\n    costcenter  = local.costcenter\n    TeamDL      = local.TeamDL\n    environment = var.environment\n  }\n}\n"
  },
  {
    "path": "Day 14 AWS-Terraform-Functions-1/subnet.tf",
"content": "resource \"aws_subnet\" \"public-subnet\" {\n  # count           = 3 # replaced by the length() lookup below\n  count             = length(var.public_cird_block)\n  vpc_id            = aws_vpc.default.id\n  cidr_block        = element(var.public_cird_block, count.index) # element() is 0-based and wraps; adding 1 to count.index would skip the first CIDR\n  availability_zone = element(var.azs, count.index)\n\n  tags = {\n    Name        = \"${var.vpc_name}-public-subnet-${count.index + 1}\"\n    Owner       = local.Owner\n    costcenter  = local.costcenter\n    TeamDL      = local.TeamDL\n    environment = \"${var.environment}\"\n\n  }\n}\n\nresource \"aws_subnet\" \"private-subnet\" {\n  # count           = 3 # replaced by the length() lookup below\n  count             = length(var.private_cird_block)\n  vpc_id            = aws_vpc.default.id\n  cidr_block        = element(var.private_cird_block, count.index)\n  availability_zone = element(var.azs, count.index)\n\n  tags = {\n    Name        = \"${var.vpc_name}-private-subnet-${count.index + 1}\"\n    Owner       = local.Owner\n    costcenter  = local.costcenter\n    TeamDL      = local.TeamDL\n    environment = \"${var.environment}\"\n\n  }\n}\n"
  },
  {
    "path": "Day 14 AWS-Terraform-Functions-1/terraform.tfvars",
    "content": "aws_region         = \"us-east-1\"\nvpc_cidr           = \"172.18.0.0/16\"\nvpc_name           = \"DevSecOps-Vpc\"\nkey_name           = \"SecOps-Key\"\nazs                = [\"us-east-1a\", \"us-east-1b\", \"us-east-1c\"]\npublic_cird_block  = [\"172.18.1.0/24\", \"172.18.2.0/24\", \"172.18.3.0/24\", \"172.18.4.0/24\", \"172.18.5.0/24\"]\nprivate_cird_block = [\"172.18.10.0/24\", \"172.18.20.0/24\", \"172.18.30.0/24\", \"172.18.40.0/24\", \"172.18.50.0/24\"]\nenvironment        = \"Prod\"\ningress_value      = [\"80\", \"8080\", \"443\", \"8443\", \"22\", \"3306\", \"1900\", \"1443\"]\n"
  },
  {
    "path": "Day 14 AWS-Terraform-Functions-1/variables.tf",
"content": "# NOTE: cird is a long-standing typo for cidr; it is kept because terraform.tfvars and the other .tf files reference these names.\nvariable \"aws_region\" { type = string }\nvariable \"vpc_cidr\" { type = string }\nvariable \"vpc_name\" { type = string }\nvariable \"key_name\" { type = string }\nvariable \"azs\" { type = list(string) }\nvariable \"public_cird_block\" { type = list(string) }\nvariable \"private_cird_block\" { type = list(string) }\nvariable \"environment\" { type = string }\nvariable \"ingress_value\" { type = list(string) }\n"
  },
  {
    "path": "Day 15 AWS-Terraform-Functions-2/README.md",
"content": "![a-futuristic-3d-scene-featuring-an-astronaut-sitti-JmnDsV37TdiaW1tmnfgktg-hPykpO-xSY6aYtvVHr0G_g](https://github.com/user-attachments/assets/5bd8031e-c1a2-4305-b371-b7551ad62055)\n\n\n# Terraform Functions - 2\n\nThis repository demonstrates Terraform features such as the `lookup` function, the `count` meta-argument, and conditional expressions, along with provisioners (`file`, `remote-exec`, `local-exec`). The goal is to dynamically manage infrastructure using variables, conditional logic, and provisioning tasks.\n\n## Project Structure\n\n- **`ec2.tf`**: Main file to create EC2 instances.\n- **`variables.tf`**: Define variables such as AMIs, instance type, keyname, and environment.\n- **`terraform.tfvars`**: Assign values to variables such as AMI IDs for different regions and the environment.\n- **`null.tf`**: Implements `null_resource` to run scripts without recreating instances.\n- **`user-data.sh`**: Script to install software on EC2 instances after they are created.\n\n## Terraform Functions Overview\n\n### 1. AMI Lookup\n\nThe `lookup` function helps dynamically retrieve AMI IDs based on the region.\n\nExample:\n```hcl\nvariable \"amis\" {\n  type = map(string)\n}\n\n# In terraform.tfvars\namis = {\n  us-east-1 = \"ami-0abcd1234efgh5678\"\n  us-east-2 = \"ami-0wxyz1234mnop5678\"\n}\n\n# In ec2.tf\nami = lookup(var.amis, var.aws_region)\n```\n\nThis setup allows us to deploy EC2 instances using region-specific AMIs. For example, an AMI registered in `us-east-1` cannot be used in `us-east-2`.\n\n### 2. Instance Count with Subnet Mapping\n\nWe declare three subnets, and each subnet must map to one EC2 instance. By using `count`, we can define how many instances to create based on the length of the subnet list.\n\n```hcl\ncount = length(var.public_cidr_block)\n\nsubnet_id = element(var.subnets, count.index)\n```\n\n### 3. 
Conditional Deployment\n\nUsing a conditional expression, we can decide how many instances to create based on the environment.\n\n```hcl\ncount = var.environment == \"Prod\" ? 3 : 1\n```\n\nThis means if the environment is `Prod`, 3 instances are created; otherwise, 1 instance is created.\n\n## Provisioners\n\n### File Provisioning with `remote-exec`\n\nWe use provisioners to apply scripts after EC2 instances are created without recreating the instances.\n\n- **User Data**: Initially, the user data script is passed during instance creation.\n- **Provisioners**: To avoid recreating instances for every change, we use `null_resource` to run scripts or commands on existing instances.\n\nExample:\n```hcl\nresource \"null_resource\" \"cluster\" {\n  count = length(var.public_cidr_block)\n\n  provisioner \"remote-exec\" {\n    connection {\n      type        = \"ssh\"\n      user        = \"ec2-user\"\n      private_key = file(\"path/to/key.pem\")\n      host        = element(aws_instance.example.*.public_ip, count.index)\n    }\n    inline = [\n      \"sudo bash /tmp/script.sh\"\n    ]\n  }\n}\n```\n\n### Tainting Resources\n\nIf we need to recreate a resource, we can use Terraform's `taint` feature. Marking a resource as \"tainted\" forces Terraform to recreate it during the next apply.\n\nExample:\n```bash\nterraform taint \"null_resource.cluster[0]\"\n```\n\nBecause the resource uses `count`, the index must be included in the address. This marks the resource as needing recreation, allowing the new script to be applied without affecting the rest of the infrastructure. On Terraform v0.15.2 and later, `terraform apply -replace=\"null_resource.cluster[0]\"` is the recommended alternative to `taint`.\n\n## Commands\n\n```bash\nterraform init      # Initialize Terraform\nterraform fmt       # Format the code\nterraform validate  # Validate the configuration\nterraform apply     # Apply the configuration\n```\n\n### Taint Example\n\n```bash\nterraform taint \"null_resource.cluster[0]\"\nterraform apply\n```\n\n## Next Steps\n\n- Explore **Terraform Modules** for better structuring and reuse of code.\n\n## Interview Tips\n\n**What is taint in Terraform?**\nTaint marks a resource for recreation. 
You can manually taint a resource using the `terraform taint` command, causing Terraform to destroy and recreate it during the next `apply`. Conversely, you can \"untaint\" a resource to prevent it from being recreated.\n\n---\n\nStay tuned for the next session where we’ll dive into **Terraform Modules**!\n"
  },
  {
    "path": "Day 15 AWS-Terraform-Functions-2/private-ec2.tf",
"content": "resource \"aws_instance\" \"private-server\" {\n  # count = length(var.private_cird_block)\n  count                  = var.environment == \"Prod\" ? 3 : 1\n  ami                    = lookup(var.amis, var.aws_region)\n  instance_type          = \"t2.micro\"\n  key_name               = var.key_name\n  subnet_id              = element(aws_subnet.private-subnet.*.id, count.index) # element() is 0-based; adding 1 would skip the first subnet\n  vpc_security_group_ids = [aws_security_group.allow_all.id]\n  # associate_public_ip_address = true\n  tags = {\n    Name        = \"${var.vpc_name}-Private-Server-${count.index + 1}\"\n    Owner       = local.Owner\n    costcenter  = local.costcenter\n    TeamDL      = local.TeamDL\n    environment = \"${var.environment}\"\n  }\n  # NOTE: apt and git below need outbound internet access, which private\n  # subnets only have through a NAT gateway.\n  user_data = <<-EOF\n     #!/bin/bash\n     sudo apt update\n     sudo apt install nginx -y\n     sudo apt install git -y\n     sudo git clone https://github.com/saikiranpi/SecOps-game.git\n     sudo rm -rf /var/www/html/index.nginx-debian.html\n     sudo cp SecOps-game/index.html /var/www/html/index.html\n     echo \"<h1>${var.vpc_name}-private-Server-${count.index + 1}</h1>\" >> /var/www/html/index.html\n     sudo systemctl start nginx\n     sudo systemctl enable nginx\n EOF\n}\n"
  },
  {
    "path": "Day 15 AWS-Terraform-Functions-2/public-ec2.tf",
"content": "resource \"aws_instance\" \"public-server\" {\n  # count = length(var.public_cird_block)\n  count                       = var.environment == \"Prod\" ? 3 : 1\n  ami                         = lookup(var.amis, var.aws_region)\n  instance_type               = \"t2.micro\"\n  key_name                    = var.key_name\n  subnet_id                   = element(aws_subnet.public-subnet.*.id, count.index) # element() is 0-based; adding 1 would skip the first subnet\n  vpc_security_group_ids      = [aws_security_group.allow_all.id]\n  associate_public_ip_address = true\n  tags = {\n    Name        = \"${var.vpc_name}-Public-Server-${count.index + 1}\"\n    Owner       = local.Owner\n    costcenter  = local.costcenter\n    TeamDL      = local.TeamDL\n    environment = \"${var.environment}\"\n  }\n\n}\n"
  },
  {
    "path": "Day 15 AWS-Terraform-Functions-2/terraform.tfvars",
    "content": "aws_region         = \"us-east-1\"\nvpc_cidr           = \"172.18.0.0/16\"\nvpc_name           = \"DevSecOps-Vpc\"\nkey_name           = \"SecOps-Key\"\nazs                = [\"us-east-1a\", \"us-east-1b\", \"us-east-1c\"]\npublic_cird_block  = [\"172.18.1.0/24\", \"172.18.2.0/24\", \"172.18.3.0/24\"]\nprivate_cird_block = [\"172.18.10.0/24\", \"172.18.20.0/24\", \"172.18.30.0/24\"]\nenvironment        = \"Dev\"\ningress_value      = [\"80\", \"8080\", \"443\", \"8443\", \"22\", \"3306\", \"1900\", \"1443\"]\namis = {\n  us-east-1 = \"ami-0866a3c8686eaeeba\"\n  us-east-2 = \"ami-0ea3c35c5c3284d82\"\n}\n"
  },
  {
    "path": "Day 15 AWS-Terraform-Functions-2/txt.tf",
"content": "#   user_data = <<-EOF\n#     #!/bin/bash\n#     sudo apt update\n#     sudo apt install nginx -y\n#     sudo apt install git -y\n#     sudo git clone https://github.com/saikiranpi/SecOps-game.git\n#     sudo rm -rf /var/www/html/index.nginx-debian.html\n#     sudo cp SecOps-game/index.html /var/www/html/index.html\n#     echo \"<h1>${var.vpc_name}-private-Server-${count.index + 1}</h1>\" >> /var/www/html/index.html\n#     sudo systemctl start nginx\n#     sudo systemctl enable nginx\n# EOF\n\n\n# provisioner \"file\" {\n#   source      = \"user_data.sh\"\n#   destination = \"/tmp/user_data.sh\"\n\n#   connection {\n#     type        = \"ssh\"\n#     user        = \"ubuntu\"\n#     private_key = file(\"LaptopKey.pem\")\n#     host        = element(aws_instance.public-server.*.public_ip, count.index)\n#   }\n# }\n\n# provisioner \"remote-exec\" {\n#   inline = [\n#     \"sudo chmod +x /tmp/user_data.sh\",\n#     \"sudo /tmp/user_data.sh\",\n#     \"sudo apt update\",\n#     \"sudo apt install jq unzip -y\",\n#   ]\n\n#   connection {\n#     type        = \"ssh\"\n#     user        = \"ubuntu\"\n#     private_key = file(\"SecOps-Key.pem\")\n#     host        = element(aws_instance.public-server.*.public_ip, count.index)\n#   }\n# }\n"
  },
  {
    "path": "Day 15 AWS-Terraform-Functions-2/user-data.sh",
"content": "#!/bin/bash\n# NOTE: ${var.vpc_name} and ${count.index + 1} are Terraform interpolations; they\n# only expand when this script's contents are inlined in a user_data heredoc in a\n# .tf file. When running the script directly, replace them with literal values.\nsudo apt update\nsudo apt install nginx -y\nsudo apt install git -y\nsudo git clone https://github.com/saikiranpi/SecOps-game.git\nsudo rm -rf /var/www/html/index.nginx-debian.html\nsudo cp SecOps-game/index.html /var/www/html/index.html\necho \"<h1>${var.vpc_name}-public-Server-${count.index + 1}</h1>\" >> /var/www/html/index.html\nsudo systemctl start nginx\nsudo systemctl enable nginx\n"
  },
  {
    "path": "Day 15 AWS-Terraform-Functions-2/variable.sh",
"content": "# NOTE: Terraform only loads *.tf files; this file should be named variables.tf for these declarations to take effect.\nvariable \"aws_region\" { type = string }\nvariable \"vpc_cidr\" { type = string }\nvariable \"vpc_name\" { type = string }\nvariable \"key_name\" { type = string }\nvariable \"azs\" { type = list(string) }\nvariable \"public_cird_block\" { type = list(string) }\nvariable \"private_cird_block\" { type = list(string) }\nvariable \"environment\" { type = string }\nvariable \"ingress_value\" { type = list(string) }\nvariable \"amis\" { type = map(string) }\n"
  },
  {
    "path": "Day 16 AWS-Terraform-Part-6 Modules-Part-1/README.md",
    "content": "# Terraform Project: Modularized Infrastructure Setup\n\n\n![a-vibrant-and-energetic-youtube-thumbnail-with-a-s-giqGaHBwT7yCh792W1jUEQ-NkAg-GSlQvynsgO8mL7hAw](https://github.com/user-attachments/assets/ca2885eb-cae5-4a18-90c1-461c349a7fb1)\n\n\nThis repository demonstrates how to modularize Terraform code for a scalable, manageable infrastructure deployment across multiple environments (e.g., dev, QA, production). The key idea is to break down the Terraform code into modules for various infrastructure components like networking, compute, security groups, load balancers, and NAT gateways. This modular approach minimizes manual changes and overhead when switching between environments.\n\n## Problem Overview\n\nIn typical infrastructure deployments, environments like dev, QA, and production might have different requirements (e.g., dev doesn’t need a load balancer or Route53). Managing these differences with a single Terraform codebase can lead to manual changes, which is inefficient. By breaking the code into modules, you can dynamically include/exclude components based on environment requirements, making the infrastructure easier to manage.\n\n## Solution\n\nWe break the infrastructure into the following modules:\n- **Network**: VPC, subnets, routing\n- **Compute**: EC2 instances (public and private)\n- **Security Groups (SG)**: For securing VPC resources\n- **NAT**: NAT gateway for private instance internet access\n- **ELB**: Elastic Load Balancers (optional)\n- **IAM**: Identity and Access Management\n\n### Folder Structure\n\n```\n/modules\n  ├── network\n  ├── compute\n  ├── sg\n  ├── nat\n  ├── elb\n  ├── iam\n/development\n  ├── main.tf\n  ├── variables.tf\n  ├── terraform.tfvars\n  └── ec2.tf\n/production\n  ├── infrastructure.tf\n  ├── variables.tf\n  ├── terraform.tfvars\n```\n\n## Step-by-Step Setup\n\n### 1. Create Network Module\n\n1. 
**Files in `/modules/network`:**\n   - `vpc.tf`: Defines the VPC and internet gateway.\n   - `public_subnets.tf`: Public subnets configuration.\n   - `private_subnets.tf`: Private subnets configuration.\n   - `routing.tf`: Routing tables for public and private subnets.\n   - `variables.tf`: Define necessary input variables.\n   - `outputs.tf`: Export important values (e.g., VPC ID, subnet IDs).\n   - `locals.tf`: Set local values for environment or naming conventions.\n\n2. **Import Network Module in Development:**\n   - In `/development/main.tf`, import the network module:\n     ```hcl\n     module \"dev_vpc_1\" {\n       source = \"../modules/network\"\n       # Specify the necessary variables\n       vpc_cidr = var.vpc_cidr\n       ...\n     }\n     ```\n\n3. **Deploy the Network Module:**\n   ```bash\n   cd development\n   terraform init\n   terraform fmt\n   terraform validate\n   terraform apply\n   ```\n\n### 2. Configure for Production\n\n- **Copy Files**: Copy the infrastructure setup from `development` to `production`.\n  - Ensure variable values are updated (e.g., CIDR blocks should not overlap between environments).\n\n- **Customize Values**: Modify `terraform.tfvars` and `variables.tf` in the `production` folder to match production settings (e.g., CIDR range, environment = \"production\").\n\n```bash\ncd production\nterraform init\nterraform fmt\nterraform apply\n```\n\n### 3. Add Security Groups Module\n\n1. **Create `/modules/sg`:**\n   - `sg.tf`: Security group configurations.\n   - `variables.tf`: Define necessary input variables.\n   - `outputs.tf`: Export security group IDs.\n\n2. **Import in Development:**\n   - Add the security group module to `development`'s `main.tf`:\n     ```hcl\n     module \"dev_sg_1\" {\n       source = \"../modules/sg\"\n       vpc_id = module.dev_vpc_1.vpc_id\n       ...\n     }\n     ```\n\n3. **Deploy SG Module:**\n   ```bash\n   cd development\n   terraform get\n   terraform apply\n   ```\n\n4. 
**Replicate for Production**: Similarly, copy the security group module to `production`, making necessary adjustments.\n\n### 4. EC2 (Compute) Module\n\n1. **Create `/modules/compute`:**\n   - `private_ec2.tf`: For private EC2 instances.\n   - `public_ec2.tf`: For public EC2 instances.\n   - `variables.tf`: Define EC2-related variables.\n   - `outputs.tf`: Export EC2 instance IDs or other resources.\n\n2. **Deploy in Development**: Add EC2 configuration in `development/ec2.tf`, referencing the module:\n   ```hcl\n   module \"dev_compute_1\" {\n     source = \"../modules/compute\"\n     vpc_id = module.dev_vpc_1.vpc_id\n     ...\n   }\n   ```\n\n3. **Replicate for Production**: Follow the same process for production, customizing as needed.\n\n### 5. NAT Gateway Module\n\n1. **Create `/modules/nat`:**\n   - `natgw.tf`: Defines the NAT gateway.\n   - `variables.tf`: Input variables like subnet ID.\n   - `outputs.tf`: Export NAT gateway ID.\n\n2. **Deploy NAT in Development and Production**:\n   - Ensure the NAT module is added in both environments, with appropriate changes in `terraform.tfvars`.\n\n### Final Steps\n\n- **Destroy**: To clean up, run the following in both environments:\n  ```bash\n  cd production\n  terraform destroy -auto-approve\n  cd development\n  terraform destroy -auto-approve\n  ```\n\n## Key Terraform Commands\n\n- **Format and Validate**:\n  ```bash\n  terraform fmt\n  terraform validate\n  ```\n- **Initialize**:\n  ```bash\n  terraform init\n  ```\n- **Apply Changes**:\n  ```bash\n  terraform apply\n  ```\n- **Check State**:\n  ```bash\n  terraform state list\n  ```\n\n## Notes on Output Values\n\nThe `outputs.tf` files in each module play a crucial role in passing data between modules. For example, the VPC module exports the `vpc_id`, which is consumed by the Security Group module and EC2 module. 
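\n\nAs an illustration, the wiring might look like this (the module, variable, and output names here are examples consistent with the steps above, not copied verbatim from the repo):\n\n```hcl\n# /modules/network/outputs.tf: the module exports the VPC ID\noutput \"vpc_id\" {\n  description = \"ID of the VPC created by the network module\"\n  value       = aws_vpc.default.id\n}\n\n# /development/main.tf: sibling modules consume it\nmodule \"dev_vpc_1\" {\n  source   = \"../modules/network\"\n  vpc_cidr = var.vpc_cidr\n}\n\nmodule \"dev_sg_1\" {\n  source = \"../modules/sg\"\n  vpc_id = module.dev_vpc_1.vpc_id # the network module's output feeds the SG module\n}\n```\n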
This modular approach helps ensure that all components are properly linked, and their dependencies are clear.\n\n## Conclusion\n\nThis repository demonstrates how to efficiently manage and deploy infrastructure across multiple environments using Terraform modules. By breaking infrastructure code into reusable modules, we reduce complexity, manual work, and potential errors, leading to a more scalable and maintainable solution.\n"
  },
  {
    "path": "Day 17 AWS-Terraform-Full-Course/README.md",
    "content": "![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-f__YY0bwSie2OkYBNrSyeQ-GV6ykntrRNKLu-6yjr3VXg](https://github.com/user-attachments/assets/64a1a02f-c8c8-4248-876e-685505d76e4b)\n\n\n# Day 17 Terraform Full Course Link here : https://youtu.be/bqvdpa649nU?si=EQJNm-VPDgypTkwc\n"
  },
  {
    "path": "Day 18 AWS-Terraform-Part-8 TerraformCloud/README.md",
    "content": "\n![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-f__YY0bwSie2OkYBNrSyeQ-GV6ykntrRNKLu-6yjr3VXg](https://github.com/user-attachments/assets/022eb8c9-67e4-4f71-b01c-2591e65ea62d)\n\n# Day 18 AWS-Terraform-Part-8 TerraformCloud - Covered under terraform full course. \n# TimeStamp Link : https://youtu.be/bqvdpa649nU?list=PLMj5OfHGyNU81vI77YRFg9WWvbGKqbyXD&t=23642\n"
  },
  {
    "path": "Day 19 AWS-Terraform-Part-9 GitLab-Pipeline/README.md",
"content": "# Day 19 Terraform Modules with GitLab\n\n![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-MdC5XT42QNySa2zI6fo6Sw-mIHViexFR9C60umSgtcnBg](https://github.com/user-attachments/assets/aa7fdce1-98ee-448a-9c96-343b0fbdba0d)\n\n\nComplete source files here: https://gitlab.com/saikiranpi1/modules-gitlab.git\n\n# Terraform - GitLab Integration\n\nThis repository contains instructions and YAML configurations for integrating Terraform with GitLab CI/CD, allowing for efficient infrastructure management and deployment.\n\n## Table of Contents\n- [Overview](#overview)\n- [Getting Started](#getting-started)\n- [GitLab CI Configuration](#gitlab-ci-configuration)\n- [Using tfenv](#using-tfenv)\n- [Installing GitLab Runner](#installing-gitlab-runner)\n- [Deploying an Ubuntu Server](#deploying-an-ubuntu-server)\n- [Cleaning Up](#cleaning-up)\n- [Troubleshooting](#troubleshooting)\n- [Conclusion](#conclusion)\n\n## Overview\n\nThis project demonstrates how to set up Terraform with GitLab CI/CD using YAML for configuration. We will focus on tasks such as pushing code to GitLab, setting up CI/CD variables, and deploying infrastructure.\n\n## Getting Started\n\n1. **Create a new GitLab project**:\n   - Go to your GitLab dashboard and click on \"New Project.\"\n   - Select \"Public\" and create the project.\n\n2. **Push your Terraform code**:\n   ```bash\n   git init\n   git add .\n   git commit -m \"Infra\"\n   git remote add origin <your-repo-url>\n   git push origin master\n   ```\n\n## GitLab CI Configuration\n\n1. **Access CI/CD Settings**:\n   - Navigate to your project, then go to `Settings` > `CI/CD`.\n\n2. **Upload Secure Files**:\n   - Under the \"Secure Files\" section, upload your PEM file.\n\n3. **Add CI/CD Variables**:\n   - Scroll to \"Variables\" and click \"Add.\"\n   - Add the following masked variables:\n     - `AWS_ACCESS_KEY`\n     - `AWS_SECRET_KEY`\n\n4. **Set Up a New GitLab Runner**:\n   - Navigate to `Runners` and select \"New project runner.\"\n   - Choose \"Linux\" and set the following:\n     - **Tags**: `terraform,AWS`\n     - **Description**: A brief description of your runner.\n     - **Timeout**: 600 seconds.\n   - Click \"Create Runner.\"\n\n## Using tfenv\n\nTo manage different Terraform versions easily, we will use `tfenv`. Follow these steps:\n\n1. **Install tfenv**:\n   - Follow the instructions available on the [tfenv GitHub page](https://github.com/tfutils/tfenv).\n\n2. **Install the Required Terraform Version**:\n   ```bash\n   sudo apt install unzip\n   tfenv list-remote   # Lists all available versions\n   tfenv install 1.5.5 # Installs the specified version\n   ```\n\n## Installing GitLab Runner\n\n1. **Install GitLab Runner**:\n   - Open your console and follow the installation commands provided on the [GitLab Runner page](https://docs.gitlab.com/runner/install/).\n\n2. **Register the Runner**:\n   - Enter the token and name for the runner, choose \"shell\" as the executor.\n\n3. **Modify Your Code and Push**:\n   - Make minor changes to your code and push it. This should trigger the CI/CD pipeline.\n\n4. **Run Commands as gitlab-runner**:\n   ```bash\n   cat /etc/passwd\n   sudo rm -r /home/gitlab-runner/.bash_logout\n   su - gitlab-runner  # Switch to gitlab-runner user\n   ```\n\n## Deploying an Ubuntu Server\n\nLog into the server and deploy the necessary infrastructure using your Terraform scripts.\n\n## Cleaning Up\n\nTo destroy the infrastructure, run:\n```bash\nterraform destroy -auto-approve\n```\n\nYou can use **Checkov**, a free tool, to scan your Terraform code for security issues:\n```bash\napt install -y python3-pip\npip3 install checkov\ncheckov -d .   # Scan all Terraform files in the current directory\n```\n\n## Troubleshooting\n\nIf you encounter errors:\n- Check the GitLab CI/CD pipeline logs for error messages.\n- Google any error codes for potential solutions.\n\n## Conclusion\n\nThis setup provides a streamlined approach to managing infrastructure with Terraform in a GitLab CI/CD environment. Feel free to customize the configurations as needed to fit your specific requirements.\n\nFor further assistance, refer to the [official Terraform documentation](https://www.terraform.io/docs/index.html) or [GitLab CI/CD documentation](https://docs.gitlab.com/ee/ci/).\n"
  },
  {
    "path": "Day 20 AWS-Packer/README.md",
    "content": "# Day 20 AWS-Packer\n\n![a-vibrant-and-eye-catching-youtube-thumbnail-with--CWD0OBoeRVO1Jw5QXUd3iw-PZaqUMYdQ0eS9Tv6GFm_VQ](https://github.com/user-attachments/assets/5cc2de07-938e-4197-8e07-c99bdcdd0180)\n\n\nHere's an outline to help you implement and visualize this process:\n\n### 1. **Introduction to Packer and Ansible**\n\n- **Packer**: A tool to create images for multiple platforms from a single source configuration.\n- **Ansible**: A configuration management tool used for automation, specifically post-deployment configuration.\n\n### 2. **Why Ansible?**\n\nAfter deploying infrastructure with tools like **Terraform**, configuration management is needed for more specific setups on the deployed resources. Here’s where **Ansible** comes in:\n\n- **Controller-Client Model**:\n  - **Controller**: The machine where Ansible commands are run.\n  - **Clients** (Nodes): Machines receiving configuration commands from the controller.\n\n- **No Client Software Needed**: Ansible only requires SSH and Python on the nodes, simplifying the setup.\n\n### 3. **Diagram of Ansible Setup**\nFor a visual, imagine:\n   - A **controller node** communicating with **client nodes** using SSH.\n   - Commands are sent from the controller, received by nodes, and executed without needing any additional software on the client side.\n\n### 4. **Task: AMI Creation and Deployment**\n   1. **Create an AMI Image** using Packer for a base instance.\n   2. **Deploy an Instance** with this AMI.\n   3. Verify functionality, ensuring services like Node Exporter (on port 9100) are working.\n\n### 5. 
**Steps to Install and Configure Ansible on Deployed Instances**\n   - **Install Ansible**:\n     - Refer to [Ansible documentation](https://docs.ansible.com/) for the latest installation steps.\n   - **Configuration File**:\n     - Run `sudo ansible-config init --disabled > ansible.cfg` in `/etc/ansible` to generate the config file.\n   - **Update Ansible Configurations**:\n     - Open the file in `nano` and use `ctrl+w` to search for each setting, then configure:\n       - Set `host_key_checking = false`.\n       - Define the `remote_user` as `ansibleadmin`.\n       - Define `private_key_file` as `/home/ansibleadmin/key.pem` (restrict the key to owner read-only, i.e., `chmod 400 key.pem`).\n\nFollowing these steps will provide a setup ready for deploying configurations across instances effectively using Ansible.\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/.gitignore",
    "content": ".terraform.lock.hcl\r\n.terraform/*\r\n6.ansible-playbook-nginx.yml\r\ninvfile*\r\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/1.provider.tf",
    "content": "provider \"aws\" {\r\n  region = var.aws_region\r\n}\r\n\r\nterraform {\r\n  required_version = \"<= 1.8.5\" #Forcing which version of Terraform needs to be used\r\n  required_providers {\r\n    aws = {\r\n      version = \"<= 6.0.0\" #Forcing which version of plugin needs to be used.\r\n      source  = \"hashicorp/aws\"\r\n    }\r\n  }\r\n  backend \"s3\" {\r\n    bucket         = \"workspacesbucket01\"\r\n    key            = \"Ansible.tfstate\"\r\n    region         = \"us-east-1\"\r\n    # dynamodb_table = \"-terraform-locks\"\r\n    encrypt        = true\r\n  }\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/10.locals.tf",
    "content": "#distinct takes a list and returns a new list with any duplicate elements removed.\r\n#toset takes a list will remove any duplicate elements and discard the ordering of the elements.\r\nlocals {\r\n  new_public_subnet_cidrs  = distinct(var.public_subnet_cidrs)\r\n  new_private_subnet_cidrs = distinct(var.private_subnet_cidrs)\r\n  new_environment          = lower(var.environment)\r\n  projid                   = format(\"%s-%s\", lower(var.vpc_name), lower(var.projid))\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/11.localfile_ansible_inventory.tf",
    "content": "resource \"local_file\" \"ansible-inventory-file\" {\r\n  content = templatefile(\"publicservers.tpl\",\r\n    {\r\n\r\n      testserver01    = aws_instance.webservers.0.public_ip\r\n      testserver02    = aws_instance.webservers.1.public_ip\r\n      testserver03    = aws_instance.webservers.2.public_ip\r\n      pvttestserver01 = aws_instance.webservers.0.private_ip\r\n      pvttestserver02 = aws_instance.webservers.1.private_ip\r\n      pvttestserver03 = aws_instance.webservers.2.private_ip\r\n    }\r\n  )\r\n  filename = \"${path.module}/invfile\"\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/12.localfile_ansible_inventory_yaml.tf",
    "content": "resource \"local_file\" \"ansible-inventory-file-yaml\" {\r\n  content = templatefile(\"publicservers_yaml.tpl\",\r\n    {\r\n\r\n      testserver01    = aws_instance.webservers.0.public_ip\r\n      testserver02    = aws_instance.webservers.1.public_ip\r\n      testserver03    = aws_instance.webservers.2.public_ip\r\n      pvttestserver01 = aws_instance.webservers.0.private_ip\r\n      pvttestserver02 = aws_instance.webservers.1.private_ip\r\n      pvttestserver03 = aws_instance.webservers.2.private_ip\r\n    }\r\n  )\r\n  filename = \"${path.module}/invfile.yaml\"\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/13.null-local-exec.tf",
    "content": "resource \"null_resource\" \"webservers\" {\r\n  provisioner \"local-exec\" {\r\n    command = <<EOH\r\n      sleep 10\r\n      ansible -i invfile pvt -m ping\r\n    EOH\r\n  }\r\n  depends_on = [local_file.ansible-inventory-file]\r\n}\r\n\r\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/14.outputs.tf",
    "content": "output \"vpc_id\" {\r\n  value = aws_vpc.default.id\r\n}\r\n\r\noutput \"vpc_arn\" {\r\n  value = aws_vpc.default.arn\r\n}\r\n\r\n# output \"subnet1_id\" {\r\n#   value = aws_subnet.subnet1-public.id\r\n# }\r\n\r\n\r\noutput \"sg_id\" {\r\n  value = aws_security_group.allow_all.id\r\n}\r\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/15.terraform.tfvars",
    "content": "aws_region           = \"us-east-1\"\r\nvpc_cidr             = \"10.37.0.0/16\"\r\nvpc_name             = \"Ansible-Vpc\"\r\nkey_name             = \"SecOps-Key\"\r\npublic_subnet_cidrs  = [\"10.37.1.0/24\", \"10.37.2.0/24\", \"10.37.3.0/24\"]    #List\r\nprivate_subnet_cidrs = [\"10.37.10.0/24\", \"10.37.20.0/24\", \"10.37.30.0/24\"] #List\r\nazs                  = [\"us-east-1a\", \"us-east-1b\", \"us-east-1c\"]          #List\r\nenvironment          = \"production\"\r\ninstance_type = {\r\n  development = \"t2.small\"\r\n  testing     = \"t2.small\"\r\n  production  = \"t2.small\"\r\n}\r\namis = {\r\n  us-east-1 = \"ami-0149b2da6ceec4bb0\" # Canonical, Ubuntu, 20.04 LTS, amd64 focal image\r\n  us-east-2 = \"ami-0430580de6244e02e\" # Canonical, Ubuntu, 20.04 LTS, amd64 focal image\r\n}\r\nprojid    = \"PHOENIX-123\"\r\nimagename = \"ami-0149b2da6ceec4bb0\"\r\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/16.variables.tf",
    "content": "variable \"aws_region\" { type = string }\r\nvariable \"amis\" { type = map(any) }\r\nvariable \"vpc_cidr\" { type = string }\r\nvariable \"vpc_name\" { type = string }\r\nvariable \"key_name\" { type = string }\r\nvariable \"public_subnet_cidrs\" { type = list(any) }\r\nvariable \"private_subnet_cidrs\" { type = list(any) }\r\nvariable \"azs\" { type = list(any) }\r\nvariable \"environment\" { type = string }\r\nvariable \"instance_type\" { type = map(any) }\r\nvariable \"projid\" { type = string }\r\nvariable \"imagename\" { type = string }\r\n\r\n\r\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/2.vpc.tf",
    "content": "resource \"aws_vpc\" \"default\" {\r\n  cidr_block           = var.vpc_cidr\r\n  enable_dns_hostnames = true\r\n  tags = {\r\n    Name              = var.vpc_name\r\n    Owner             = \"Saikiran Pinapathruni\"\r\n    environment       = local.new_environment\r\n    Terraform-Managed = \"Yes\"\r\n    ProjectID         = local.projid\r\n  }\r\n}\r\n\r\nresource \"aws_internet_gateway\" \"default\" {\r\n  vpc_id = aws_vpc.default.id\r\n  tags = {\r\n    Name              = \"${var.vpc_name}-IGW\"\r\n    Terraform-Managed = \"Yes\"\r\n    Env               = local.new_environment\r\n    ProjectID         = local.projid\r\n  }\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/3.public-subnets.tf",
    "content": "resource \"aws_subnet\" \"public-subnets\" {\r\n  #count             = 4 # 0 1 2\r\n  count             = length(local.new_public_subnet_cidrs)\r\n  vpc_id            = aws_vpc.default.id\r\n  cidr_block        = element(local.new_public_subnet_cidrs, count.index)\r\n  availability_zone = element(var.azs, count.index)\r\n  tags = {\r\n    Name              = \"${var.vpc_name}-PublicSubnet-${count.index + 1}\"\r\n    Terraform-Managed = \"Yes\"\r\n    Env               = local.new_environment\r\n    ProjectID         = local.projid\r\n  }\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/4.private-subnets.tf",
    "content": "resource \"aws_subnet\" \"private-subnets\" {\r\n  #count             = 4 # 0 1 2\r\n  count             = length(local.new_private_subnet_cidrs)\r\n  vpc_id            = aws_vpc.default.id\r\n  cidr_block        = element(local.new_private_subnet_cidrs, count.index)\r\n  availability_zone = element(var.azs, count.index)\r\n  tags = {\r\n    Name              = \"${var.vpc_name}-PrivateSubnet-${count.index + 1}\"\r\n    Terraform-Managed = \"Yes\"\r\n    Env               = local.new_environment\r\n    ProjectID         = local.projid\r\n  }\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/5.public-routing.tf",
    "content": "resource \"aws_route_table\" \"terraform-public\" {\r\n  vpc_id = aws_vpc.default.id\r\n\r\n  # route {\r\n  #   cidr_block = \"0.0.0.0/0\"\r\n  #   gateway_id = aws_internet_gateway.default.id\r\n  # }\r\n\r\n  tags = {\r\n    Name              = \"${var.vpc_name}-MAIN-RT\"\r\n    Terraform-Managed = \"Yes\"\r\n    Env               = local.new_environment\r\n    ProjectID         = local.projid\r\n  }\r\n}\r\n\r\n#VPC Peering Routes are getting recreated when we apply. To overcome this issue Routing Table\r\n#is created with out any routes & routes for igw,peering are created seperatly.\r\n#https://stackoverflow.com/questions/49174421/terraform-route-table-forcing-new-resource-every-apply\r\n\r\nresource \"aws_route\" \"igw-route\" {\r\n  route_table_id         = aws_route_table.terraform-public.id\r\n  destination_cidr_block = \"0.0.0.0/0\"\r\n  gateway_id             = aws_internet_gateway.default.id\r\n}\r\n\r\nresource \"aws_route_table_association\" \"terraform-public\" {\r\n  #count             = 4 # 0 1 2\r\n  count = length(local.new_public_subnet_cidrs)\r\n  #Using * is called Splat Syntax\r\n  subnet_id      = element(aws_subnet.public-subnets.*.id, count.index)\r\n  route_table_id = aws_route_table.terraform-public.id\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/6.private-routing.tf",
    "content": "resource \"aws_route_table\" \"terraform-private\" {\r\n  vpc_id = aws_vpc.default.id\r\n\r\n  tags = {\r\n    Name              = \"${var.vpc_name}-Private-RT\"\r\n    Terraform-Managed = \"Yes\"\r\n    Env               = local.new_environment\r\n    ProjectID         = local.projid\r\n  }\r\n}\r\n\r\nresource \"aws_route_table_association\" \"terraform-private\" {\r\n  #count             = 4 # 0 1 2\r\n  count = length(local.new_private_subnet_cidrs)\r\n  #Using * is called Splat Syntax\r\n  subnet_id      = element(aws_subnet.private-subnets.*.id, count.index)\r\n  route_table_id = aws_route_table.terraform-private.id\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/7.ec2.tf",
    "content": "data \"aws_ami\" \"my_ami\" {\r\n  most_recent = true\r\n  name_regex  = \"^DevSecOps\"\r\n  owners      = [\"211125710812\"]\r\n}\r\n\r\n\r\nresource \"aws_instance\" \"webservers\" {\r\n  #count                       = local.new_environment == \"production\" ? 3 : 1\r\n  count                       = 3\r\n  ami                         = data.aws_ami.my_ami.id\r\n  instance_type               = lookup(var.instance_type, local.new_environment)\r\n  key_name                    = var.key_name\r\n  subnet_id                   = element(aws_subnet.public-subnets.*.id, count.index)\r\n  vpc_security_group_ids      = [\"${aws_security_group.allow_all.id}\"]\r\n  associate_public_ip_address = true\r\n  tags = {\r\n    Name              = \"${var.vpc_name}-PublicServer-${count.index + 1}\"\r\n    Terraform-Managed = \"Yes\"\r\n    Env               = local.new_environment\r\n    ProjectID         = local.projid\r\n    ManagedBy         = \"Terraform\"\r\n  }\r\n}\r\n\r\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/8.sg.tf",
    "content": "resource \"aws_security_group\" \"allow_all\" {\r\n  name        = \"allow_all\"\r\n  description = \"Allow all inbound traffic\"\r\n  vpc_id      = aws_vpc.default.id\r\n\r\n  ingress {\r\n    from_port   = 0\r\n    to_port     = 0\r\n    protocol    = \"-1\"\r\n    cidr_blocks = [\"0.0.0.0/0\"]\r\n  }\r\n\r\n  ingress {\r\n    from_port   = 22\r\n    to_port     = 22\r\n    protocol    = \"tcp\"\r\n    cidr_blocks = [\"10.1.1.0/32\"]\r\n  }\r\n\r\n  ingress {\r\n    from_port   = 3389\r\n    to_port     = 3389\r\n    protocol    = \"tcp\"\r\n    cidr_blocks = [\"10.1.1.0/32\"]\r\n  }\r\n\r\n  ingress {\r\n    from_port   = 3306\r\n    to_port     = 3306\r\n    protocol    = \"tcp\"\r\n    cidr_blocks = [\"10.2.1.0/32\"]\r\n  }\r\n\r\n\r\n\r\n  egress {\r\n    from_port   = 0\r\n    to_port     = 0\r\n    protocol    = \"-1\"\r\n    cidr_blocks = [\"0.0.0.0/0\"]\r\n  }\r\n  # lifecycle {\r\n  #   ignore_changes = [\r\n  #     ingress,\r\n  #   ]\r\n  # }\r\n}\r\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/9.vpc-peering.tf",
    "content": "data \"aws_vpc\" \"ansible_vpc\" {\r\n  id = \"vpc-036e5c5d11bdf83de\"\r\n}\r\n\r\ndata \"aws_route_table\" \"ansible_vpc_rt\" {\r\n  subnet_id = \"subnet-05597e96c163e70fd\"\r\n  #If subnet_id giving errors use route table id as below\r\n  #route_table_id = data.aws_route_table.ansible_vpc_rt.id\r\n}\r\n\r\nresource \"aws_vpc_peering_connection\" \"ansible-vpc-peering\" {\r\n  peer_vpc_id = data.aws_vpc.ansible_vpc.id\r\n  vpc_id      = aws_vpc.default.id\r\n  auto_accept = true\r\n  accepter {\r\n    allow_remote_vpc_dns_resolution = true\r\n  }\r\n\r\n  requester {\r\n    allow_remote_vpc_dns_resolution = true\r\n  }\r\n\r\n  tags = {\r\n    Name = \"Ansible-${var.vpc_name}-Peering\"\r\n  }\r\n}\r\n\r\nresource \"aws_route\" \"peering-to-ansible-vpc\" {\r\n  route_table_id            = aws_route_table.terraform-public.id\r\n  destination_cidr_block    = \"10.0.0.0/16\"\r\n  vpc_peering_connection_id = aws_vpc_peering_connection.ansible-vpc-peering.id\r\n  #depends_on                = [aws_route_table.terraform-public]\r\n}\r\n\r\nresource \"aws_route\" \"peering-from-ansible-vpc\" {\r\n  route_table_id            = data.aws_route_table.ansible_vpc_rt.id\r\n  destination_cidr_block    = \"10.37.0.0/16\"\r\n  vpc_peering_connection_id = aws_vpc_peering_connection.ansible-vpc-peering.id\r\n  #depends_on                = [aws_route_table.terraform-public]\r\n}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/Playbooks",
    "content": "# CHECK HERE FOR PLAYBOOKS : https://github.com/saikiranpi/Ansible-Testing.git\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/README.md",
    "content": "# Day 21 AWS-Ansible-Part-1\n\n![image](https://github.com/user-attachments/assets/5cec40df-0b9a-4757-8399-d2fbe42fb064)\n\n# Project Setup with Packer, Ansible, and Terraform\n\n## Overview\n\nIn this project, we utilize several DevOps tools to set up, configure, and manage infrastructure and application deployment:\n- **Packer**: Used for building machine images.\n- **Ansible**: Configuration management tool, enabling automated configuration of our infrastructure post-deployment.\n- **Terraform**: Infrastructure as Code (IaC) tool for provisioning resources.\n\nWe’ll walk through how to integrate **Ansible** with **Terraform** to manage configurations on an infrastructure that's already deployed, setting up an Ansible Controller and ensuring communication between it and the client servers.\n\n## Architecture and Components\n\n1. **Ansible Controller**: Runs all configuration commands on the clients/nodes.\n2. **Ansible Clients**: Servers that Ansible manages remotely.\n\n**Note**: Ansible doesn’t require client software installation, as it connects to clients via SSH and Python.\n\n### Diagram\n- [Add a diagram here depicting the VPC peering, Ansible Controller, and Ansible Clients.]\n\n## Task Workflow\n\n### Step 1: Provisioning with Terraform\n\n1. **Modify the Terraform Configuration**:\n   - Update `ec2.tf` with the correct AWS account number.\n   - Set up **VPC Peering** to allow communication between the Ansible Controller VPC and the client VPC. Update the **Route Tables** accordingly.\n\n2. **Deploy Resources**:\n   - Use `terraform init`, `terraform fmt`, `terraform validate`, and finally `terraform apply -var-file=15.terraform.tfvars` to deploy the infrastructure.\n   - Verify that the public and private IPs are assigned correctly.\n\n### Step 2: Configure Ansible Inventory\n\n1. 
**Inventory File (invfile)**:\n   - This is a critical file listing all servers or hosts Ansible will manage.\n   - It identifies the target machines, making it easy for Ansible to know where to apply configuration changes.\n\n### Step 3: Set Up Ansible Controller\n\n1. **Prepare SSH Access**:\n   - Place your SSH key at `/etc/ansible/ansiblekey.pem` on the Ansible controller and set permissions using `chmod 600`.\n   \n2. **Install Terraform on the Controller**:\n   - Clone the Git repository in the root location of the controller.\n   - Navigate to `ansiblecore`, and initialize Terraform with `terraform init`.\n\n3. **Validate Connectivity**:\n   - Use Ansible to test connectivity with the client servers:\n     ```bash\n     ansible -i invfile pvt -m ping\n     ```\n\n### Step 4: Working with Ad-Hoc Commands in Ansible\n\n1. **Run Ad-Hoc Commands**:\n   - To check disk space across servers:\n     ```bash\n     ansible -i invfile pvt -m shell -a \"df -h\"\n     ```\n   - To filter for root volume only:\n     ```bash\n     ansible -i invfile pvt -m shell -a \"df -h | grep '/dev/root'\"\n     ```\n   - Increase verbosity by appending `-v`, `-vv`, or `-vvv` for debugging:\n     ```bash\n     ansible -i invfile pvt -m shell -a \"df -h | grep '/dev/root'\" -vv\n     ```\n\n2. **Target Specific Servers**:\n   - For example, to exclude a specific server:\n     ```bash\n     ansible -i invfile 'all:!server01' -m shell -a \"df -h | grep '/dev/root'\" -v\n     ```\n\n### Step 5: Using Ansible Playbooks for Complex Tasks\n\n1. **Create Playbooks Folder**:\n   - Organize playbooks in the `playbooks` folder.\n\n2. **Sample Nginx Playbook**:\n   - The sample playbook installs nginx on the client servers.\n   - Run syntax checks with:\n     ```bash\n     ansible-playbook -i invfile playbooks/1.nginx/o.sample-playbook.yml --syntax-check\n     ```\n\n3. 
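The `df -h | grep '/dev/root'` filter used in the ad-hoc commands above can be rehearsed offline against canned output; the sample data below is illustrative, not taken from a real server:

```shell
# Canned `df -h` output (illustrative), so the ad-hoc filter can be tried offline
printf '/dev/root  20G  5.2G  15G  26%% /\ntmpfs  492M  0  492M  0%% /dev/shm\n' > /tmp/df_sample

# The same filter the ad-hoc command runs on every node
grep '/dev/root' /tmp/df_sample

# awk variant: print only the usage percentage of the root volume
awk '/\/dev\/root/ {print $5}' /tmp/df_sample
```

The `awk` variant trims the output down to the one field you usually care about across a fleet, e.g. `26%` for the sample line.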
**Run Playbooks**:\n   - Deploy nginx using:\n     ```bash\n     ansible-playbook -i invfile playbooks/1.nginx/1.nginx-local.yml -vvv\n     ```\n\n4. **Remote Module Usage**:\n   - To copy files that already exist on the managed nodes, use the `copy` module with `remote_src: yes`. To remove unnecessary files:\n     ```bash\n     ansible -i invfile pvt -m shell -a \"rm -rf /var/www/html/index.nginx-debian.html\" --become\n     ```\n\n### Step 6: User Management\n\n- Run the user creation playbook:\n  ```bash\n  ansible-playbook -i invfile playbooks/1.nginx/5.user_creation.yml -vv\n  ```\n\n### Step 7: Redis Caching (Optional)\n\n- Use Redis to cache Ansible facts for environments with a large number of servers:\n  ```bash\n  ansible -i invfile all -m setup\n  ```\n\n### Final Steps\n\n1. **Push Code Changes**:\n   - Regularly push updates from your local machine to Git.\n\n2. **Destroying Resources**:\n   - Use Terraform to destroy resources if needed:\n     ```bash\n     terraform destroy -var-file=15.terraform.tfvars\n     ```\n"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/publicservers.tpl",
    "content": "[pub]\r\nserver01 ansible_port=22 ansible_host=${testserver01}  ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem \r\nserver02 ansible_port=22 ansible_host=${testserver02} ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem\r\nserver03 ansible_port=22 ansible_host=${testserver03} ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem\r\n\r\n[pvt]\r\ntestserver01 ansible_port=22 ansible_host=${pvttestserver01}  ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem \r\ntestserver02 ansible_port=22 ansible_host=${pvttestserver02} ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem\r\ntestserver03 ansible_port=22 ansible_host=${pvttestserver03} ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem\r\n\r\n[pip]\r\n${testserver01}\r\n${testserver02}\r\n${testserver03}"
  },
  {
    "path": "Day 21 AWS-Ansible-Part-1/publicservers_yaml.tpl",
    "content": "all:\r\n  hosts:\r\n    ${testserver01}:\r\n    ${testserver02}:\r\n    ${testserver03}:\r\n   \r\n  children:\r\n    pub:\r\n     hosts:\r\n       server01:\r\n         ansible_port: 22\r\n         ansible_host: ${testserver01}\r\n         ansible_user: ubuntu\r\n         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem\r\n       server02:\r\n         ansible_port: 22\r\n         ansible_host: ${testserver02}\r\n         ansible_user: ubuntu\r\n         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem\r\n       server03:\r\n         ansible_port: 22\r\n         ansible_host: ${testserver03}\r\n         ansible_user: ubuntu\r\n         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem\r\n    pvt:\r\n     hosts:\r\n       testserver01:\r\n         ansible_port: 22\r\n         ansible_host: ${pvttestserver01}\r\n         ansible_user: ubuntu\r\n         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem\r\n       testserver02:\r\n         ansible_port: 22\r\n         ansible_host: ${pvttestserver02}\r\n         ansible_user: ubuntu\r\n         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem\r\n       testserver03:\r\n         ansible_port: 22\r\n         ansible_host: ${pvttestserver03}\r\n         ansible_user: ubuntu\r\n         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem\r\n    pip:\r\n     hosts:\r\n       ${testserver01}:\r\n       ${testserver02}:\r\n       ${testserver03}:"
  },
  {
    "path": "Day 22 AWS-Ansible-Part-2/README.md",
    "content": "![an-eye-catching-image-with-the-glossy-text-ansible-29br2wpbTleCfuUILMcoiA-q17GtADBT7CWaIP5UT9zHQ](https://github.com/user-attachments/assets/a9d0ef13-a6c4-4b52-8666-f177ff397e69)\n\n\n\n\n# Ansible Redis & Vault Setup\n#Complete Repo Here: https://github.com/saikiranpi/Ansible-Testing.git\n\nThis repository contains Ansible playbooks to configure Redis caching for storing Ansible facts and demonstrates how to use Ansible Vault to securely manage sensitive information. This is a step-by-step guide to setting up Redis as a fast storage for Ansible facts, managing configurations with handlers, and using Ansible Vault to secure sensitive data.\n\n## Prerequisites\n\n- Ansible installed on the controller node\n- Python3 and Redis installed on the target servers\n- Proper SSH access and configured inventory file (`invfile`)\n\n---\n\n### Step 1: Initial Setup and Verification\n\n1. **Remove Old Playbooks:** Delete any previous playbook versions.\n2. **Copy New Playbook:** Paste the latest playbook to the Ansible playbooks location.\n3. **Test Connections:** Run a basic ping test to ensure connectivity:\n   ```bash\n   ansible -i invfile pvt -m ping\n   ansible -i invfile pub -m ping\n   ```\n\n### Step 2: Collecting Facts with Redis Caching\n\n**Collect Ansible Facts:** Use the following command to gather facts on `tstserver01`:\n   ```bash\n   ansible -i invfile tstserver01 -m setup\n   ```\n   \nTo reduce memory usage, configure Redis as an external caching server for storing these facts.\n\n#### Redis Configuration\n\n1. **Playbooks and Configuration Files:**\n   - `redis.config`: Specifies the IP address to bind (use public IP if necessary).\n   - `redis.service`: Ensures Redis service starts.\n   - `redis.yml`: Runs the Redis setup.\n\n2. **Run Playbook on Controller:**\n   ```bash\n   ansible-playbook -i invfile playbooks/2/redis.yaml --syntax-check -v\n   ```\n\n3. 
**Verify Installation on Test Server:**\n   ```bash\n   systemctl status redis\n   ```\n\n#### Handlers\n\nHandlers ensure the Redis service restarts only when necessary (if there are changes in `redis.config` or `redis.service`).\n\n### Step 3: Fetch and Store Files\n\nOnce Redis is configured:\n1. **Backup File Creation:** Backups are created and saved under `/tmp` on the server.\n2. **Download Backup File:** Use Ansible to fetch backup files from the test server to your local machine.\n\n```bash\nansible -i invfile all -m setup\n```\n\n### Step 4: Secure Sensitive Information with Ansible Vault\n\nAnsible Vault is used to manage sensitive data like AWS credentials securely.\n\n1. **Create Vault File:**\n   ```bash\n   ansible-vault create aws_creds\n   ```\n   Insert your AWS credentials (access key and secret key).\n\n2. **Encrypt and Decrypt Files:**\n   - Encrypt the file:\n     ```bash\n     ansible-vault encrypt aws_creds\n     ```\n   - Decrypt the file:\n     ```bash\n     ansible-vault decrypt aws_creds\n     ```\n\n3. **Run Playbook Using Vault:**\n   ```bash\n   ansible-playbook -i invfile playbooks/vault/vaulttesting.yml -v\n   ```\n\n### Step 5: Handling Failures with Block and Rescue\n\nDefine custom error handling with `block` and `rescue` in your playbooks to ensure playbook execution doesn’t halt due to failures.\n\n### Step 6: Secure Configuration with Vault Password File\n\n1. 
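A vault password file is nothing more than a plain-text file locked down to its owner; a minimal sketch using a temporary path and a placeholder value (not the real `/root/vaultpass` or a real secret):

```shell
# Sketch of the vault password file setup, using a temp path instead of
# /root/vaultpass (the password below is a placeholder, not a real secret)
vaultpass=/tmp/vaultpass_demo
printf 'example-vault-pass\n' > $vaultpass
chmod 600 $vaultpass

# Verify: only the owner can read or write the file
stat -c '%a' $vaultpass
```

The `stat` call should report `600`; Ansible will then read this file instead of prompting for the vault password.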
**Set up Vault Password File**:\n   - Create a vault password file, `/root/vaultpass`, and set permissions:\n     ```bash\n     chmod 600 /root/vaultpass\n     ```\n   - Update `ansible.cfg` to include the vault password file:\n     ```ini\n     [defaults]\n     vault_password_file=/root/vaultpass\n     ```\n\n### Next Steps\n\n- Explore the difference between shell, command, and raw modules in Ansible.\n- Automate playbook runs with a Cron Job to periodically update facts.\n\n### License\n\nThis project is licensed under the MIT License.\n\n---\n\n## Pending Task (from the video)\n\nOn the Ansible controller:\n\n```bash\ncd /etc/ansible\nnano ansible.cfg\n```\n\nPaste the following under `[defaults]` (it should look like this):\n\n```ini\ngathering = smart\nfact_caching_timeout = 86400\nfact_caching = redis\nfact_caching_prefix = ansible_DevSecOps_Saikiran\nfact_caching_connection = PASTE-YOUR-CLIENT(TESTSERVER01)-PUBLICIP-HERE:6379:0\n```\n\n![image](https://github.com/user-attachments/assets/5a3e46dd-2534-4b97-8fff-0a380c747433)\n\nSave and exit (`Ctrl+X`, `Y`, Enter), then install the Redis Python client on the controller:\n\n```bash\napt update\napt install -y python3-pip\npip3 install redis\n```\n\nOn the controller, gather facts so they are written to the cache:\n\n```bash\nansible -i invfile pvt -m setup\n```\n\nOn the client (testserver01), confirm the cached facts:\n\n```bash\nredis-cli\nKEYS *\n```\n"
  },
  {
    "path": "Day 23 AWS-Ansible-Part-3/README.md",
    "content": "# Day 23 AWS-Ansible-Part-3\n\n![a-3d-render-of-a-glowing-ansible-logo-below-the-lo-4QgGoilXQ36n-8iyPqrNXQ-mBzRZGehQfeGRujEXVxpTQ](https://github.com/user-attachments/assets/240ba7fd-de4a-4f64-9e16-f36c61ca5720)\n\n# Complete Code here : https://github.com/saikiranpi/Ansible-Testing\n\n---\n\n# Ansible Jinja2 Templating with MySQL and Nginx Playbooks\n\nThis project demonstrates the use of Jinja2 templates in Ansible to deploy and configure services on multiple servers. It includes examples of pre- and post-tasks, as well as how to manage MySQL and Nginx configurations using Ansible playbooks.\n\n## Project Setup\n\n1. **Initialize Ansible Configuration**\n   - Navigate to the Ansible directory:\n     ```bash\n     cd /etc/ansible/\n     ```\n   - Generate the default Ansible configuration:\n     ```bash\n     ansible-config init --disabled > ansible.cfg\n     ```\n   - Modify `ansible.cfg` for common settings:\n     ```bash\n     nano ansible.cfg\n     ```\n   - Update the following values:\n     ```ini\n     host_key_checking = False\n     remote_user = ansibleadmin\n     private_key_file = /home/ansibleadmin/key.pem\n     ```\n\n2. **Initialize and Apply Terraform**\n   - Ensure you are in the correct directory and apply the Terraform configuration to set up your infrastructure:\n     ```bash\n     terraform init\n     terraform apply\n     ```\n\n## Jinja2 Templating with Nginx\n\nThe `nginx-jinja2.yml` playbook uses Jinja2 templates to configure Nginx.\n\n1. Run the Nginx playbook:\n   ```bash\n   ansible-playbook -i invfile nginx-jinja2.yml -v\n   ```\n2. Once the playbook is complete, check the public IP of the server to verify that Nginx is running.\n\n## MySQL Setup with Jinja2\n\nThis section explains how to install and configure MySQL using Ansible and Jinja2 templates. All variable values are defined within the configuration file.\n\n1. 
Run the MySQL playbook:\n   ```bash\n   ansible-playbook -i invfile playbooks/mysql-jinja2.yml\n   ```\n2. Verify MySQL service status:\n   ```bash\n   ansible -i invfile pvt -m shell -a \"service mysql status\"\n   ```\n3. Once the MySQL service is running, log in to the server and confirm that you can access MySQL databases:\n   ```sql\n   mysql> SHOW DATABASES;\n   ```\n4. Add data to the `myflixdb` database:\n   ```sql\n   USE myflixdb;\n   SHOW TABLES;\n   SELECT * FROM movies;\n   ```\n\n## Pre-Tasks and Post-Tasks\n\nPre-tasks and post-tasks are used to prepare the system before the main tasks or clean up afterward.\n\n### Example Task: Checking `/tmp` Folder\n\n1. Run the playbook with pre-tasks and post-tasks:\n   ```bash\n   ansible-playbook -i invfile playbooks/pre_post_tasks.yml\n   ```\n\n## Running the Playbooks on Multiple Servers\n\nIf you need to run these playbooks across 100 or more servers, Ansible's inventory and parallel execution capabilities make this straightforward. Update your inventory file (`invfile`) with the list of servers, and then run the playbooks with the inventory specified.\n\n## Git Commands for Version Control\n\n1. To push any changes to your playbook repository:\n   ```bash\n   git push\n   ```\n2. 
To pull the latest updates:\n   ```bash\n   git pull\n   ```\n\n## File Structure\n\n```\n/etc/ansible/\n├── ansible.cfg               # Ansible configuration file\n├── invfile                   # Inventory file listing server IPs or hostnames\n├── playbooks/\n│   ├── nginx-jinja2.yml      # Nginx playbook using Jinja2 template\n│   ├── mysql-jinja2.yml      # MySQL playbook using Jinja2 template\n│   └── pre_post_tasks.yml    # Playbook with pre-tasks and post-tasks\n└── templates/\n    ├── nginx.j2              # Nginx configuration template\n    └── mysql.j2              # MySQL configuration template\n```\n\n## Requirements\n\n- Ansible 2.9+\n- Terraform (if using for infrastructure setup)\n- SSH access to the target servers\n\n## Usage Notes\n\nThis project is suitable for dynamic and scalable server setups. With Jinja2 templating, you can easily customize configurations for different environments or requirements, making it highly adaptable for both development and production needs.\n\n---\n"
  },
  {
    "path": "Day 24 Ansible-Part-4 DynamicInventory_AWX/README.md",
    "content": "# Ansible Dynamic Inventory and Ansible Tower\n\n\"Anible Dynamic Inventory\" title with Attractive Font for youtube Thumbnail \n\nThis guide explains how to use **Ansible Dynamic Inventory**  for managing dynamic environments, such as those involving auto-scaling groups. Unlike static inventory, dynamic inventory adapts to infrastructure changes, such as scaling up or down during load variations.\n\n---\n\n## Overview\n\n### Static vs Dynamic Use Case\n\n- **Static Use Case**: Targets predefined servers without HA (High Availability) or auto-scaling. Servers remain fixed, without scaling up or down.\n- **Dynamic Use Case**: Ideal for environments with auto-scaling groups. Servers scale automatically based on load, requiring a dynamic inventory for effective management.\n\n---\n\n## Prerequisites\n\n1. **Install Required Tools**:\n   ```bash\n   sudo apt-get update\n   sudo apt-get install python3-pip jq -y\n   sudo pip3 install boto3\n   sudo apt install -y awscli\n   aws --version\n   ```\n\n2. **Configure Ansible**:\n   - Navigate to the Ansible configuration directory:\n     ```bash\n     cd /etc/ansible\n     ```\n   - Back up the `ansible.cfg` file:\n     ```bash\n     cp ansible.cfg ansible.cfg.bak\n     ```\n   - Edit the `ansible.cfg` file and enable the **inventory plugins**:\n     ```bash\n     nano ansible.cfg\n     ```\n     Locate `[inventory]` and update as needed.\n\n3. **Create EC2 Plugin File**:\n   - Create a new file for the EC2 plugin:\n     ```bash\n     nano aws_ec2.yaml\n     ```\n   - Paste the following configuration:\n     ```yaml\n     plugin: aws_ec2\n     regions:\n       - us-east-1\n     keyed_groups:\n       - key: tags\n         prefix: tag\n       - prefix: instance_type\n         key: instance_type\n       - key: placement.region\n         prefix: aws_region\n     ```\n\n---\n\n## Steps to Use Dynamic Inventory\n\n### Deploy Infrastructure First\n1. 
Validate the dynamic inventory:\n   ```bash\n   ansible-inventory -i /etc/ansible/aws_ec2.yaml --list\n   ansible-inventory -i /etc/ansible/aws_ec2.yaml --list | jq\n   ```\n2. Test connectivity using tags:\n   ```bash\n   ansible -i /etc/ansible/aws_ec2.yaml tag_terraform_managed_yes -m ping\n   ```\n\n### Target Specific Resources\n- Set the dynamic inventory path:\n  ```bash\n  export dynamic='/etc/ansible/aws_ec2.yaml'\n  ```\n- Example command to run on specific instance types:\n  ```bash\n  ansible -i $dynamic instance_type_t2_small -m shell -a \"df -h\"\n  ```\n\n### Run Playbooks\n1. Create a playbook targeting specific tags:\n   - Edit or create the playbook under the `dynamic_inventory` folder:\n     ```bash\n     nano dynamic_nginx-jinja2.yaml\n     ```\n   - Update the `hosts` to:\n     ```yaml\n     hosts: tag_managedby_terraform\n     ```\n2. Run the playbook:\n   ```bash\n   ansible-playbook -i $dynamic playbook/dynamic_inventory/dynamic_nginx.yaml\n   ```\n\n3. Replace `nginx` with `mysql` or other playbooks as needed.\n\n---\n\n## Git Workflow for Dynamic Inventory\n1. Create and switch to a new branch:\n   ```bash\n   git checkout -b dynamic_inventory\n   ```\n2. Push changes to remote:\n   ```bash\n   git push origin dynamic_inventory\n   ```\n3. Pull updates to the local repo:\n   ```bash\n   git pull\n   ```\n\n---\n\n## Auto-Scaling Integration\nWhen an auto-scaling group provisions instances, the dynamic inventory automatically updates to target the new resources. Verify using:\n```bash\nansible-inventory -i aws_ec2.yaml --list\nansible-inventory -i aws_ec2.yaml --graph\n```\n\n---\n\n## Example Playbook Execution\n1. Modify the number of instances in your Terraform configuration:\n   ```bash\n   terraform apply -var-file=\"vars.tfvars\" -auto-approve\n   ```\n2. Run the playbook:\n   ```bash\n   ansible-playbook -i /etc/ansible/aws_ec2.yaml playbook/dynamic_inventory/dynamic_nginx.yaml\n   ```\n3. 
Validate with the updated inventory.\n\n---\n\n## Notes\n- Ensure that the `ansible.cfg` file is correctly configured for plugins.\n- Use `jq` to format and verify inventory JSON outputs.\n- Replace line endings with `LF` if issues arise during playbook execution.\n\nEnd of Dynamic Inventory.\n"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/0-steps.sh",
    "content": "1\nsudo certbot certonly --manual --preferred-challenges=dns --key-type rsa \\\n    --email pinapathruni.saikiran@gmail.com --server https://acme-v02.api.letsencrypt.org/directory \\\n    --agree-tos -d *.cloudvishwakarma.in\n\n# Certificate is saved at: /etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem\n# Key is saved at:         /etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem\n\n+++++IF ISSUE+++++\n\nfree -m\ntop\n#DRY-RUN\ncertbot certonly --dry-run --manual --preferred-challenges=dns --key-type rsa \\\n    --email pinapathruni.saikiran@gmail.com --server https://acme-v02.api.letsencrypt.org/directory \\\n    --agree-tos -d *.cloudvishwakarma.in\n\n+++++IF ISSUE+++++\n\n2\napt update && apt install -y unzip net-tools\n\n3\nwget https://releases.hashicorp.com/vault/1.13.2/vault_1.13.2_linux_amd64.zip\nunzip vault_1.13.2_linux_amd64.zip\ncp vault /usr/bin/vault\nmkdir -p /etc/vault\nmkdir -p /var/lib/vault/data\nvault version\n\n4\nnano config.hcl\ncp config.hcl /etc/vault/config.hcl\n\n5\nnano /etc/systemd/system/vault.service\n\n6\nsudo systemctl daemon-reload\nsudo systemctl stop vault\nsudo systemctl start vault\nsudo systemctl enable vault\nsudo systemctl status vault --no-pager\n\n7 #VAULT STATUS FROM CLI\nps -ef | grep -i vault | grep -v grep\n\n8\nexport VAULT_ADDR=https://kmsvault.cloudvishwakarma.in:8200\necho \"export VAULT_ADDR=https://kmsvault.cloudvishwakarma.in:8200\" >>~/.bashrc\n\nvault status\n\n9\nvault operator init | tee -a /etc/vault/init.file\n\n10\nvault operator init | tee -a /etc/vault/init.file\n"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/1-config.hcl",
    "content": "disable_cache = true\ndisable_mlock = true\nui            = true\nlistener \"tcp\" {\n  address                  = \"0.0.0.0:8200\"\n  tls_disable              = 0\n  tls_cert_file            = \"/etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem\"\n  tls_key_file             = \"/etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem\"\n  tls_disable_client_certs = \"true\"\n\n}\nstorage \"file\" {\n  path = \"/var/lib/vault/data\"\n}\napi_addr                = \"https://kmsvault.cloudvishwakarma.in:8200\"\nmax_lease_ttl           = \"10h\"\ndefault_lease_ttl       = \"10h\"\ncluster_name            = \"vault\"\nraw_storage_endpoint    = true\ndisable_sealwrap        = true\ndisable_printable_check = true"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/2-config-kms.hcl",
    "content": "disable_cache = true\ndisable_mlock = true\nui            = true\nlistener \"tcp\" {\n  address                  = \"0.0.0.0:8200\"\n  tls_disable              = 0\n  tls_cert_file            = \"/etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem\"\n  tls_key_file             = \"/etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem\"\n  tls_disable_client_certs = \"true\"\n\n}\nstorage \"s3\" {\n  bucket = \"workspacesbucket01\"\n}\n\nseal \"awskms\" {\n  region     = \"us-east-1\"\n  kms_key_id = \"KMSID here\"\n  endpoint   = \"kms.us-east-1.amazonaws.com\"\n}\n\napi_addr                = \"https://kmsvault.cloudvishwakarma.in:8200\"\nmax_lease_ttl           = \"10h\"\ndefault_lease_ttl       = \"10h\"\ncluster_name            = \"vault\"\nraw_storage_endpoint    = true\ndisable_sealwrap        = true\ndisable_printable_check = true\n\n\n"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/2-vault.service",
    "content": "[Unit]\nDescription=HashiCorp Vault - A tool for managing secrets\nDocumentation=https://www.vaultproject.io/docs/\nRequires=network-online.target\nAfter=network-online.target\nConditionFileNotEmpty=/etc/vault/config.hcl\n\n[Service]\nProtectSystem=full\nProtectHome=read-only\nPrivateTmp=yes\nPrivateDevices=yes\nSecureBits=keep-caps\nAmbientCapabilities=CAP_IPC_LOCK\nNoNewPrivileges=yes\nExecStart=/usr/bin/vault server -config=/etc/vault/config.hcl\nExecReload=/bin/kill --signal HUP\nKillMode=process\nKillSignal=SIGINT\nRestart=on-failure\nRestartSec=5\nTimeoutStopSec=30\nStartLimitBurst=3\nLimitNOFILE=65536\n\n[Install]\nWantedBy=multi-user.target\n"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/README.md",
    "content": "# Day 27 HashicorpVault AWSIntegration\nBelow is a structured GitHub repository content outline and README for the integration of HashiCorp Vault with Ansible, based on the provided instructions:\n\n---\n\n### Repository Structure\n\n```plaintext\nHashiCorp-Vault-Ansible-Integration/\n├── README.md\n├── terraform/\n│   ├── main.tf\n│   ├── variables.tf\n│   ├── outputs.tf\n├── vault/\n│   ├── config.hcl\n│   ├── config-kms.hcl\n│   ├── init.file\n├── ansible/\n│   ├── playbook.yml\n│   ├── vault_secret_retrieve.yml\n├── docs/\n│   ├── installation_steps.md\n│   ├── troubleshooting.md\n└── scripts/\n    ├── setup_docker.sh\n    ├── setup_ssl.sh\n```\n\n---\n\n### **README.md**\n\n```markdown\n# HashiCorp Vault Integration with Ansible\n\nThis repository demonstrates the integration of **HashiCorp Vault** with **Ansible** for managing secrets in real-world scenarios, specifically focusing on environments where servers need to retrieve sensitive information after unexpected reboots. The solution leverages Terraform for provisioning, AWS KMS for auto-unsealing, and Docker to host Vault.\n\n---\n\n## **Use Case**\n\nA Java application is running on a server. When the server reboots due to a disaster or maintenance:\n- The application must securely retrieve sensitive information (e.g., credentials) from a centralized Key Management System (KMS).\n- HashiCorp Vault is used for this purpose, ensuring compatibility with both on-premises and cloud environments.\n\n### Why not Ansible Vault?\n- **Ansible Vault** is ideal for encrypting sensitive data like API keys or database credentials within playbooks. However, it cannot autonomously retrieve secrets from another server when triggered by events like server reboots.\n- **HashiCorp Vault**, combined with AWS KMS, provides auto-unsealing capabilities and centralized secret management.\n\n---\n\n## **Solution Overview**\n\n1. 
**HashiCorp Vault Setup**:\n   - Install Vault on a t2.medium instance.\n   - Configure Vault with auto-unsealing using AWS KMS.\n   - Store Vault initialization keys securely in S3.\n\n2. **Terraform Configuration**:\n   - Provisions Vault server.\n   - Sets up IAM roles and S3 buckets for storing Vault keys.\n   - Configures KMS for encryption and auto-unsealing.\n\n3. **Ansible Integration**:\n   - Demonstrates how to retrieve secrets stored in Vault using Ansible playbooks.\n\n---\n\n## **Setup Instructions**\n\n### 1. Prerequisites\n- AWS Account with administrative access.\n- A t2.medium EC2 instance with Docker installed.\n- Terraform installed locally.\n- Ansible installed locally.\n\n### 2. Vault Installation\nFollow the steps in `docs/installation_steps.md` to:\n1. Start an EC2 instance.\n2. Install Docker and SSL.\n3. Configure Vault.\n\n### 3. Configuring AWS KMS\n- Navigate to AWS Management Console > KMS.\n- Create a symmetric key with \"Encrypt and Decrypt\" permissions.\n- Add the IAM role of the EC2 instance to allow access.\n\n### 4. Configuring Vault with KMS\n1. Replace the Vault config file:\n   ```bash\n   sudo nano /etc/vault/config.hcl\n   ```\n   Copy and paste the contents from `vault/config-kms.hcl`.\n2. Ensure S3 bucket details are correctly updated.\n3. Initialize Vault:\n   ```bash\n   vault operator init | tee -a /etc/vault/init.file\n   ```\n\n### 5. Terraform Setup\n- Navigate to the `terraform/` directory.\n- Update variables in `variables.tf` for your environment.\n- Apply the configuration:\n  ```bash\n  terraform apply\n  ```\n\n### 6. 
Reboot Handling\n- After rebooting the server:\n  ```bash\n  terraform apply\n  ```\n- Verify that Vault is accessible and unsealed automatically.\n\n---\n\n## **Ansible Playbook Example**\n\nRetrieve secrets from Vault after server reboot (note the `https` URL — the Vault listener in this setup is TLS-only):\n```yaml\n---\n- name: Retrieve secrets from HashiCorp Vault\n  hosts: localhost\n  tasks:\n    - name: Fetch secret from Vault\n      uri:\n        url: \"https://<vault-server-ip>:8200/v1/secret/data/my-secret\"\n        method: GET\n        headers:\n          X-Vault-Token: \"{{ vault_token }}\"\n      register: secret_response\n\n    - name: Debug retrieved secret\n      debug:\n        msg: \"{{ secret_response.json }}\"\n```\n\n---\n\n## **Troubleshooting**\n- Refer to `docs/troubleshooting.md` for common issues, such as:\n  - Vault not unsealing after reboot.\n  - KMS misconfiguration.\n  - Terraform or Ansible errors.\n\n---\n\n## **License**\nThis repository is licensed under the MIT License. See `LICENSE` for details.\n```\n\n---\n\n### Additional Notes\n1. **Scripts**:\n   - `setup_docker.sh`: Automates Docker installation.\n   - `setup_ssl.sh`: Configures SSL for Vault.\n\n2. **Documentation**:\n   - `docs/installation_steps.md`: Step-by-step guide for setting up Vault and related components.\n   - `docs/troubleshooting.md`: Solutions for potential issues during setup and execution.\n"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/terraform-vault/1-provider.tf",
    "content": "provider \"aws\" {\n}\n\nprovider \"vault\" {\n  address         = var.vault_addr\n  token           = var.vault_token\n  skip_tls_verify = true\n}"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/terraform-vault/2-random-passwords.tf",
    "content": "#Generating random password for Linux Machines\nresource \"random_password\" \"linux-machine-passwords\" {\n  count            = var.vm_count\n  length           = 16\n  special          = true\n  override_special = \"!@#$%^\"\n  min_upper        = 4\n  min_lower        = 4\n  min_special      = 4\n  min_numeric      = 4\n}"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/terraform-vault/3-hashi-vault-passwords.tf",
    "content": "resource \"vault_mount\" \"java-app-dev\" {\n  path        = \"java-app-dev\"\n  type        = \"kv\"\n  options     = { version = \"1\" }\n  description = \"KV Version 1 secret engine mount\"\n}\n\nresource \"vault_kv_secret\" \"linux-machine-1\" {\n  path = \"${vault_mount.java-app-dev.path}/linux-machine-1\"\n  data_json = jsonencode(\n    {\n      linux-machine-1 = random_password.linux-machine-passwords.0.result\n    }\n  )\n}\n\nresource \"vault_kv_secret\" \"linux-machine-2\" {\n  path = \"${vault_mount.java-app-dev.path}/linux-machine-2\"\n  data_json = jsonencode(\n    {\n      linux-machine-2 = random_password.linux-machine-passwords.1.result\n    }\n  )\n}\n\nresource \"vault_kv_secret\" \"linux-machine-3\" {\n  path = \"${vault_mount.java-app-dev.path}/linux-machine-3\"\n  data_json = jsonencode(\n    {\n      linux-machine-3 = random_password.linux-machine-passwords.2.result\n    }\n  )\n}"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/terraform-vault/policy.yaml",
    "content": "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": {\n    \"Effect\": \"Allow\",\n    \"Action\": \"kms:*\",\n    \"Resource\": \"*\"\n  }\n}\n"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/terraform-vault/user.tf",
    "content": "resource \"random_password\" \"vm-passwords\" {\n  count            = 3\n  length           = 16\n  special          = true\n  override_special = \"!#$%&*()-_=+[]{}<>:?\"\n}\n\nresource \"vault_mount\" \"avinash\" {\n  path        = \"avinash\"\n  type        = \"kv-v2\"\n  description = \"This Container avinash Family Secrets\"\n}\n\nresource \"vault_mount\" \"saikiran\" {\n  path        = \"saikiran\"\n  type        = \"kv-v2\"\n  description = \"This Container saikiran Family Secrets\"\n}\n\n\nresource \"vault_kv_secret_v2\" \"Prod-secrets\" {\n  count               = 3\n  mount               = vault_mount.avinash.path\n  name                = \"linux-machine-${count.index + 1}\"\n  cas                 = 1\n  delete_all_versions = true\n  data_json = jsonencode(\n    {\n      username = \"adminsai\",\n      password = element(random_password.vm-passwords.*.result, count.index)\n    }\n  )\n  custom_metadata {\n    max_versions = 5\n    data = {\n      foo = \"vault@avinash.com\"\n    }\n  }\n}\n\n\n#Creating saikiran Secrets\nresource \"vault_kv_secret_v2\" \"super-secrets\" {\n  count               = 3\n  mount               = vault_mount.saikiran.path\n  name                = \"super-linux-machine-${count.index + 1}\"\n  cas                 = 1\n  delete_all_versions = true\n  data_json = jsonencode(\n    {\n      username = \"adminsai\",\n      password = element(random_password.vm-passwords.*.result, count.index)\n    }\n  )\n  custom_metadata {\n    max_versions = 5\n    data = {\n      foo = \"vault@saikiran.com\"\n    }\n  }\n}"
  },
  {
    "path": "Day 25 HashicorpVault AWSIntegration/terraform-vault/variables.tf",
    "content": "variable \"vault_addr\" {\n  default = \"https://kmsvault.cloudvishwakarma.in:8200\"\n}\nvariable \"vault_token\" {\n  default = \"TOKEN-HERE\"\n}\n\nvariable \"vm_count\" {\n  default = 3\n}\n\n\n\n"
  },
  {
    "path": "Day 26 Docker-Full-Course/README.md",
    "content": "# Day 28 Docker-Full-Course\n\n![00](https://github.com/user-attachments/assets/77c9bf84-ffca-478a-b288-058f5e28b9ab)\n\nhttps://youtu.be/5GhbkrMukmk?si=SqzutdvGZy-A8Hex\n\n"
  },
  {
    "path": "Day 27 Maven-JFrog-Sonarqube/README.md",
    "content": "\n![Untitled design](https://github.com/user-attachments/assets/dfaf3392-9cfd-43b2-86c1-e1bdd956b3ee)\n\n\n# Maven-Jfrog Integration\n\nThis repository showcases the integration of **Maven**, **JFrog**, and **SonarQube** to build, manage, and analyze a Java-based Spring Boot application. Below are the detailed steps to set up and deploy a sample application.\n\n---\n\n## Table of Contents\n\n1. [Introduction](#introduction)\n2. [Prerequisites](#prerequisites)\n3. [Setup and Installation](#setup-and-installation)\n4. [Maven Lifecycle](#maven-lifecycle)\n5. [Integrating with JFrog](#integrating-with-jfrog)\n6. [Pushing Artifacts to JFrog](#pushing-artifacts-to-jfrog)\n7. [Version Management](#version-management)\n8. [License](#license)\n\n---\n\n## Introduction\n\nThis project demonstrates:\n- Building a Spring Boot application using Maven.\n- Managing dependencies with `pom.xml`.\n- Storing and managing build artifacts using JFrog Artifactory.\n- Incremental versioning of artifacts.\n- Deployment to a private repository for reuse in other projects.\n\n**Note:** While this project highlights all major steps, application-specific code and configurations will typically be managed by your development team.\n\n---\n\n## Prerequisites\n\n1. **AWS EC2 Instance**:\n   - Instance type: `T2.large`\n   - Storage: `20 GB`\n   - OS: Ubuntu 20.04+\n2. **Tools**:\n   - **Maven**: Installed and configured.\n   - **OpenJDK**: Version 17 or higher.\n   - **JFrog Artifactory**: Installed and licensed.\n   - **Git**: Configured with SSH authentication.\n3. **Networking**:\n   - Configure DNS using Route 53 (if applicable).\n\n---\n\n## Setup and Installation\n\n### 1. Create EC2 Instance\nLaunch an EC2 instance and install required tools:\n\n```bash\nsudo apt update\nsudo apt install -y openjdk-17-jdk maven git jq net-tools\n```\n\n### 2. 
Clone and Build the Application\n\n```bash\ngit clone https://github.com/spring-projects/spring-petclinic.git\ncd spring-petclinic\nmvn clean package\n```\n\n### 3. Push Code to Azure DevOps\n1. Initialize a new Git repository if needed:\n   ```bash\n   rm -rf .git\n   git init\n   ```\n2. Set up SSH authentication:\n   - Generate an SSH key: `ssh-keygen`\n   - Add the public key to Azure DevOps under **User Settings > SSH Public Keys**.\n   - Clone the repository using the SSH link.\n\n3. Push code:\n   ```bash\n   git add .\n   git commit -m \"Initial commit\"\n   git remote add origin <ssh-link>\n   git push -u origin master\n   ```\n\n---\n\n## Maven Lifecycle\n\n### Maven Commands Overview\n\n1. **Validate**:\n   ```bash\n   mvn validate\n   ```\n   Ensures the `pom.xml` is valid.\n\n2. **Compile**:\n   ```bash\n   mvn compile\n   ```\n   Compiles Java files into `.class` files.\n\n3. **Package**:\n   ```bash\n   mvn package\n   ```\n   Packages the compiled code into `.jar` or `.war` artifacts.\n\n4. **Run Application**:\n   ```bash\n   java -jar target/*.jar\n   ```\n\n5. **Clean**:\n   ```bash\n   mvn clean\n   ```\n   Deletes previous build artifacts.\n\n---\n\n## Integrating with JFrog\n\n1. **Install JFrog**:\n   ```bash\n   wget -O jfrog-deb-installer.tar.gz \"https://releases.jfrog.io/artifactory/jfrog-prox/org/artifactory/pro/deb/jfrog-platform-trial-prox/[RELEASE]/jfrog-platform-trial-prox-[RELEASE]-deb.tar.gz\"\n   tar -xvzf jfrog-deb-installer.tar.gz\n   cd jfrog-platform-trial-pro*\n   sudo ./install.sh\n   sudo systemctl start artifactory.service\n   ```\n\n2. **Configure JFrog**:\n   - Access JFrog via `http://<instance-ip>:8082`.\n   - Apply the trial license.\n   - Create a Maven repository (`libs-release-local`).\n\n3. 
**Update Maven Configuration**:\n   Add the following in your `settings.xml`:\n\n   ```xml\n   <servers>\n      <server>\n         <id>central</id>\n         <username>YOUR_USERNAME</username>\n         <password>YOUR_PASSWORD</password>\n      </server>\n   </servers>\n   ```\n\n---\n\n## Pushing Artifacts to JFrog\n\n1. **Add Distribution Management to `pom.xml`**:\n\n   ```xml\n   <distributionManagement>\n      <repository>\n         <id>central</id>\n         <name>libs-release</name>\n         <url>http://<jfrog-instance>:8081/artifactory/libs-release-local</url>\n      </repository>\n      <snapshotRepository>\n         <id>snapshots</id>\n         <name>libs-snapshot</name>\n         <url>http://<jfrog-instance>:8081/artifactory/libs-snapshot-local</url>\n      </snapshotRepository>\n   </distributionManagement>\n   ```\n\n2. **Deploy Artifact**:\n   ```bash\n   mvn clean install deploy\n   ```\n\n3. Verify the artifact in JFrog's repository.\n\n---\n\n## Version Management\n\nUpdate versions dynamically using Maven's version plugin:\n\n```bash\nmvn versions:set -DnewVersion=1.0.0\nmvn clean install deploy\n```\n\nRepeat for subsequent versions:\n```bash\nmvn versions:set -DnewVersion=1.0.1\n```\n\n---\n\n## License\n\nThis project is licensed under the [MIT License](LICENSE).\n"
  },
  {
    "path": "Day 28 SAST-AzureDevOps-Part-1/0-maven.sh",
    "content": "Create T2-xl\ncreate Simplerecord for Jfrog with publicIP\nsudo apt update && apt install -y openjdk-17-jdk && sudo apt update && apt install -y maven\n\nclone same in local from powershell and push to azuredevops repo\n\nclone petclinicapp https://github.com/saikiranpi/springboot-petclinic.git on linux and make sure you ssh keys\n\nmvn clean install deploy\n\n-----\nnow lets deploy jfrog for storing our artifacts\n\ncd /usr/local/bin\nwget -O jfrog-deb-installer.tar.gz \"https://releases.jfrog.io/artifactory/jfrog-prox/org/artifactory/pro/deb/jfrog-platform-trial-prox/[RELEASE]/jfrog-platform-trial-prox-[RELEASE]-deb.tar.gz\"\ntar -xvzf jfrog-deb-installer.tar.gz\nsudo apt install jq -y && sudo apt install net-tools -y\ncd jfrog-platform-trial-pro*\n# sudo chown -R postgres:postgres /var/opt/jfrog/postgres/data\n# sudo chmod -R 700 /var/opt/jfrog/postgres/data\nsudo ./install.sh\nsudo systemctl start artifactory.service\nsudo systemctl start xray.service\n\nYou need license trail license\ncopy antifactory license and paste it the key  next  next next\nwe need maven repo here - >jfrog >http://jfrog.cloudvishwakarma.in\nfinish\n\n\nhttp://localhost:8082/\n\n\ngenerate settings in the mainfile under settings.xml and change the jfrog\nusername and password\nsnapshot as true\nchange the Jfrog URL accordingly\npaste the settings.yml under /root/.m2/settings/xml\nstay in petapp dir and run \"mvn clean install deploy\"\n\n################################################################################\njava -jar target/*.jar\n\nmvn versions:set -DnewVersion=1.0.0\nmvn clean install deploy\n"
  },
  {
    "path": "Day 28 SAST-AzureDevOps-Part-1/0-sonarqube.sh",
    "content": "# 1. Set Up PostgreSQL Instance for SonarQube\n\nsudo mkdir -p /var/lib/postgresql/sonarqube\nsudo chown postgres:postgres /var/lib/postgresql/sonarqube\nsudo su - postgres\n/usr/lib/postgresql/15/bin/initdb -D /var/lib/postgresql/sonarqube\n\n# 2. Configure PostgreSQL\nEdit postgresql.conf:\n\nsudo nano /var/lib/postgresql/sonarqube/postgresql.conf\n\nAdd:\n\nlisten_addresses = 'localhost'\nport = 5433\nunix_socket_directories = '/var/run/postgresql'\n\nEdit pg_hba.conf:\nsudo nano /var/lib/postgresql/sonarqube/pg_hba.conf\n\nAdd:\n\nlocal all postgres trust\nlocal all all md5\nhost all all 127.0.0.1/32 md5\nhost all all ::1/128 md5\n\n# 3. Create and Start PostgreSQL Service\n\nsudo nano /etc/systemd/system/postgresql-sonarqube.service\n\nAdd service content:\n\n[Unit]\nDescription=PostgreSQL for SonarQube\nAfter=network.target\n\n[Service]\nType=forking\nUser=postgres\nGroup=postgres\nExecStart=/usr/lib/postgresql/15/bin/pg_ctl -D /var/lib/postgresql/sonarqube -l /var/log/postgresql/postgresql-sonarqube.log start\nExecStop=/usr/lib/postgresql/15/bin/pg_ctl -D /var/lib/postgresql/sonarqube stop\nTimeoutSec=300\n\n[Install]\nWantedBy=multi-user.target\n\nStart service:\n\nsudo systemctl daemon-reload\nsudo systemctl start postgresql-sonarqube\nsudo systemctl enable postgresql-sonarqube\n\n# 4. Create Database and User\n\npsql -p 5433 -U postgres\n\nCREATE USER sonar WITH ENCRYPTED PASSWORD 'my_strong_password'\nCREATE DATABASE sonarqube OWNER sonar\nGRANT ALL PRIVILEGES ON DATABASE sonarqube to sonar\n\\q\n\n# 5. Install SonarQube\n\nsudo apt-get install zip -y\ncd /opt\nsudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.7.1.62043.zip\nsudo unzip sonarqube-9.7.1.62043.zip\nsudo mv sonarqube-9.7.1.62043 sonarqube\nrm -rf sonarqube-9.7.1.62043.zip\n\n# 6. 
Configure SonarQube User and Permissions\n\nsudo groupadd sonar\nsudo useradd -d /opt/sonarqube -g sonar sonar\nsudo chown sonar:sonar /opt/sonarqube -R\n\nEdit sonar.properties:\n\nsudo nano /opt/sonarqube/conf/sonar.properties\n\nAdd:\n\nsonar.jdbc.username=sonar\nsonar.jdbc.password=my_strong_password\nsonar.jdbc.url=jdbc:postgresql://localhost:5433/sonarqube\n\n# 7. System Configuration\nCreate SonarQube service:\n\nsudo nano /etc/systemd/system/sonar.service\n\nAdd:\n\n[Unit]\nDescription=SonarQube service\nAfter=syslog.target network.target\n\n[Service]\nType=forking\nExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start\nExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop\nUser=sonar\nGroup=sonar\nRestart=always\nLimitNOFILE=65536\nLimitNPROC=4096\n\n[Install]\nWantedBy=multi-user.target\n\nConfigure system limits:\n\nsudo nano /etc/sysctl.conf\n\nAdd:\n\nvm.max_map_count=262144\nfs.file-max=65536\n\nConfigure user limits:\n\nsudo nano /etc/security/limits.conf\n\nAdd:\n\nsonar soft nofile 65536\nsonar hard nofile 65536\nsonar soft nproc 4096\nsonar hard nproc 4096\n\n# 8. Start SonarQube\n\nsudo sysctl -p\nsudo systemctl daemon-reload\nsudo systemctl start sonar\nsudo systemctl enable sonar\n\n# 9. Access SonarQube\n- Wait 5 minutes\n- Access: http://your-server:9000\n- Login: admin/admin\n\n\n\n"
  },
  {
    "path": "Day 28 SAST-AzureDevOps-Part-1/1-ado-tools.sh",
    "content": "sudo apt update && apt install -y unzip jq net-tools\r\napt install openjdk-17-jdk -y\r\napt install maven -y && curl https://get.docker.com | bash\r\nusermod -a -G docker adminsai\r\n\r\n# aws cli install\r\ncurl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\r\nunzip awscliv2.zip\r\nsudo ./aws/install\r\n\r\n# azurecli ubuntu install\r\ncurl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash\r\n\r\n# terraform.io and packer.io copy the link and install in /usr/local/bin\r\n\r\ncd /usr/local/bin\r\nwget https://releases.hashicorp.com/terraform/1.10.3/terraform_1.10.3_linux_amd64.zip\r\nunzip\r\n\r\n# packer.io\r\nwget https://releases.hashicorp.com/packer/1.11.2/packer_1.11.2_linux_amd64.zip\r\nunzip\r\n\r\n# document.ansible.com  Select ubuntu and download the file accordingly\r\nsudo apt update\r\nsudo apt install software-properties-common\r\nsudo add-apt-repository --yes --update ppa:ansible/ansible\r\nsudo apt install ansible\r\n\r\ncd /etc/ansible\r\ncp ansible.cfg ansible.cfg_backup\r\nansible-config init --disabled >ansible.cfg\r\nnano ansible.cfg\r\n\r\nctrl w  host_key_checking = False\r\n\r\n# Install trivy https://github.com/aquasecurity/trivy/releases/download/v0.41.0/trivy_0.41.0_Linux-64bit.deb\r\n\r\ncd /usr/local/bin\r\nWget https://github.com/aquasecurity/trivy/releases/download/v0.41.0/trivy_0.41.0_Linux-64bit.deb\r\ndpkg -i trivy file\r\nTrivy\r\n\r\nreboot the system for configurations.\r\n\r\n\r\n"
  },
  {
    "path": "Day 28 SAST-AzureDevOps-Part-1/1-pipeline.yml",
    "content": "trigger:\r\n  - development\r\n  - uat\r\n  - production\r\n\r\npool:\r\n  name: LinuxAgentPool\r\n  demands:\r\n    - Java -equals Yes\r\n    - Terraform -equals Yes\r\n    - Agent.Name -equals ProdADO\r\nvariables:\r\n  global_version: \"1.0.0\"\r\n  global_email: \"pinapathruni.saikiran@gmail.com\"\r\n  # azure_dev_sub: \"1e9d13b0-73fc-43eb-b04e-4b4f5a5ea96f\"\r\n  isDev: $[eq(variables['Build.SourceBranch'], 'refs/heads/development')]\r\n  isProd: $[eq(variables['Build.SourceBranch'], 'refs/heads/production')]\r\n\r\nsteps:\r\n  - script: docker version && packer version && terraform version && aws --version && java -version && mvn --version\r\n    displayName: \"Testin A Newly Created Agent and rools\"\r\n"
  },
  {
    "path": "Day 28 SAST-AzureDevOps-Part-1/2-pipeline.yml",
    "content": "trigger:\r\n  branches:\r\n    include:\r\n      - development\r\n      - uat\r\n      - production\r\n    exclude: [\"master\", \"feature*\", \"README.md\"]\r\n"
  },
  {
    "path": "Day 28 SAST-AzureDevOps-Part-1/README.md",
    "content": "![escape](https://github.com/user-attachments/assets/63a9188f-edea-4fc4-9f94-c28b46c5bb37)\n\n# Day28 AzureDevOps_Part_1\n\n## CI-CD-CD (Continuous Integration, Continuous Delivery, Continuous Deployment)\n\nThis project focuses on setting up a CI/CD pipeline in Azure DevOps to automate the processes of code integration, delivery, and deployment. The pipeline ensures secure, efficient, and seamless transitions from development to production.\n\n---\n\n### **Continuous Integration**\n1. **Code Readiness:**\n   - Code is committed and merged into the repository.\n   - Static Application Security Testing (SAST) is performed to identify vulnerabilities.\n\n2. **Build:**\n   - Uses Maven to generate a JAR file.\n   - Docker is employed to create an image using a `Dockerfile`.\n\n3. **Artifacts Publishing:**\n   - Built artifacts are stored for further stages in the CI/CD pipeline.\n\n**Example Release Strategy:**\n- A versioning system is employed:\n  - Stable Version: `23.0.0` (Production-ready).\n  - Release Candidates: `23.0.0.0-RC1`, `23.0.0.0-RC2`, etc., for testing.\n  - Hotfix Versions: `23.0.0.1` for bug fixes post-release.\n\n---\n\n### **Continuous Delivery**\n- Automates deployment to development and staging environments after successful integration testing.\n- Focuses on delivering artifacts to lower environments for further testing.\n\n---\n\n### **Continuous Deployment**\n- Automates deployment to production after passing all previous stages.\n- Often skipped for production in many organizations due to additional manual checks.\n\n---\n\n### **Branching Strategy**\n1. **Main/Master Branch:**\n   - Represents production-ready code.\n\n2. **Development Branch:**\n   - Feature branches are created for changes and merged back into development after review.\n\n3. 
**Staging/Functional Testing:**\n   - Tracks and documents manual/automated test results in an organized manner.\n\n---\n\n### **CI/CD Tools**\nCommon tools for CI/CD pipelines include:\n- Azure DevOps (primary focus)\n- Jenkins\n- GitLab\n- GitHub Actions\n- GoCD\n- TravisCI\n- CircleCI\n\n---\n\n## **Setting up Azure DevOps Pipeline**\n\n### Task Overview:\n1. **Create a Pipeline Agent:**\n   - Use a self-hosted agent by creating a virtual machine (VM) with the necessary tools installed.\n   - VM Configuration:\n     - OS: Ubuntu 20.04\n     - Specs: 2 CPUs, 8GB RAM\n     - Disk: Standard SSD\n\n2. **Configure the Agent:**\n   - Install required tools (e.g., Terraform, Packer).\n   - Generate a Personal Access Token (PAT) for authentication.\n   - Create an agent pool and register the agent in Azure DevOps.\n\n3. **Pipeline Creation:**\n   - Create a pipeline for the repository in Azure DevOps.\n   - Use a `trigger` to specify branches for automatic execution.\n\n4. **Clone Repository Locally:**\n   - Use Git commands to clone the repository and manage changes.\n\n---\n\n### Step-by-Step Instructions:\n\n1. **Create VM:**\n   - Set up a virtual machine in Azure with the specified configuration.\n   - Configure networking to allow necessary inbound and outbound rules.\n\n2. **Install Required Tools:**\n   - Access the VM via SSH (e.g., PuTTY).\n   - Install dependencies and configure tools as the admin user.\n\n3. **Generate PAT:**\n   - Create a Personal Access Token in Azure DevOps with full access and save it securely.\n\n4. **Setup Agent Pool:**\n   - Create and configure an agent pool in Azure DevOps.\n   - Register the VM as an agent using provided setup scripts.\n\n5. 
**Pipeline Creation:**\n   - Use Azure Pipelines to create a YAML-based pipeline.\n   - Example configuration:\n     ```yaml\n     trigger:\n       branches:\n         include:\n           - master\n     pool:\n       name: LinuxAgentPool\n     steps:\n       - script: echo \"Hello, Azure DevOps!\"\n     ```\n\n6. **Test and Modify Pipeline:**\n   - Push changes to trigger pipeline execution.\n   - Use Visual Studio Code for editing pipeline configurations.\n\n7. **Add Variables:**\n   - Add SonarQube credentials or other required variables in the Azure DevOps pipeline UI.\n\n8. **Service Connections:**\n   - Connect the Azure DevOps pipeline to external tools like SonarQube for analysis.\n\n---\n\n## **Advanced Features**\n- Conditional expressions for environment-specific pipelines.\n- Integration with AWS instances or other external environments.\n- Dynamic agent capabilities for task-specific pipelines.\n\n---\n\n### **Additional Resources**\n- [Azure DevOps Documentation](https://learn.microsoft.com/en-us/azure/devops/)\n- [SonarQube Documentation](https://docs.sonarqube.org/)\n- [GitHub for Version Control](https://github.com/)\n\n---\n\n**Contributors:**\n- Admin Kiran (Contact: `adminkiran`)\n\n**License:**\n- This project is licensed under the MIT License. See the LICENSE file for details.\n"
  },
  {
    "path": "Day 29 AzureDevOps-Part-2/README.md",
    "content": "# PLEASE COPY THE POM.XML AND PIPELINE SCRIPT FIRST AND DO THE PRACTICALS. REST ALL SAME. \n\n\n# Prod-SpringBoot-Pet-App\n\nThis repository contains the production-ready Spring Boot application for the `Prod-ADO` instance. Follow the steps below to set up and run the CI/CD pipeline using Azure DevOps (ADO).\n\n## Prerequisites\n- AWS and Azure instances must be up and running.\n- Proper IP addresses should be updated in Route 53.\n\n## Steps to Set Up the Pipeline\n\n### Stage 1: Initial Setup\n1. **Start the agents** on AWS and Azure.\n2. **Update the IPs** in Route 53.\n3. **Clone the repository** and check the available branches.\n4. **Add the SonarQube stage** and build the pipeline accordingly.\n5. **Modify the `pom.xml` file** at lines 13 & 16:\n   ```xml\n   <artifactId>ado-spring-boot-app-dev</artifactId>\n   ```\n\n### Stage 2: Connecting the Pipeline to the EC2 Instance\n1. **Connect the pipeline to the EC2 instance** where SonarQube and Maven are installed using service connections:\n   - Navigate to **Project Settings > Service Connections**.\n   - Create a new service connection for the EC2 instance.\n2. **Add the token** in the pipeline and push the code to the development branch.\n3. **Run the pipeline** and push it to the development environment.\n4. If the Maven build fails, skip tests by adding the following line:\n   ```yaml\n   options: '-DskipTests'\n   ```\n   Add it above the `displayName` in your YAML file.\n5. Push the changes again.\n6. If you encounter issues with `sonar.branch.name`, set the development branch as the default branch.\n7. Once the job completes, check the results on SonarQube.\n\n### Stage 3: Building with Java and Copying Artifacts to JFrog\n1. **Build the application** using Maven and copy the artifact to JFrog.\n2. Ensure `settings.yaml` is securely managed:\n   - Go to **Libraries > Add Secure Files**.\n   - Browse and add the secure file.\n3. 
**Create the necessary directories** on the Azure agent:\n   ```bash\n   sudo mkdir /artifacts\n   sudo chown adminsai:adminsai /artifacts\n   ```\n   This folder will store the copied artifact.\n4. Save and push the changes.\n5. If errors occur during the Maven build, log in to the server and debug using:\n   ```bash\n   grep -i \"failure\" *.txt\n   ```\n   Example failure:\n   ```\n   org.springframework.samples.petclinic.system.CrashControllerIntegrationTests.txt\n   ```\n   Review and fix the `CrashControllerIntegrationTests` file accordingly.\n\n### Stage 4: Copying Artifacts to Azure Blob Storage\n1. **Create a storage account** in Azure:\n   - Name: a globally unique name (the pipeline in this repo uses `saikiransecopsprod`)\n   - Redundancy: Locally Redundant Storage (LRS)\n2. **Create a container** named `artifacts`.\n3. **Set up a service principal**:\n   - Navigate to **Microsoft Entra ID > App Registration**.\n   - Create a new service principal.\n   - In **Project Settings > Service Connections**, create a new Azure Resource Manager connection manually.\n   - Provide the following details:\n     - Tenant ID\n     - Client ID (Service Principal ID)\n     - Subscription ID\n     - Client Secret (Create a new secret under Certificates & Secrets).\n4. **Create a new pipeline variable**:\n   - Name: `STORAGE_ACCOUNT_KEY`\n   - Secret: Yes\n   - Value: Copy the access key from the storage account.\n5. Push the changes and run the pipeline.\n\n### Stage 5: Adding an S3 Bucket\n1. **Create an S3 bucket** with the name specified in the YAML file.\n2. **Grant S3 access**:\n   - Navigate to **IAM > Users** and grant S3 full access.\n3. **Create a new AWS service connection** in ADO:\n   - Use the access key and secret key.\n   - Connection name: `saikiransecops-s3`\n4. Push the changes and verify the artifacts in the S3 bucket.\n\n### Stage 6: Building a Docker Image and Scanning with Trivy\n1. 
**Create a template folder** in VSCode:\n   ```bash\n   mkdir template\n   cd template\n   touch junit.tpl\n   ```\n   Paste the required content into `junit.tpl`.\n2. **Create a Dockerfile** in VSCode and paste the necessary code.\n3. Push the changes.\n4. Test the pipeline step-by-step to ensure correctness.\n\n## Final Notes\n- The pipeline may take several iterations to run cleanly end-to-end. Test and validate each step before moving on to the next.\n- Use secure methods to manage sensitive information such as credentials and keys.\n\n## Troubleshooting\n- For Maven build failures, use the following command:\n  ```bash\n  grep -i \"failure\" *.txt\n  ```\n- If issues are found in `CrashControllerIntegrationTests`, review the file and make the necessary changes without altering unrelated parts.\n- **SonarQube Upgrade Steps**:\n  1. **Stop SonarQube**:\n     ```bash\n     sudo systemctl stop sonar\n     ```\n  2. **Download and install a newer version (10.3)**:\n     ```bash\n     cd /opt\n     sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-10.3.0.82913.zip\n     sudo unzip sonarqube-10.3.0.82913.zip\n     sudo rm -rf sonarqube\n     sudo mv sonarqube-10.3.0.82913 sonarqube\n     ```\n     **Note:** The `rm -rf` above discards the old installation, including any embedded data; back it up first if you need to keep the history.\n  3. **Fix permissions**:\n     ```bash\n     sudo chown -R sonar:sonar /opt/sonarqube\n     sudo chmod -R 755 /opt/sonarqube\n     ```\n  4. **Update `sonar.properties` to configure JDK 17 module path**:\n     ```bash\n     sudo nano /opt/sonarqube/conf/sonar.properties\n     ```\n     Add the following line:\n     ```properties\n     sonar.web.javaAdditionalOpts=--add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED\n     ```\n  5. **Restart SonarQube**:\n     ```bash\n     sudo systemctl restart sonar\n     ```\n  This newer version has better compatibility with Java 17. 
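The `grep` triage shown earlier in this section can be rehearsed locally before logging in to the real agent. Below is a minimal sketch; the report files and the `/tmp/surefire-demo` path are fabricated stand-ins for Surefire's real `*.txt` reports, and the narrowed pattern is a variation on the plain `grep -i "failure"` command above:

```shell
# Recreate two fake surefire-style report files, then locate the failing one.
mkdir -p /tmp/surefire-demo
cd /tmp/surefire-demo
printf 'Tests run: 3, Failures: 1, Errors: 0\n' > CrashControllerIntegrationTests.txt
printf 'Tests run: 5, Failures: 0, Errors: 0\n' > OtherTests.txt

# -i: case-insensitive, -l: print only matching file names.
# Matching a non-zero count skips reports that merely contain "Failures: 0".
grep -il 'failures: [1-9]' *.txt
# → CrashControllerIntegrationTests.txt
```

The plain `grep -i "failure" *.txt` also works, but it matches every report that prints a `Failures:` line, including passing ones.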
Let the DevOps team know if further errors occur.\n\n## Acknowledgments\nSpecial thanks to the team for their support in setting up and validating the pipeline.\n\n---\nFor further assistance, please contact the DevOps team.\n\n\n\n---\n\n\n"
  },
  {
    "path": "Day 29 AzureDevOps-Part-2/azure-pipelines.yml",
    "content": "trigger:\n  - development\n  - uat\n  - production\n\npool:\n  name: LinuxAgentPool\n  demands:\n    - JDK -equals 17\n    - Terraform -equals Yes\n    - Agent.Name -equals ProdADO\n\nvariables:\n  global_version: \"1.0.0\"\n  global_email: \"saikiran@gmail.com\"\n  # azure_dev_sub: \"9ce91e05-4b9e-4a42-95c1-4385c54920c6\"\n  # azure_prod_sub: \"298f2c19-014b-4195-b821-e3d8fc25c2a8\"\n  isDev: $[eq(variables['Build.SourceBranch'], 'refs/heads/development')]\n  isProd: $[eq(variables['Build.SourceBranch'], 'refs/heads/production')]\n\nstages:\n  - stage: CheckingTheAgent\n    condition: and(succeeded(), eq(variables.isDev, true))\n    pool:\n      name: LinuxAgentPool\n      demands:\n        - Terraform -equals Yes\n    variables:\n      stage_version: \"2.0.0\"\n      stage_email: \"saikiran.pinapathruni18@gmail.com\"\n    jobs:\n      - job: CheckingTerraformAndPacker\n        variables:\n          job_version: \"3.0.0\"\n          job_email: \"saiaws@gmail.com\"\n        timeoutInMinutes: 5\n        steps:\n          - script: echo $(Build.BuildId)\n            displayName: \"Display The Build-ID\"\n          - script: terraform version && packer version\n            displayName: \"Display Terraform & Packer Version\"\n          - script: docker version && docker ps && docker images && docker ps -a\n            displayName: \"Display Docker Version\"\n          - script: pwd && ls -al\n            displayName: \"List Folder & Files\"\n\n  - stage: SASTWithSonarQube\n    condition: and(succeeded(), eq(variables.isDev, true))\n    pool:\n      name: LinuxAgentPool\n      demands:\n        - JDK -equals 17\n    jobs:\n      - job: RunningSASTWithSonarqube\n        timeoutInMinutes: 10\n        steps:\n          #SonarQube User Token need to be generated and used in the ServiceConnection.\n          #Also change name of the project and artifactId(line 6 & 14) to ado-spring-boot-app-dev in POM.\n          #No need to create a project in sonarqube as 
it's created automatically.\n          - task: SonarQubePrepare@7\n            inputs:\n              SonarQube: \"SonarTestToken\"\n              scannerMode: \"Other\"\n              projectVersion: \"$(Build.BuildId)\"\n            displayName: \"Preparing SonarQube Config\"\n          - task: Maven@4\n            inputs:\n              mavenPomFile: \"pom.xml\"\n              publishJUnitResults: false\n              javaHomeOption: \"JDKVersion\"\n              mavenVersionOption: \"Default\"\n              mavenAuthenticateFeed: false\n              effectivePomSkip: false\n              sonarQubeRunAnalysis: true\n              sqMavenPluginVersionChoice: \"latest\"\n              options: \"-DskipTests\"\n            displayName: \"Running SonarQube Maven Analysis\"\n          - task: sonar-buildbreaker@8\n            inputs:\n              SonarQube: \"SonarTestToken\"\n            displayName: \"SAST Job Fail or Pass\"\n  - stage: BuildingJavaCodeWithMavenCopyToJFrog\n    condition: and(succeeded(), eq(variables.isDev, true))\n    #condition: always()\n    pool:\n      name: LinuxAgentPool\n      demands:\n        - Terraform -equals Yes\n    jobs:\n      - job: BuildingJavaCodeJob\n        timeoutInMinutes: 5\n        steps:\n          - script: ls -al && pwd && rm -rf /home/adminsai/.m2/settings.xml\n            displayName: \"List Files & Remove Old settings.xml\"\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: \"settings.xml\"\n          - task: CopyFiles@2\n            inputs:\n              SourceFolder: \"$(Agent.TempDirectory)\"\n              Contents: \"**\"\n              TargetFolder: \"/home/adminsai/.m2\"\n          - script: mvn versions:set -DnewVersion=Dev-2.0.$(Build.BuildId)\n            displayName: \"Set Maven Build Version\"\n          - script: mvn clean package install && ls -al\n            displayName: \"Run the maven build and install\"\n          - script: mvn deploy && ls -al\n            displayName: \"Run the maven deploy\"\n            continueOnError: true\n          - script: ls -al && cp /home/adminsai/myagent/_work/1/s/target/ado-spring-boot-app-dev-Dev-2.0.$(Build.BuildId).jar ROOT$(Build.BuildId).jar && ls -al\n            displayName: \"List Files & Rename ROOT.jar\"\n          - script: rm -rf /artifacts/*.jar && cp ROOT$(Build.BuildId).jar /artifacts && ls -al /artifacts\n            displayName: \"Copy Artifact To Folder\"\n          - task: CopyFiles@2\n            inputs:\n              Contents: \"ROOT$(Build.BuildId).jar\"\n              TargetFolder: \"$(Build.ArtifactStagingDirectory)\"\n              OverWrite: true\n            displayName: \"Copying JAR file to ArtifactStagingDirectory\"\n          - task: PublishBuildArtifacts@1\n            inputs:\n              PathtoPublish: \"$(Build.ArtifactStagingDirectory)\"\n              ArtifactName: \"ROOT$(Build.BuildId).jar\"\n              publishLocation: \"Container\"\n            displayName: \"Publishing JAR Artifact.\"\n  - stage: CopyingArtifactsToAzureAndAws\n    condition: and(succeeded(), eq(variables.isDev, true))\n    jobs:\n      - job: CopyFilesToAzureBlob\n        timeoutInMinutes: 5\n        steps:\n          - checkout: none\n          - task: AzureCLI@2\n            inputs:\n              azureSubscription: \"saikiransecops-subscription\"\n              scriptType: \"bash\"\n              scriptLocation: \"inlineScript\"\n              inlineScript: |\n                az storage blob upload-batch --account-name saikiransecopsprod --account-key $(STORAGE_ACCOUNT_KEY) --destination artifacts --source /artifacts/\n            displayName: \"Azure Upload artifacts to Azure Blob\"\n            continueOnError: true\n      - job: CopyFilesToAWSS3Bucket\n        dependsOn: CopyFilesToAzureBlob\n        condition: always() # succeededOrFailed() or always() or failed() or succeeded()-default\n        timeoutInMinutes: 5\n        steps:\n          - checkout: none\n     
     - task: S3Upload@1\n            inputs:\n              awsCredentials: \"saikiransecops-s3\"\n              regionName: \"us-east-1\"\n              bucketName: \"saikiransecopss3uploadprodartifacts\"\n              sourceFolder: \"/artifacts/\"\n              globExpressions: \"ROOT$(Build.BuildId).jar\"\n            displayName: \"AWS Upload artifacts to AWS S3 Bucket\"\n            continueOnError: true\n  - stage: DockerBuildAndTrivyScan\n    condition: and(succeeded(), eq(variables.isDev, true))\n    pool:\n      name: LinuxAgentPool\n    jobs:\n      - job: BuildingContainerImageAndSecurityScanning\n        timeoutInMinutes: 10\n        steps:\n          - checkout: none\n          - script: docker build -t kiran2361993/myapp:$(Build.BuildId) .\n            displayName: \"Create Docker Image\"\n          #- script: trivy image --severity HIGH,CRITICAL --format template --template \"@template/junit.tpl\" -o junit-report-high-crit.xml kiran2361993/myapp:$(Build.BuildId)\n          - script: |\n              trivy image --exit-code 0 --severity LOW,MEDIUM --format template --template \"@template/junit.tpl\" -o junit-report-low-med.xml kiran2361993/myapp:$(Build.BuildId)\n              trivy image --exit-code 0 --severity HIGH,CRITICAL --format template --template \"@template/junit.tpl\" -o junit-report-high-crit.xml kiran2361993/myapp:$(Build.BuildId)\n            displayName: \"Scan Image and Create Report\"\n          - task: PublishTestResults@2\n            inputs:\n              testResultsFormat: \"JUnit\"\n              testResultsFiles: \"**/junit-report-low-med.xml\"\n              mergeTestResults: true\n              failTaskOnFailedTests: false\n              testRunTitle: \"Trivy - Low and Medium Vulnerabilities\"\n            displayName: \"Trivy - Low and Medium Vulnerabilities\"\n            condition: \"always()\"\n          - task: PublishTestResults@2\n            inputs:\n              testResultsFormat: \"JUnit\"\n              
testResultsFiles: \"**/junit-report-high-crit.xml\"\n              mergeTestResults: true\n              failTaskOnFailedTests: false\n              testRunTitle: \"Trivy - High and Critical Vulnerabilities\"\n            displayName: \"Trivy - High and Critical Vulnerabilities\"\n            condition: \"always()\"\n"
  },
  {
    "path": "Day 29 AzureDevOps-Part-2/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n  xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>org.springframework.boot</groupId>\n    <artifactId>spring-boot-starter-parent</artifactId>\n    <version>3.4.0</version>\n    <relativePath></relativePath>\n  </parent>\n\n  <groupId>org.springframework.samples</groupId>\n  <artifactId>ado-spring-boot-app-dev</artifactId>\n  <version>3.4.0-SNAPSHOT</version>\n\n  <name>ado-spring-boot-app-dev</name>\n\n  <properties>\n    <!-- <sonar.branch.name>${BRANCH_NAME}</sonar.branch.name> -->\n    <java.version>17</java.version>\n    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>\n    <!-- Important for reproducible builds. Update using e.g. ./mvnw versions:set\n        -DnewVersion=... 
-->\n    <project.build.outputTimestamp>2024-11-28T14:37:52Z</project.build.outputTimestamp>\n\n    <!-- Web dependencies -->\n    <webjars-locator.version>1.0.1</webjars-locator.version>\n    <webjars-bootstrap.version>5.3.3</webjars-bootstrap.version>\n    <webjars-font-awesome.version>4.7.0</webjars-font-awesome.version>\n\n    <checkstyle.version>10.20.1</checkstyle.version>\n    <jacoco.version>0.8.12</jacoco.version>\n    <libsass.version>0.2.29</libsass.version>\n    <lifecycle-mapping>1.0.0</lifecycle-mapping>\n    <maven-checkstyle.version>3.6.0</maven-checkstyle.version>\n    <nohttp-checkstyle.version>0.0.11</nohttp-checkstyle.version>\n    <spring-format.version>0.0.43</spring-format.version>\n\n  </properties>\n\n  <distributionManagement>\n    <repository>\n      <id>central</id>\n      <name>libs-release</name>\n      <url>http://jfrog.cloudvishwakarma.in:8082/artifactory/libs-release-local</url>\n    </repository>\n\n    <snapshotRepository>\n      <id>snapshots</id>\n      <name>libs-snapshot</name>\n      <url>http://jfrog.cloudvishwakarma.in:8082/artifactory/libs-snapshot-local</url>\n    </snapshotRepository>\n  </distributionManagement>\n\n  <dependencies>\n    <!-- Spring and Spring Boot dependencies -->\n    <dependency>\n      <groupId>org.springframework.boot</groupId>\n      <artifactId>spring-boot-starter-actuator</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework.boot</groupId>\n      <artifactId>spring-boot-starter-cache</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework.boot</groupId>\n      <artifactId>spring-boot-starter-data-jpa</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework.boot</groupId>\n      <artifactId>spring-boot-starter-web</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework.boot</groupId>\n      <artifactId>spring-boot-starter-validation</artifactId>\n    </dependency>\n    
<dependency>\n      <groupId>org.springframework.boot</groupId>\n      <artifactId>spring-boot-starter-thymeleaf</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework.boot</groupId>\n      <artifactId>spring-boot-starter-test</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <!-- Workaround for AOT issue (https://github.com/spring-projects/spring-framework/pull/33949) -->\n      <groupId>io.projectreactor</groupId>\n      <artifactId>reactor-core</artifactId>\n    </dependency>\n\n    <!-- Databases - Uses H2 by default -->\n    <dependency>\n      <groupId>com.h2database</groupId>\n      <artifactId>h2</artifactId>\n      <scope>runtime</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.mysql</groupId>\n      <artifactId>mysql-connector-j</artifactId>\n      <scope>runtime</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.postgresql</groupId>\n      <artifactId>postgresql</artifactId>\n      <scope>runtime</scope>\n    </dependency>\n\n    <!-- Caching -->\n    <dependency>\n      <groupId>javax.cache</groupId>\n      <artifactId>cache-api</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>com.github.ben-manes.caffeine</groupId>\n      <artifactId>caffeine</artifactId>\n    </dependency>\n\n    <!-- Webjars -->\n    <dependency>\n      <groupId>org.webjars</groupId>\n      <artifactId>webjars-locator-lite</artifactId>\n      <version>${webjars-locator.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.webjars.npm</groupId>\n      <artifactId>bootstrap</artifactId>\n      <version>${webjars-bootstrap.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.webjars.npm</groupId>\n      <artifactId>font-awesome</artifactId>\n      <version>${webjars-font-awesome.version}</version>\n    </dependency>\n\n    <dependency>\n      <groupId>org.springframework.boot</groupId>\n      
<artifactId>spring-boot-devtools</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework.boot</groupId>\n      <artifactId>spring-boot-testcontainers</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework.boot</groupId>\n      <artifactId>spring-boot-docker-compose</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.testcontainers</groupId>\n      <artifactId>junit-jupiter</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.testcontainers</groupId>\n      <artifactId>mysql</artifactId>\n      <scope>test</scope>\n    </dependency>\n\n    <dependency>\n      <groupId>jakarta.xml.bind</groupId>\n      <artifactId>jakarta.xml.bind-api</artifactId>\n    </dependency>\n\n  </dependencies>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-enforcer-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>enforce-java</id>\n            <goals>\n              <goal>enforce</goal>\n            </goals>\n            <configuration>\n              <rules>\n                <requireJavaVersion>\n                  <message>This build requires at least Java ${java.version},\n                    update your JVM, and\n                    run the build again</message>\n                  <version>${java.version}</version>\n                </requireJavaVersion>\n              </rules>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>io.spring.javaformat</groupId>\n        <artifactId>spring-javaformat-maven-plugin</artifactId>\n        <version>${spring-format.version}</version>\n        <executions>\n          <execution>\n            <goals>\n              <goal>validate</goal>\n            </goals>\n        
    <phase>validate</phase>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-checkstyle-plugin</artifactId>\n        <version>${maven-checkstyle.version}</version>\n        <dependencies>\n          <dependency>\n            <groupId>com.puppycrawl.tools</groupId>\n            <artifactId>checkstyle</artifactId>\n            <version>${checkstyle.version}</version>\n          </dependency>\n          <dependency>\n            <groupId>io.spring.nohttp</groupId>\n            <artifactId>nohttp-checkstyle</artifactId>\n            <version>${nohttp-checkstyle.version}</version>\n          </dependency>\n        </dependencies>\n        <!-- <executions>\n          <execution>\n            <id>nohttp-checkstyle-validation</id>\n            <goals>\n              <goal>check</goal>\n            </goals>\n            <phase>validate</phase>\n            <configuration>\n              <configLocation>src/checkstyle/nohttp-checkstyle.xml</configLocation>\n              <sourceDirectories>${basedir}</sourceDirectories>\n              <includes>**/*</includes>\n              <excludes>**/.git/**/*,**/.idea/**/*,**/target/**/,**/.flattened-pom.xml,**/*.class</excludes>\n              <propertyExpansion>config_loc=${basedir}/src/checkstyle/</propertyExpansion>\n            </configuration>\n          </execution>\n        </executions> -->\n      </plugin>\n      <plugin>\n        <groupId>org.graalvm.buildtools</groupId>\n        <artifactId>native-maven-plugin</artifactId>\n      </plugin>\n      <plugin>\n        <groupId>org.springframework.boot</groupId>\n        <artifactId>spring-boot-maven-plugin</artifactId>\n        <executions>\n          <execution>\n            <!-- Spring Boot Actuator displays build-related information\n              if a META-INF/build-info.properties file is present -->\n            <goals>\n              <goal>build-info</goal>\n    
        </goals>\n            <configuration>\n              <additionalProperties>\n                <encoding.source>${project.build.sourceEncoding}</encoding.source>\n                <encoding.reporting>${project.reporting.outputEncoding}</encoding.reporting>\n                <java.source>${java.version}</java.source>\n                <java.target>${java.version}</java.target>\n              </additionalProperties>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>org.jacoco</groupId>\n        <artifactId>jacoco-maven-plugin</artifactId>\n        <version>${jacoco.version}</version>\n        <executions>\n          <execution>\n            <goals>\n              <goal>prepare-agent</goal>\n            </goals>\n          </execution>\n          <execution>\n            <id>report</id>\n            <goals>\n              <goal>report</goal>\n            </goals>\n            <phase>prepare-package</phase>\n          </execution>\n        </executions>\n      </plugin>\n\n      <!-- Spring Boot Actuator displays build-related information if a git.properties file is\n      present at the classpath -->\n      <plugin>\n        <groupId>io.github.git-commit-id</groupId>\n        <artifactId>git-commit-id-maven-plugin</artifactId>\n        <configuration>\n          <failOnNoGitDirectory>false</failOnNoGitDirectory>\n          <failOnUnableToExtractRepoInfo>false</failOnUnableToExtractRepoInfo>\n        </configuration>\n      </plugin>\n      <!-- Spring Boot Actuator displays sbom-related information if a CycloneDX SBOM file is\n      present at the classpath -->\n      <plugin>\n        <?m2e ignore?>\n        <groupId>org.cyclonedx</groupId>\n        <artifactId>cyclonedx-maven-plugin</artifactId>\n      </plugin>\n      <plugin>\n        <groupId>org.codehaus.mojo</groupId>\n        <artifactId>build-helper-maven-plugin</artifactId>\n        <version>3.2.0</version>\n      </plugin>\n    
  <plugin>\n        <groupId>org.codehaus.mojo</groupId>\n        <artifactId>versions-maven-plugin</artifactId>\n        <version>2.8.1</version>\n      </plugin>\n    </plugins>\n  </build>\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0</url>\n    </license>\n  </licenses>\n\n  <repositories>\n    <repository>\n      <snapshots>\n        <enabled>true</enabled>\n      </snapshots>\n      <id>spring-snapshots</id>\n      <name>Spring Snapshots</name>\n      <url>https://repo.spring.io/snapshot</url>\n    </repository>\n    <repository>\n      <snapshots>\n        <enabled>false</enabled>\n      </snapshots>\n      <id>spring-milestones</id>\n      <name>Spring Milestones</name>\n      <url>https://repo.spring.io/milestone</url>\n    </repository>\n  </repositories>\n  <pluginRepositories>\n    <pluginRepository>\n      <snapshots>\n        <enabled>true</enabled>\n      </snapshots>\n      <id>spring-snapshots</id>\n      <name>Spring Snapshots</name>\n      <url>https://repo.spring.io/snapshot</url>\n    </pluginRepository>\n    <pluginRepository>\n      <snapshots>\n        <enabled>false</enabled>\n      </snapshots>\n      <id>spring-milestones</id>\n      <name>Spring Milestones</name>\n      <url>https://repo.spring.io/milestone</url>\n    </pluginRepository>\n  </pluginRepositories>\n\n  <profiles>\n    <profile>\n      <id>css</id>\n      <build>\n        <plugins>\n          <plugin>\n            <groupId>org.apache.maven.plugins</groupId>\n            <artifactId>maven-dependency-plugin</artifactId>\n            <executions>\n              <execution>\n                <id>unpack</id>\n                <goals>\n                  <goal>unpack</goal>\n                </goals>\n                <?m2e execute onConfiguration,onIncremental?>\n                <phase>generate-resources</phase>\n                <configuration>\n                  <artifactItems>\n                   
 <artifactItem>\n                      <groupId>org.webjars.npm</groupId>\n                      <artifactId>bootstrap</artifactId>\n                      <version>${webjars-bootstrap.version}</version>\n                    </artifactItem>\n                  </artifactItems>\n                  <outputDirectory>${project.build.directory}/webjars</outputDirectory>\n                </configuration>\n              </execution>\n            </executions>\n          </plugin>\n\n          <plugin>\n            <groupId>com.gitlab.haynes</groupId>\n            <artifactId>libsass-maven-plugin</artifactId>\n            <version>${libsass.version}</version>\n            <configuration>\n              <inputPath>${basedir}/src/main/scss/</inputPath>\n              <outputPath>${basedir}/src/main/resources/static/resources/css/</outputPath>\n              <includePath>\n                ${project.build.directory}/webjars/META-INF/resources/webjars/bootstrap/${webjars-bootstrap.version}/scss/</includePath>\n            </configuration>\n            <executions>\n              <execution>\n                <?m2e execute onConfiguration,onIncremental?>\n                <goals>\n                  <goal>compile</goal>\n                </goals>\n                <phase>generate-resources</phase>\n              </execution>\n            </executions>\n          </plugin>\n        </plugins>\n      </build>\n    </profile>\n    <profile>\n      <id>m2e</id>\n      <activation>\n        <property>\n          <name>m2e.version</name>\n        </property>\n      </activation>\n      <build>\n        <pluginManagement>\n          <plugins>\n            <!-- This plugin's configuration is used to store Eclipse m2e settings\n              only. It has no influence on the Maven build itself. 
-->\n            <plugin>\n              <groupId>org.eclipse.m2e</groupId>\n              <artifactId>lifecycle-mapping</artifactId>\n              <version>${lifecycle-mapping}</version>\n              <configuration>\n                <lifecycleMappingMetadata>\n                  <pluginExecutions>\n                    <pluginExecution>\n                      <pluginExecutionFilter>\n                        <groupId>org.apache.maven.plugins</groupId>\n                        <artifactId>maven-checkstyle-plugin</artifactId>\n                        <versionRange>[1,)</versionRange>\n                        <goals>\n                          <goal>check</goal>\n                        </goals>\n                      </pluginExecutionFilter>\n                      <action>\n                        <ignore></ignore>\n                      </action>\n                    </pluginExecution>\n                    <pluginExecution>\n                      <pluginExecutionFilter>\n                        <groupId>org.springframework.boot</groupId>\n                        <artifactId>spring-boot-maven-plugin</artifactId>\n                        <versionRange>[1,)</versionRange>\n                        <goals>\n                          <goal>build-info</goal>\n                        </goals>\n                      </pluginExecutionFilter>\n                      <action>\n                        <ignore></ignore>\n                      </action>\n                    </pluginExecution>\n                    <pluginExecution>\n                      <pluginExecutionFilter>\n                        <groupId>io.spring.javaformat</groupId>\n                        <artifactId>spring-javaformat-maven-plugin</artifactId>\n                        <versionRange>[0,)</versionRange>\n                        <goals>\n                          <goal>validate</goal>\n                        </goals>\n                      </pluginExecutionFilter>\n                      <action>\n          
              <ignore></ignore>\n                      </action>\n                    </pluginExecution>\n                  </pluginExecutions>\n                </lifecycleMappingMetadata>\n              </configuration>\n            </plugin>\n          </plugins>\n        </pluginManagement>\n      </build>\n    </profile>\n  </profiles>\n</project>"
  },
  {
    "path": "Day 30 AzureDevOps-Part-3/README.md",
    "content": "# DevSecOps Pipeline Tutorial\n\n![Day 02 (1)](https://github.com/user-attachments/assets/ae4bd8bb-3988-45c9-887d-cb14531c40e5)\n\n\n### Start the Instances\n1. Start all the instances on **AWS** and **Azure**.\n2. Copy the IP addresses of the instances to **Route53**.\n3. Clone the production code and switch to the development branch.\n\n### Recap of Previous Session\nIn the previous session, we created a Docker image and analyzed it with **Trivy** for any security vulnerabilities.\n\n### Today's Session\nWe will complete the next stages by pushing the Docker image to:\n- **Azure Container Registry (ACR)**\n- **Docker Hub (Private)**\n\n### Steps for Azure Container Registry\n1. Copy the code to production.\n2. Go to **Azure** and create a container registry:\n   - **Container Registry** > **Create**\n   - Resource Group: `devSecOps`\n   - Registry Name: `devsecopsacr`\n   - Region: `East US`\n   - Click **Review and Create**.\n3. After creating the registry:\n   - Go to the resources > **Access Keys**.\n   - Enable the **Admin user** checkbox.\n4. In the pipeline:\n   - Edit the pipeline > **Variables** > **Add**:\n     - Name: `acrpassword`\n     - Value: Copy and paste the password (keep it secret).\n   - Save the changes.\n\n### Steps for Docker Hub\n1. Go to **Project Settings** > **Service Connections** > **New** > **Docker Registry** > **Docker Hub**.\n2. Enter the following details:\n   - Docker ID: `kiran2361993`\n   - Password or Token.\n   - Service Connection Name: `devops-dockerhub-connection`.\n   - Grant access and click **Save**.\n3. Push the changes to Git and monitor the pipeline.\n\n## Step 09: Fixing Errors and Deploying to Azure Container Instance (ACI)\n### Java Version Change\n- Show the error and update the Java version in the Dockerfile from **11** to **17**.\n\n### Deploy to Azure Container Instance\n1. Copy the code.\n2. 
Deploy it to **Azure Container Instance (ACI)**:\n   - ACI automatically creates an instance without manual provisioning.\n\n### Creating Environments on AWS\n1. Create two different Ubuntu servers on **AWS**:\n   - One for **Staging** and another for **Production**.\n2. Deploy two **t2.medium** instances with the following settings:\n   - **Tag**: `Name: Staging` (Rename one instance to `Production` after creation).\n   - **Advanced Details** > **User Data**:\n     ```bash\n     #!/bin/bash\n     apt update\n     apt install -y openjdk-17-jdk\n     ```\n   - Launch the instances.\n3. Once deployed, rename one instance to `Production`.\n\n### Route53 Records\n1. Create two records in **Route53**:\n   - Record Name: `staging` and its IP address.\n   - Record Name: `prod` and its IP address.\n2. Create the records.\n\n### Configuring Environments in the Pipeline\n1. Go to **Pipeline** > **Edit** > **Environment** > **Create**:\n   - **Staging**:\n     - Select **Virtual Machines** > **Linux**.\n     - Log in to the staging EC2 instance and verify the Java version using:\n       ```bash\n       java -version\n       ```\n     - Copy the register script from Azure and run it in the instance.\n   - **Production**:\n     - Select **Virtual Machines** > **Linux**.\n     - Log in to the production EC2 instance and verify the Java version.\n     - Copy the register script from Azure and run it in the instance.\n\n2. Verify the following:\n   - **ACI**, **Docker Hub**, and **ACR** for images.\n   - Go to **Azure** > **Container Instances** > Check the **FQDN** and access it on port **8080**.\n\n## Step 10: Adding Deployment Code and Running DAST Testing\n1. Add deployment code and run **ZAP** for security testing.\n2. Go to the pipeline:\n   - **Edit** > **Variables** > Add Docker login variables.\n3. 
Push the changes to Git.\n\n### Handling Pipeline Issues\n- An orange (partially succeeded) status in the pipeline is expected here, not a failure.\n- Explanation: if a running JAR process is found, it is stopped; otherwise, the step simply continues.\n\n### Break Time\nLet the pipeline complete. After the break:\n1. Access the application via the **ACI FQDN** and `http://staging.cloudvishwakarma.in:8080`.\n2. Run **DAST** testing and review the results.\n\n### Fixing Maven Build Configuration\n- Since the pipeline was configured for **Dev**, update it for **Prod** by commenting out the Dev-only condition and enabling:\n  ```yaml\n  condition: or(eq(variables.isProd, true), eq(variables.isDev, true))\n  ```\n- Push the changes to the `production` branch instead of `development`.\n- Monitor the pipeline; most tasks should be skipped.\n\n## Next Session Preview\nIn the next session, we will cover:\n1. **Infrastructure Pipeline** using **Terraform**.\n2. **SAST** directly on the code.\n3. **DAST** after deploying the application.\n\n"
  },
  {
    "path": "Day 30 AzureDevOps-Part-3/azure-pipelines.yml",
    "content": "trigger:\n  - development\n  - uat\n  - production\n\npool:\n  name: ProdAgentPool\n  demands:\n    - JDK -equals 17\n    - Terraform -equals Yes\n    - Agent.Name -equals ADO-Testing_Env\n\nvariables:\n  global_version: \"1.0.0\"\n  global_email: \"mavrick202@gmail.com\"\n  azure_dev_sub: \"9ce91e05-4b9e-4a42-95c1-4385c54920c6\"\n  azure_prod_sub: \"298f2c19-014b-4195-b821-e3d8fc25c2a8\"\n  isDev: $[eq(variables['Build.SourceBranch'], 'refs/heads/development')]\n  isProd: $[eq(variables['Build.SourceBranch'], 'refs/heads/production')]\n\nstages:\n  - stage: CheckingTheAgent\n    condition: and(succeeded(), eq(variables.isDev, true))\n    pool:\n      name: ProdAgentPool\n      demands:\n        - Terraform -equals Yes\n    variables:\n      stage_version: \"2.0.0\"\n      stage_email: \"saikiran.pinapathruni18@gmail.com\"\n    jobs:\n      - job: CheckingTerraformAndPacker\n        variables:\n          job_version: \"3.0.0\"\n          job_email: \"saiaws@gmail.com\"\n        timeoutInMinutes: 5\n        steps:\n          - script: echo $(Build.BuildId)\n            displayName: \"Display The Build-ID\"\n          - script: terraform version && packer version\n            displayName: \"Display Terraform & Packer Version\"\n          - script: docker version && docker ps && docker images && docker ps -a\n            displayName: \"Display Docker Version\"\n          - script: pwd && ls -al\n            displayName: \"List Folder & Files\"\n\n  - stage: SASTWithSonarQube\n    condition: and(succeeded(), eq(variables.isDev, true))\n    pool:\n      name: ProdAgentPool\n      demands:\n        - JDK -equals 17\n    jobs:\n      - job: RunningSASTWithSonarqube\n        timeoutInMinutes: 10\n        steps:\n          #SonarQube User Token need to be generated and used in the ServiceConnection.\n          #Also change name of the project and artifactId(line 6 & 14) to ado-spring-boot-app-dev in POM.\n          #No need to create a project in sonarqube 
as its created automatically.\n          - task: SonarQubePrepare@7\n            inputs:\n              SonarQube: \"SonarTestToken\"\n              scannerMode: \"Other\"\n              #projectKey: 'sqp_63da7bac31bd4496f2ee1170156659ea8c782c28'-NotNeeded\n              #projectName: 'ado-spring-boot-app-dev'-NotNeeded\n              projectVersion: \"$(Build.BuildId)\"\n            displayName: \"Preparing SonarQube Config\"\n          - task: Maven@4\n            inputs:\n              mavenPomFile: \"pom.xml\"\n              publishJUnitResults: false\n              javaHomeOption: \"JDKVersion\"\n              mavenVersionOption: \"Default\"\n              mavenAuthenticateFeed: false\n              effectivePomSkip: false\n              sonarQubeRunAnalysis: true\n              sqMavenPluginVersionChoice: \"latest\"\n              options: \"-DskipTests\"\n            displayName: \"Running SonarQube Maven Analysis\"\n          - task: sonar-buildbreaker@8\n            inputs:\n              SonarQube: \"SonarTestToken\"\n            displayName: \"SAST Job Fail or Pass\"\n  - stage: BuildingJavaCodeWithMavenCopyToJFrog\n    condition: or(eq(variables.isProd, true), eq(variables.isDev, true))\n    # condition: and(succeeded(), eq(variables.isDev, true))\n    #condition: always()\n    pool:\n      name: ProdAgentPool\n      demands:\n        - Terraform -equals Yes\n    jobs:\n      - job: BuildingJavaCodeJob\n        timeoutInMinutes: 5\n        steps:\n          - script: ls -al && pwd && rm -rf /home/adminsai/.m2/settings.xml\n            displayName: \"List Files & Current Working Directory\"\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: \"settings.xml\"\n          - task: CopyFiles@2\n            inputs:\n              SourceFolder: \"$(Agent.TempDirectory)\"\n              Contents: \"**\"\n              TargetFolder: \"/home/adminsai/.m2\"\n          - script: mvn versions:set 
 -DnewVersion=Dev-2.0.$(Build.BuildId)\n            displayName: \"Set Maven Build Version\"\n          - script: mvn clean package install && ls -al\n            displayName: \"Run the maven build and install\"\n          - script: mvn deploy && ls -al\n            displayName: \"Run the maven deploy\"\n            continueOnError: true\n          - script: ls -al && cp /home/adminsai/myagent/_work/1/s/target/ado-spring-boot-app-dev-Dev-2.0.$(Build.BuildId).jar ROOT$(Build.BuildId).jar && ls -al\n            displayName: \"List Files & Rename ROOT.jar\"\n          - script: rm -rf /artifacts/*.jar && cp ROOT$(Build.BuildId).jar /artifacts && ls -al /artifacts\n            displayName: \"Copy Artifact To Folder\"\n          - task: CopyFiles@2\n            inputs:\n              Contents: \"ROOT$(Build.BuildId).jar\"\n              TargetFolder: \"$(Build.ArtifactStagingDirectory)\"\n              OverWrite: true\n            displayName: \"Copying JAR file to ArtifactStagingDirectory\"\n          - task: PublishBuildArtifacts@1\n            inputs:\n              PathtoPublish: \"$(Build.ArtifactStagingDirectory)\"\n              ArtifactName: \"ROOT$(Build.BuildId).jar\"\n              publishLocation: \"Container\"\n            displayName: \"Publishing JAR Artifact.\"\n  - stage: CopyingArtifactsToAzureAndAws\n    condition: and(succeeded(), eq(variables.isDev, true))\n    jobs:\n      - job: CopyFilesToAzureBlob\n        timeoutInMinutes: 5\n        steps:\n          - checkout: none\n          - script: |\n              echo \"Debugging STORAGE_ACCOUNT_KEY...\"\n              echo \"Key length: ${#STORAGE_ACCOUNT_KEY}\"\n              echo \"Key value (partial): ${STORAGE_ACCOUNT_KEY:0:5}*****\"\n            displayName: \"Debug STORAGE_ACCOUNT_KEY\"\n\n          - task: AzureCLI@2\n            inputs:\n              azureSubscription: \"saikiransecops-subscription\"\n              scriptType: \"bash\"\n              scriptLocation: \"inlineScript\"\n           
   inlineScript: |\n                az storage blob upload-batch --account-name saikiransecops \\\n                  --account-key $(STORAGE_ACCOUNT_KEY) \\\n                  --destination artifacts --source /artifacts/\n            displayName: \"Azure Upload artifacts to Azure Blob\"\n            continueOnError: true\n\n          # Fallback hardcoded key for testing purposes\n          - task: AzureCLI@2\n            condition: failed()\n            inputs:\n              azureSubscription: \"saikiransecops-subscription\"\n              scriptType: \"bash\"\n              scriptLocation: \"inlineScript\"\n              inlineScript: |\n                echo \"Using hardcoded key for testing...\"\n                az storage blob upload-batch --account-name saikiransecops \\\n                  --account-key \"yDO5lCm7ud6VRLjHkjikceT3ysgEYeDUn5SRC8jIU3PcNe/ZIocl+90BfRAUl3QkF6CLfARX8IRA+AStA/NlOA==\" \\\n                  --destination artifacts --source /artifacts/\n            displayName: \"Azure Upload artifacts with hardcoded key\"\n            continueOnError: true\n      - job: CopyFilesToAWSS3Bucket\n        dependsOn: CopyFilesToAzureBlob\n        condition: always() # succeededOrFailed() or always() or failed() or succeeded()-default\n        timeoutInMinutes: 5\n        steps:\n          - checkout: none\n          - task: S3Upload@1\n            inputs:\n              awsCredentials: \"saikiransecops-s3\"\n              regionName: \"us-east-1\"\n              bucketName: \"saikiransecopss3uploadartifacts\"\n              sourceFolder: \"/artifacts/\"\n              globExpressions: \"ROOT$(Build.BuildId).jar\"\n            displayName: \"AWS Upload artifacts to AWS S3 Bucket\"\n            continueOnError: true\n  - stage: DockerBuildAndTrivyScan\n    condition: and(succeeded(), eq(variables.isDev, true))\n    pool:\n      name: ProdAgentPool\n    jobs:\n      - job: BuildingContainerImageAndSecurityScanning\n        timeoutInMinutes: 10\n        
steps:\n          - checkout: none\n          - script: docker build -t kiran2361993/myapp:$(Build.BuildId) .\n            displayName: \"Create Docker Image\"\n          #- script: trivy image --severity HIGH,CRITICAL --format template --template \"@template/junit.tpl\" -o junit-report-high-crit.xml kiran2361993/myapp:$(Build.BuildId)\n          - script: |\n              trivy image --exit-code 0 --severity LOW,MEDIUM --format template --template \"@template/junit.tpl\" -o junit-report-low-med.xml kiran2361993/myapp:$(Build.BuildId)\n              trivy image --exit-code 0 --severity HIGH,CRITICAL --format template --template \"@template/junit.tpl\" -o junit-report-high-crit.xml kiran2361993/myapp:$(Build.BuildId)\n            displayName: \"Scan Image and Create Report\"\n          - task: PublishTestResults@2\n            inputs:\n              testResultsFormat: \"JUnit\"\n              testResultsFiles: \"**/junit-report-low-med.xml\"\n              mergeTestResults: true\n              failTaskOnFailedTests: false\n              testRunTitle: \"Trivy - Low and Medium Vulnerabilities\"\n            displayName: \"Trivy - Low and Medium Vulnerabilities\"\n            condition: \"always()\"\n          - task: PublishTestResults@2\n            inputs:\n              testResultsFormat: \"JUnit\"\n              testResultsFiles: \"**/junit-report-high-crit.xml\"\n              mergeTestResults: true\n              failTaskOnFailedTests: false\n              testRunTitle: \"Trivy - High and Critical Vulnerabilities\"\n            displayName: \"Trivy - High and Critical Vulnerabilities\"\n            condition: \"always()\"\n  - stage: BuildDockerImagePushToAzureACRAndDockerHub\n    condition: and(succeeded(), eq(variables.isDev, true))\n    jobs:\n      - job: PushToAzureACR\n        #dependsOn: DockerBuildAndTrivyScan\n        condition: always() # succeededOrFailed() or always() or failed()\n        timeoutInMinutes: 5\n        steps:\n          - checkout: 
none\n          - task: Bash@3\n            inputs:\n              targetType: \"inline\"\n              script: |\n                docker login -u devsecopsacrtest -p $(acrpassword) devsecopsacrtest.azurecr.io\n                docker tag kiran2361993/myapp:$(Build.BuildId) devsecopsacrtest.azurecr.io/devsecopsacrtest:$(Build.BuildId)\n                docker push devsecopsacrtest.azurecr.io/devsecopsacrtest:$(Build.BuildId)\n            displayName: \"Creating & Pushing Docker Image To Azure ACR\"\n      # - job: PushToDockerHub\n      #   dependsOn: PushToAzureACR\n      #   condition: always() # succeededOrFailed() or always() or failed()\n      #   timeoutInMinutes: 5\n      #   steps:\n      #     - checkout: none\n      #     - task: Docker@2\n      #       inputs:\n      #         containerRegistry: \"devops-dockerhub-connection\"\n      #         command: \"login\"\n      #       displayName: \"Login To Docker Hub\"\n      #     - task: Bash@3\n      #       inputs:\n      #         targetType: \"inline\"\n      #         script: |\n      #           docker tag kiran2361993/myapp:$(Build.BuildId) kiran2361993/devsecopsado:$(Build.BuildId)\n      #           docker push kiran2361993/devsecopsado:$(Build.BuildId)\n      #       displayName: \"Pushing Docker Image To Docker Hub\"\n  - stage: DeployDockerImageToAzureACI\n    condition: and(succeeded(), eq(variables.isDev, true))\n    pool:\n      name: ProdAgentPool\n      demands:\n        - JDK -equals 17\n    jobs:\n      - job: DeployAzureACI\n        timeoutInMinutes: 10\n        steps:\n          - checkout: none\n          - task: AzureCLI@2\n            inputs:\n              azureSubscription: \"saikiransecops-subscription\"\n              scriptType: \"bash\"\n              scriptLocation: \"inlineScript\"\n              inlineScript: \"az container create -g Prod-ADO-1 --name devsecopsado$(Build.BuildId) --image devsecopsacrtest.azurecr.io/devsecopsacrtest:$(Build.BuildId) --cpu 2 --memory 4 --ports 
8080 --dns-name-label devsecopsado$(Build.BuildId) --registry-username devsecopsacrtest --registry-password $(acrpassword) --location eastus --os-type Linux\"\n              #inlineScript: az group list\n            displayName: \"Deploy Docker Image to Azure Container Instances\"\n            continueOnError: true\n  - stage: \"DeployingToStagingEnvironment\"\n    dependsOn: BuildingJavaCodeWithMavenCopyToJFrog\n    condition: and(succeeded(), eq(variables.isDev, true))\n    pool:\n      name: ProdAgentPool\n    displayName: \"Deploying To AWS Staging Environment\"\n    jobs:\n      - deployment: \"DeployJARtoStagingServer\"\n        environment:\n          name: STAGING\n          resourceType: VirtualMachine\n        strategy:\n          runOnce:\n            deploy:\n              steps:\n                - script: |\n                    PROC=$(ps -ef | grep -i jar | grep -v grep | awk '{print $2}')\n                    if [ -n \"$PROC\" ]; then\n                      echo \"Stopping process with PID: $PROC\"\n                      sudo kill -9 $PROC || echo \"Failed to stop process.\"\n                    else\n                      echo \"No JAR process found. 
Nothing to stop.\"\n                    fi\n                    exit 0  # Force success status\n                  displayName: \"Stop Existing JAR File\"\n\n                - script: |\n                    sudo java -jar /home/ubuntu/azagent/_work/1/ROOT$(Build.BuildId).jar/ROOT$(Build.BuildId).jar &\n                    echo \"Application started successfully.\"\n                    exit 0  # Force success status\n                  displayName: \"Running The Jar File\"\n\n  - stage: ZAPOWASPTestingStagingEnvironment\n    condition: and(succeeded(), eq(variables.isDev, true))\n    jobs:\n      - job: ZapTestingStaging\n        timeoutInMinutes: 20\n        steps:\n          - checkout: none\n\n          # Pull the OWASP ZAP image and run the baseline scan\n          - script: |\n              docker pull ghcr.io/zaproxy/zaproxy:stable\n              docker run -u 0 -v $(Pipeline.Workspace)/owaspzap:/zap/wrk/:rw ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t http://staging.cloudvishwakarma.in:8080/ -J report.json -r report.html -I -i\n            displayName: \"DAST Staging Environment\"\n            continueOnError: true\n\n          # Publish the ZAP test results\n          - task: PublishTestResults@2\n            displayName: \"Publish Test Results For ZAP Testing\"\n            inputs:\n              testResultsFormat: \"NUnit\"\n              testResultsFiles: \"$(Pipeline.Workspace)/owaspzap/report.html\"\n  - stage: \"DeployingToProdEnvironment\"\n    dependsOn: BuildingJavaCodeWithMavenCopyToJFrog\n    condition: and(succeeded('BuildingJavaCodeWithMavenCopyToJFrog'), eq(variables.isProd, true))\n    pool:\n      name: ProdAgentPool\n    displayName: \"Deploying To AWS Prod Environment\"\n    jobs:\n      - deployment: \"DeployJARtoProdServer\"\n        environment:\n          name: PROD\n          resourceType: VirtualMachine\n        strategy:\n          runOnce:\n            deploy:\n              steps:\n                - script: |\n                   
 PROC=$(ps -ef | grep -i jar | grep -v grep | awk '{print $2}')\n                    if [ -n \"$PROC\" ]; then\n                      echo \"Stopping process with PID: $PROC\"\n                      sudo kill -9 $PROC || echo \"Failed to stop process.\"\n                    else\n                      echo \"No JAR process found. Nothing to stop.\"\n                    fi\n                  displayName: \"Stop Existing JAR File\"\n                  continueOnError: true\n\n                - script: |\n                    sudo java -jar /home/ubuntu/azagent/_work/1/ROOT$(Build.BuildId).jar/ROOT$(Build.BuildId).jar > /dev/null 2>&1 &\n                    echo \"Application started successfully.\"\n                  displayName: \"Running The Jar File\"\n                  continueOnError: true\n"
  },
  {
    "path": "Day 30 AzureDevOps-Part-3/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\r\n  xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\r\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\">\r\n  <modelVersion>4.0.0</modelVersion>\r\n\r\n  <parent>\r\n    <groupId>org.springframework.boot</groupId>\r\n    <artifactId>spring-boot-starter-parent</artifactId>\r\n    <version>3.4.0</version>\r\n    <relativePath></relativePath>\r\n  </parent>\r\n\r\n  <groupId>org.springframework.samples</groupId>\r\n  <artifactId>ado-spring-boot-app-dev</artifactId>\r\n  <version>3.4.0-SNAPSHOT</version>\r\n\r\n  <name>ado-spring-boot-app-dev</name>\r\n\r\n  <properties>\r\n    <!-- <sonar.branch.name>${BRANCH_NAME}</sonar.branch.name> -->\r\n    <java.version>17</java.version>\r\n    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\r\n    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>\r\n    <!-- Important for reproducible builds. Update using e.g. ./mvnw versions:set\r\n        -DnewVersion=... 
-->\r\n    <project.build.outputTimestamp>2024-11-28T14:37:52Z</project.build.outputTimestamp>\r\n\r\n    <!-- Web dependencies -->\r\n    <webjars-locator.version>1.0.1</webjars-locator.version>\r\n    <webjars-bootstrap.version>5.3.3</webjars-bootstrap.version>\r\n    <webjars-font-awesome.version>4.7.0</webjars-font-awesome.version>\r\n\r\n    <checkstyle.version>10.20.1</checkstyle.version>\r\n    <jacoco.version>0.8.12</jacoco.version>\r\n    <libsass.version>0.2.29</libsass.version>\r\n    <lifecycle-mapping>1.0.0</lifecycle-mapping>\r\n    <maven-checkstyle.version>3.6.0</maven-checkstyle.version>\r\n    <nohttp-checkstyle.version>0.0.11</nohttp-checkstyle.version>\r\n    <spring-format.version>0.0.43</spring-format.version>\r\n\r\n  </properties>\r\n\r\n  <distributionManagement>\r\n    <repository>\r\n      <id>central</id>\r\n      <name>libs-release</name>\r\n      <url>http://jfrog.cloudvishwakarma.in:8082/artifactory/libs-release-local</url>\r\n    </repository>\r\n\r\n    <snapshotRepository>\r\n      <id>snapshots</id>\r\n      <name>libs-snapshot</name>\r\n      <url>http://jfrog.cloudvishwakarma.in:8082/artifactory/libs-snapshot-local</url>\r\n    </snapshotRepository>\r\n  </distributionManagement>\r\n\r\n  <dependencies>\r\n    <!-- Spring and Spring Boot dependencies -->\r\n    <dependency>\r\n      <groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-starter-actuator</artifactId>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-starter-cache</artifactId>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-starter-data-jpa</artifactId>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-starter-web</artifactId>\r\n    </dependency>\r\n    <dependency>\r\n      
<groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-starter-validation</artifactId>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-starter-thymeleaf</artifactId>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-starter-test</artifactId>\r\n      <scope>test</scope>\r\n    </dependency>\r\n    <dependency>\r\n      <!-- Workaround for AOT issue (https://github.com/spring-projects/spring-framework/pull/33949) -->\r\n      <groupId>io.projectreactor</groupId>\r\n      <artifactId>reactor-core</artifactId>\r\n    </dependency>\r\n\r\n    <!-- Databases - Uses H2 by default -->\r\n    <dependency>\r\n      <groupId>com.h2database</groupId>\r\n      <artifactId>h2</artifactId>\r\n      <scope>runtime</scope>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>com.mysql</groupId>\r\n      <artifactId>mysql-connector-j</artifactId>\r\n      <scope>runtime</scope>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.postgresql</groupId>\r\n      <artifactId>postgresql</artifactId>\r\n      <scope>runtime</scope>\r\n    </dependency>\r\n\r\n    <!-- Caching -->\r\n    <dependency>\r\n      <groupId>javax.cache</groupId>\r\n      <artifactId>cache-api</artifactId>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>com.github.ben-manes.caffeine</groupId>\r\n      <artifactId>caffeine</artifactId>\r\n    </dependency>\r\n\r\n    <!-- Webjars -->\r\n    <dependency>\r\n      <groupId>org.webjars</groupId>\r\n      <artifactId>webjars-locator-lite</artifactId>\r\n      <version>${webjars-locator.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.webjars.npm</groupId>\r\n      <artifactId>bootstrap</artifactId>\r\n      <version>${webjars-bootstrap.version}</version>\r\n    </dependency>\r\n    <dependency>\r\n      
<groupId>org.webjars.npm</groupId>\r\n      <artifactId>font-awesome</artifactId>\r\n      <version>${webjars-font-awesome.version}</version>\r\n    </dependency>\r\n\r\n    <dependency>\r\n      <groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-devtools</artifactId>\r\n      <scope>test</scope>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-testcontainers</artifactId>\r\n      <scope>test</scope>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.springframework.boot</groupId>\r\n      <artifactId>spring-boot-docker-compose</artifactId>\r\n      <scope>test</scope>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.testcontainers</groupId>\r\n      <artifactId>junit-jupiter</artifactId>\r\n      <scope>test</scope>\r\n    </dependency>\r\n    <dependency>\r\n      <groupId>org.testcontainers</groupId>\r\n      <artifactId>mysql</artifactId>\r\n      <scope>test</scope>\r\n    </dependency>\r\n\r\n    <dependency>\r\n      <groupId>jakarta.xml.bind</groupId>\r\n      <artifactId>jakarta.xml.bind-api</artifactId>\r\n    </dependency>\r\n\r\n  </dependencies>\r\n\r\n  <build>\r\n    <plugins>\r\n      <plugin>\r\n        <groupId>org.apache.maven.plugins</groupId>\r\n        <artifactId>maven-enforcer-plugin</artifactId>\r\n        <executions>\r\n          <execution>\r\n            <id>enforce-java</id>\r\n            <goals>\r\n              <goal>enforce</goal>\r\n            </goals>\r\n            <configuration>\r\n              <rules>\r\n                <requireJavaVersion>\r\n                  <message>This build requires at least Java ${java.version},\r\n                    update your JVM, and\r\n                    run the build again</message>\r\n                  <version>${java.version}</version>\r\n                </requireJavaVersion>\r\n              </rules>\r\n            </configuration>\r\n          </execution>\r\n 
       </executions>\r\n      </plugin>\r\n      <plugin>\r\n        <groupId>io.spring.javaformat</groupId>\r\n        <artifactId>spring-javaformat-maven-plugin</artifactId>\r\n        <version>${spring-format.version}</version>\r\n        <executions>\r\n          <execution>\r\n            <goals>\r\n              <goal>validate</goal>\r\n            </goals>\r\n            <phase>validate</phase>\r\n          </execution>\r\n        </executions>\r\n      </plugin>\r\n      <plugin>\r\n        <groupId>org.apache.maven.plugins</groupId>\r\n        <artifactId>maven-checkstyle-plugin</artifactId>\r\n        <version>${maven-checkstyle.version}</version>\r\n        <dependencies>\r\n          <dependency>\r\n            <groupId>com.puppycrawl.tools</groupId>\r\n            <artifactId>checkstyle</artifactId>\r\n            <version>${checkstyle.version}</version>\r\n          </dependency>\r\n          <dependency>\r\n            <groupId>io.spring.nohttp</groupId>\r\n            <artifactId>nohttp-checkstyle</artifactId>\r\n            <version>${nohttp-checkstyle.version}</version>\r\n          </dependency>\r\n        </dependencies>\r\n        <!-- <executions>\r\n          <execution>\r\n            <id>nohttp-checkstyle-validation</id>\r\n            <goals>\r\n              <goal>check</goal>\r\n            </goals>\r\n            <phase>validate</phase>\r\n            <configuration>\r\n              <configLocation>src/checkstyle/nohttp-checkstyle.xml</configLocation>\r\n              <sourceDirectories>${basedir}</sourceDirectories>\r\n              <includes>**/*</includes>\r\n              <excludes>**/.git/**/*,**/.idea/**/*,**/target/**/,**/.flattened-pom.xml,**/*.class</excludes>\r\n              <propertyExpansion>config_loc=${basedir}/src/checkstyle/</propertyExpansion>\r\n            </configuration>\r\n          </execution>\r\n        </executions> -->\r\n      </plugin>\r\n      <plugin>\r\n        
<groupId>org.graalvm.buildtools</groupId>\r\n        <artifactId>native-maven-plugin</artifactId>\r\n      </plugin>\r\n      <plugin>\r\n        <groupId>org.springframework.boot</groupId>\r\n        <artifactId>spring-boot-maven-plugin</artifactId>\r\n        <executions>\r\n          <execution>\r\n            <!-- Spring Boot Actuator displays build-related information\r\n              if a META-INF/build-info.properties file is present -->\r\n            <goals>\r\n              <goal>build-info</goal>\r\n            </goals>\r\n            <configuration>\r\n              <additionalProperties>\r\n                <encoding.source>${project.build.sourceEncoding}</encoding.source>\r\n                <encoding.reporting>${project.reporting.outputEncoding}</encoding.reporting>\r\n                <java.source>${java.version}</java.source>\r\n                <java.target>${java.version}</java.target>\r\n              </additionalProperties>\r\n            </configuration>\r\n          </execution>\r\n        </executions>\r\n      </plugin>\r\n      <plugin>\r\n        <groupId>org.jacoco</groupId>\r\n        <artifactId>jacoco-maven-plugin</artifactId>\r\n        <version>${jacoco.version}</version>\r\n        <executions>\r\n          <execution>\r\n            <goals>\r\n              <goal>prepare-agent</goal>\r\n            </goals>\r\n          </execution>\r\n          <execution>\r\n            <id>report</id>\r\n            <goals>\r\n              <goal>report</goal>\r\n            </goals>\r\n            <phase>prepare-package</phase>\r\n          </execution>\r\n        </executions>\r\n      </plugin>\r\n\r\n      <!-- Spring Boot Actuator displays build-related information if a git.properties file is\r\n      present at the classpath -->\r\n      <plugin>\r\n        <groupId>io.github.git-commit-id</groupId>\r\n        <artifactId>git-commit-id-maven-plugin</artifactId>\r\n        <configuration>\r\n          
<failOnNoGitDirectory>false</failOnNoGitDirectory>\r\n          <failOnUnableToExtractRepoInfo>false</failOnUnableToExtractRepoInfo>\r\n        </configuration>\r\n      </plugin>\r\n      <!-- Spring Boot Actuator displays sbom-related information if a CycloneDX SBOM file is\r\n      present at the classpath -->\r\n      <plugin>\r\n        <?m2e ignore?>\r\n        <groupId>org.cyclonedx</groupId>\r\n        <artifactId>cyclonedx-maven-plugin</artifactId>\r\n      </plugin>\r\n      <plugin>\r\n        <groupId>org.codehaus.mojo</groupId>\r\n        <artifactId>build-helper-maven-plugin</artifactId>\r\n        <version>3.2.0</version>\r\n      </plugin>\r\n      <plugin>\r\n        <groupId>org.codehaus.mojo</groupId>\r\n        <artifactId>versions-maven-plugin</artifactId>\r\n        <version>2.8.1</version>\r\n      </plugin>\r\n    </plugins>\r\n  </build>\r\n  <licenses>\r\n    <license>\r\n      <name>Apache License, Version 2.0</name>\r\n      <url>https://www.apache.org/licenses/LICENSE-2.0</url>\r\n    </license>\r\n  </licenses>\r\n\r\n  <repositories>\r\n    <repository>\r\n      <snapshots>\r\n        <enabled>true</enabled>\r\n      </snapshots>\r\n      <id>spring-snapshots</id>\r\n      <name>Spring Snapshots</name>\r\n      <url>https://repo.spring.io/snapshot</url>\r\n    </repository>\r\n    <repository>\r\n      <snapshots>\r\n        <enabled>false</enabled>\r\n      </snapshots>\r\n      <id>spring-milestones</id>\r\n      <name>Spring Milestones</name>\r\n      <url>https://repo.spring.io/milestone</url>\r\n    </repository>\r\n  </repositories>\r\n  <pluginRepositories>\r\n    <pluginRepository>\r\n      <snapshots>\r\n        <enabled>true</enabled>\r\n      </snapshots>\r\n      <id>spring-snapshots</id>\r\n      <name>Spring Snapshots</name>\r\n      <url>https://repo.spring.io/snapshot</url>\r\n    </pluginRepository>\r\n    <pluginRepository>\r\n      <snapshots>\r\n        <enabled>false</enabled>\r\n      </snapshots>\r\n      
<id>spring-milestones</id>\r\n      <name>Spring Milestones</name>\r\n      <url>https://repo.spring.io/milestone</url>\r\n    </pluginRepository>\r\n  </pluginRepositories>\r\n\r\n  <profiles>\r\n    <profile>\r\n      <id>css</id>\r\n      <build>\r\n        <plugins>\r\n          <plugin>\r\n            <groupId>org.apache.maven.plugins</groupId>\r\n            <artifactId>maven-dependency-plugin</artifactId>\r\n            <executions>\r\n              <execution>\r\n                <id>unpack</id>\r\n                <goals>\r\n                  <goal>unpack</goal>\r\n                </goals>\r\n                <?m2e execute onConfiguration,onIncremental?>\r\n                <phase>generate-resources</phase>\r\n                <configuration>\r\n                  <artifactItems>\r\n                    <artifactItem>\r\n                      <groupId>org.webjars.npm</groupId>\r\n                      <artifactId>bootstrap</artifactId>\r\n                      <version>${webjars-bootstrap.version}</version>\r\n                    </artifactItem>\r\n                  </artifactItems>\r\n                  <outputDirectory>${project.build.directory}/webjars</outputDirectory>\r\n                </configuration>\r\n              </execution>\r\n            </executions>\r\n          </plugin>\r\n\r\n          <plugin>\r\n            <groupId>com.gitlab.haynes</groupId>\r\n            <artifactId>libsass-maven-plugin</artifactId>\r\n            <version>${libsass.version}</version>\r\n            <configuration>\r\n              <inputPath>${basedir}/src/main/scss/</inputPath>\r\n              <outputPath>${basedir}/src/main/resources/static/resources/css/</outputPath>\r\n              <includePath>\r\n                ${project.build.directory}/webjars/META-INF/resources/webjars/bootstrap/${webjars-bootstrap.version}/scss/</includePath>\r\n            </configuration>\r\n            <executions>\r\n              <execution>\r\n                <?m2e execute 
onConfiguration,onIncremental?>\r\n                <goals>\r\n                  <goal>compile</goal>\r\n                </goals>\r\n                <phase>generate-resources</phase>\r\n              </execution>\r\n            </executions>\r\n          </plugin>\r\n        </plugins>\r\n      </build>\r\n    </profile>\r\n    <profile>\r\n      <id>m2e</id>\r\n      <activation>\r\n        <property>\r\n          <name>m2e.version</name>\r\n        </property>\r\n      </activation>\r\n      <build>\r\n        <pluginManagement>\r\n          <plugins>\r\n            <!-- This plugin's configuration is used to store Eclipse m2e settings\r\n              only. It has no influence on the Maven build itself. -->\r\n            <plugin>\r\n              <groupId>org.eclipse.m2e</groupId>\r\n              <artifactId>lifecycle-mapping</artifactId>\r\n              <version>${lifecycle-mapping}</version>\r\n              <configuration>\r\n                <lifecycleMappingMetadata>\r\n                  <pluginExecutions>\r\n                    <pluginExecution>\r\n                      <pluginExecutionFilter>\r\n                        <groupId>org.apache.maven.plugins</groupId>\r\n                        <artifactId>maven-checkstyle-plugin</artifactId>\r\n                        <versionRange>[1,)</versionRange>\r\n                        <goals>\r\n                          <goal>check</goal>\r\n                        </goals>\r\n                      </pluginExecutionFilter>\r\n                      <action>\r\n                        <ignore></ignore>\r\n                      </action>\r\n                    </pluginExecution>\r\n                    <pluginExecution>\r\n                      <pluginExecutionFilter>\r\n                        <groupId>org.springframework.boot</groupId>\r\n                        <artifactId>spring-boot-maven-plugin</artifactId>\r\n                        <versionRange>[1,)</versionRange>\r\n                        <goals>\r\n         
                 <goal>build-info</goal>\r\n                        </goals>\r\n                      </pluginExecutionFilter>\r\n                      <action>\r\n                        <ignore></ignore>\r\n                      </action>\r\n                    </pluginExecution>\r\n                    <pluginExecution>\r\n                      <pluginExecutionFilter>\r\n                        <groupId>io.spring.javaformat</groupId>\r\n                        <artifactId>spring-javaformat-maven-plugin</artifactId>\r\n                        <versionRange>[0,)</versionRange>\r\n                        <goals>\r\n                          <goal>validate</goal>\r\n                        </goals>\r\n                      </pluginExecutionFilter>\r\n                      <action>\r\n                        <ignore></ignore>\r\n                      </action>\r\n                    </pluginExecution>\r\n                  </pluginExecutions>\r\n                </lifecycleMappingMetadata>\r\n              </configuration>\r\n            </plugin>\r\n          </plugins>\r\n        </pluginManagement>\r\n      </build>\r\n    </profile>\r\n  </profiles>\r\n</project>"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/.gitignore",
    "content": "access.auto.tfvars\nbackend.json\npacker-vars.json\nLaptopKey.pem\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/1-main.tf",
    "content": "provider \"aws\" {\n    access_key = \"${var.aws_access_key}\"\n    secret_key = \"${var.aws_secret_key}\"\n    region = \"${var.aws_region}\"\n}\n\nterraform {\n  backend \"s3\" {\n    bucket = \"sais3bucket236\"\n    key    = \"sais3bucket236.tfstate\"\n    region = \"us-east-1\"\n  }\n}\n\nresource \"aws_vpc\" \"default\" {\n    cidr_block = \"${var.vpc_cidr}\"\n    enable_dns_hostnames = true\n    tags = {\n        Name = \"${var.vpc_name}\"\n    }\n}\n\nresource \"aws_internet_gateway\" \"default\" {\n    vpc_id = \"${aws_vpc.default.id}\"\n\ttags = {\n        Name = \"${var.IGW_name}\"\n    }\n}\n\nresource \"aws_subnet\" \"subnet1-public\" {\n    vpc_id = \"${aws_vpc.default.id}\"\n    cidr_block = \"${var.public_subnet1_cidr}\"\n    availability_zone = \"us-east-1a\"\n\n    tags = {\n        Name = \"${var.public_subnet1_name}\"\n    }\n}\n\nresource \"aws_subnet\" \"subnet2-public\" {\n    vpc_id = \"${aws_vpc.default.id}\"\n    cidr_block = \"${var.public_subnet2_cidr}\"\n    availability_zone = \"us-east-1b\"\n\n    tags = {\n        Name = \"${var.public_subnet2_name}\"\n    }\n}\n\nresource \"aws_subnet\" \"subnet3-public\" {\n    vpc_id = \"${aws_vpc.default.id}\"\n    cidr_block = \"${var.public_subnet3_cidr}\"\n    availability_zone = \"us-east-1c\"\n\n    tags = {\n        Name = \"${var.public_subnet3_name}\"\n    }\n\t\n}\n\n\nresource \"aws_route_table\" \"terraform-public\" {\n    vpc_id = \"${aws_vpc.default.id}\"\n\n    route {\n        cidr_block = \"0.0.0.0/0\"\n        gateway_id = \"${aws_internet_gateway.default.id}\"\n    }\n\n    tags = {\n        Name = \"${var.Main_Routing_Table}\"\n    }\n}\n\nresource \"aws_route_table_association\" \"terraform-public\" {\n    subnet_id = \"${aws_subnet.subnet1-public.id}\"\n    route_table_id = \"${aws_route_table.terraform-public.id}\"\n}\n\nresource \"aws_security_group\" \"allow_all\" {\n  name        = \"allow_all\"\n  description = \"Allow all inbound traffic\"\n  vpc_id      
= \"${aws_vpc.default.id}\"\n\n  ingress {\n    from_port   = 0\n    to_port     = 0\n    protocol    = \"-1\"\n    cidr_blocks = [\"0.0.0.0/0\"]\n  }\n\n  egress {\n    from_port       = 0\n    to_port         = 0\n    protocol        = \"-1\"\n    cidr_blocks     = [\"0.0.0.0/0\"]\n    }\n}\n\n\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/2-ec2.tf",
    "content": "data \"aws_ami\" \"my_ami\" {\n     most_recent      = true\n     name_regex       = \"^Saikiran\"\n     owners           = [\"211125710812\"]\n}\n\n\nresource \"aws_instance\" \"web-1\" {\n    count = 3\n    #ami = var.imagename\n    #ami = \"ami-0d857ff0f5fc4e03b\"\n    ami = \"${data.aws_ami.my_ami.id}\"\n    availability_zone = \"us-east-1a\"\n    instance_type = \"t2.small\"\n    key_name = \"SecOps-Key\"\n    subnet_id = \"${aws_subnet.subnet1-public.id}\"\n    vpc_security_group_ids = [\"${aws_security_group.allow_all.id}\"]\n    associate_public_ip_address = true\t\n    tags = {\n        Name = \"Web-Server-0${count.index+1}\"\n        Env = \"Prod\"\n        Owner = \"saikiran\"\n\t    CostCenter = \"ABCD\"\n    }\n}\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/3-alb.tf",
    "content": "resource \"aws_lb\" \"alb\" {\n  name               = \"app-nlb\"\n  internal           = false\n  load_balancer_type = \"application\"\n  security_groups    = [\"${aws_security_group.allow_all.id}\"]\n  subnets            = [aws_subnet.subnet1-public.id,aws_subnet.subnet2-public.id,aws_subnet.subnet3-public.id]\n  enable_deletion_protection = false\n  tags = {\n    Environment = \"Production\"\n  }\n}\n\nresource \"aws_lb_target_group\" \"albtest\" {\n  name     = \"app-tg\"\n  port     = 80\n  protocol = \"HTTP\"\n  vpc_id   = aws_vpc.default.id\n}\n\nresource \"aws_lb_target_group\" \"albtest-flask\" {\n  name     = \"app-tg-flask\"\n  port     = 5000\n  protocol = \"HTTP\"\n  vpc_id   = aws_vpc.default.id\n}\n\n\n\nresource \"aws_lb_target_group_attachment\" \"albtest\" {\n  count = 3\n  target_group_arn = aws_lb_target_group.albtest.arn\n  target_id        = \"${element(aws_instance.web-1.*.id, count.index)}\"\n  port             = 8000\n}\n\nresource \"aws_lb_target_group_attachment\" \"albflask\" {\n  count = 3\n  target_group_arn = aws_lb_target_group.albtest-flask.arn\n  target_id        = \"${element(aws_instance.web-1.*.id, count.index)}\"\n  port             = 5000\n}"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/4-alb-listener.tf",
    "content": "resource \"aws_lb_listener\" \"alb-https\" {\n  load_balancer_arn = aws_lb.alb.arn\n  port              = \"443\"\n  protocol          = \"HTTPS\"\n  ssl_policy        = \"ELBSecurityPolicy-FS-1-2-Res-2020-10\"\n  certificate_arn   = \"arn:aws:acm:us-east-1:211125710812:certificate/13300e95-ddf9-40d0-b807-977f157d59d2\"\n\n  default_action {\n    type             = \"forward\"\n    target_group_arn = aws_lb_target_group.albtest.arn\n  }\n}\n\nresource \"aws_lb_listener\" \"alb-https-redirect\" {\n  load_balancer_arn = aws_lb.alb.arn\n  port              = \"80\"\n  protocol          = \"HTTP\"\n\n  default_action {\n    type = \"redirect\"\n\n    redirect {\n      port        = \"443\"\n      protocol    = \"HTTPS\"\n      status_code = \"HTTP_301\"\n    }\n  }\n}\n\nresource \"aws_lb_listener\" \"alb-flask\" {\n  load_balancer_arn = aws_lb.alb.arn\n  port              = \"5000\"\n  protocol          = \"HTTPS\"\n  ssl_policy        = \"ELBSecurityPolicy-FS-1-2-Res-2020-10\"\n  certificate_arn   = \"arn:aws:acm:us-east-1:211125710812:certificate/13300e95-ddf9-40d0-b807-977f157d59d2\"\n\n  default_action {\n    type             = \"forward\"\n    target_group_arn = aws_lb_target_group.albtest-flask.arn\n  }\n}"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/5-route53.tf",
    "content": "data \"aws_route53_zone\" \"selected\" {\n  name = \"cloudvishwakarma.in\"\n}\n\nresource \"aws_route53_record\" \"nlb\" {\n  zone_id = data.aws_route53_zone.selected.zone_id\n  name    = \"myapp.${data.aws_route53_zone.selected.name}\"\n  type    = \"A\"\n\n  alias {\n    name                   = aws_lb.alb.dns_name\n    zone_id                = aws_lb.alb.zone_id\n    evaluate_target_health = false\n  }\n}\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/README.md",
    "content": "# Managing Infrastructure Pipelines - Session Notes\n\n![Azure Devops](https://github.com/user-attachments/assets/a9b32e1f-1bea-48b9-ac03-5e75daa04d4a)\n\nFrom today’s session, we discussed how to effectively manage Infrastructure Pipelines, focusing on real-world scenarios like sharing agents across organizations, password version control, and handling secure files.\n\n### Key Concepts:\n\n**Example Scenario:**\n- We have an Agent in one organization and need to share the same Agent with another organization.\n- Managing passwords in version control securely.\n- Handling sensitive security files.\n\n---\n\n### Start Agent - Example 01:\n- Create a Project in the PROD organization and demonstrate how to use the existing Agent Pool.\n\n**Task:**\n1. Open the VSCode file and explain the \"AZURE-PIPELINE\" code. Add the `.pem` file.\n2. Edit `packer-vars.json`.\n   - The file wasn’t present earlier because it’s a secure file. If you check `.gitignore`, you’ll notice sensitive files are ignored.\n   - Create the necessary secure files and add values later.\n\n3. Update the following files:\n   - `route53`\n   - Certificate name under `prod-auto.tfvars`\n   - Bucket name in `main.tf` and `prod-auto.tfvars`\n   - VPC, IGW, and Subnet names.\n\n**CIDR Ranges:**\n- VPC CIDR: `10.37.0.0/16`\n- Public Subnet CIDRs:\n  - `10.37.1.0/24`\n  - `10.37.2.0/24`\n  - `10.37.3.0/24`\n- Private Subnet CIDR: `10.37.20.0/24`\n- Remove the private subnet name.\n- Remove the AMI as it’s being taken from the Datasource.\n\n4. Go to the Terraform code and highlight where the access key and secret key are specified. Now, these keys need to be referenced as variables:\n   - Push the code first.\n   - Navigate to Pipeline > Code > Edit > Variables.\n     - **Name:** `aws_access_key` (Copy from IAM)\n     - **Value:** `aws_secret_key` (Copy from IAM)\n\n5. 
Go to the previous ADO Project > Service Connections > Azure Connections:\n   - Options > Security > + Search and confirm.\n   - This demonstrates that not only agents but also service connections can be shared across projects.\n\n6. Configure the project:\n   - **Library**\n   - Create a Variable Group: `AWS_ACCESS_GROUP`\n     - Add the access key and secret key.\n\n7. Return to the Terraform code:\n   - Copy-paste your `.pem` file.\n   - Add access key and secret key in `access.auto.tfvars`.\n   - Update `packer-vars.json`.\n   - Apply the changes in `backend.json`.\n\n8. Upload all four files as secure files under the pipeline.\n9. Push the code to the repository.\n   - Initially, everything was set to 'NO' except for `destroy` which was set to 'YES'.\n   - Modify the code to set `destroy` to 'NO' and other parameters to 'YES'.\n   - Run `git status` and push the changes to master.\n\n10. Once done, change `Terraform Destroy` back to 'YES' and others to 'NO'. Push the changes.\n11. Enable release pipelines:\n    - Go to Org Settings > Pipelines > Settings > Disable the creation of classic release pipelines.\n    - In Pipelines, you will see the Release option.\n\n**Purpose:**\nThe reason for this hands-on demonstration is to prepare you for real-time environments. When you encounter these processes in a production setting, they should not feel overwhelming. These are simply release pipelines that you now understand.\n\n---\n\n### Next Session Preview:\n- Understanding pipeline licensing.\n- Exploring different types of Azure Boards and how agile delivery works.\n- Hosted vs. Self-hosted Pipelines.\n- Various types of integrations.\n\nStay tuned for more insights in the next session!\n\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/azure-pipelines.yml",
    "content": "trigger:\n  branches:\n    include:\n      - master\n    exclude:\n      - releases/old*\n      - feature/*-working\n# resources:\n#   pipelines:\n#     - pipeline: running-secondary-pipeline\n#       source: variable-group-testing\n#       project: variable-group-testing\n#       trigger:\n#         branches:\n#           include:\n#             - main\n#For using single agent for all stages use below code.\npool:\n  name: LinuxAgentPool\n  demands:\n    - Terraform -equals Yes\nvariables:\n  - group: AWS_ACCESS_GROUP\n  - name: PACKERBUILD\n    value: \"NO\"\n  - name: TERRAFORM_APPLY\n    value: \"NO\"\n  - name: ANSIBLEJOB\n    value: \"NO\"\n  - name: TERRAFORM_DESTROY\n    value: \"YES\"\n  #- DESTROY: 'NO' #- Without Variable Group.\n  # PACKERBUILD: 'YES' - Without Variable Group.\n  # We can pass variables between stages by exporting then as outputs. Refernce below\n  #https://www.reddit.com/r/azuredevops/comments/qlroi7/pass_variables_between_stages/\n\nstages:\n  - stage: \"Packer_Validate_Build\"\n    displayName: \"Packer Validate & Build\"\n    condition: eq(variables.PACKERBUILD, 'YES')\n    jobs:\n      - job: \"Download_Secure_Files\"\n        displayName: \"Download_Secure_Files\"\n        steps:\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: \"packer-vars.json\"\n          - task: CopyFiles@2\n            inputs:\n              SourceFolder: \"$(Agent.TempDirectory)\"\n              Contents: \"**\"\n              TargetFolder: \"/home/adminsai/myagent/_work/2/s\"\n          - script: pwd && ls -al\n            displayName: \"Files_Check\"\n\n          # Step to install the Amazon plugin\n          - script: |\n              echo \"Installing Packer Amazon plugin...\"\n              packer plugins install github.com/hashicorp/amazon\n              echo \"Verifying installed plugins...\"\n              packer plugins installed\n            displayName: \"Install Packer Amazon Plugin\"\n\n    
      - script: packer validate -var-file packer-vars.json packer.json\n            displayName: \"Packer Validate\"\n\n          - script: packer build -var-file packer-vars.json packer.json\n            displayName: \"Packer Build\"\n\n  - stage: \"Download_Secure_Files_and_Terraform_Validate\"\n    displayName: \"Terraform Validate & Download Secure Files\"\n    condition: and(in(dependencies.Packer_Validate_Build.result, 'Succeeded', 'Skipped'), eq(variables.TERRAFORM_APPLY, 'YES'))\n    jobs:\n      - job: \"Download_Secure_Files\"\n        displayName: \"Download_Secure_Files\"\n        steps:\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: \"backend.json\"\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: \"access.auto.tfvars\"\n          - task: CopyFiles@2\n            inputs:\n              SourceFolder: \"$(Agent.TempDirectory)\"\n              Contents: \"**\"\n              TargetFolder: \"/home/adminsai/myagent/_work/2/s\"\n          - script: pwd && ls -al && echo $COMMIT_MESG\n            displayName: \"Files_Check\"\n          - script: terraform init -backend-config=backend.json\n            displayName: \"Terraform_Initialize\"\n          - script: terraform validate\n            displayName: \"Terraform_Validate\"\n\n  - stage: \"Download_Secure_Files_and_Terraform_Plan_and_Apply\"\n    displayName: \"Terraform Plan & Apply & Download Secure Files\"\n    condition: and(in(dependencies.Packer_Validate_Build.result, 'Succeeded', 'Skipped'), eq(variables.TERRAFORM_DESTROY, 'NO'), eq(variables.TERRAFORM_APPLY, 'YES'))\n    jobs:\n      - job: \"Download_Secure_Files_And_Terraform_Apply\"\n        displayName: \"Download_Secure_Files_And_Terraform_Apply\"\n        steps:\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: \"backend.json\"\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: 
\"access.auto.tfvars\"\n          - task: CopyFiles@2\n            inputs:\n              SourceFolder: \"$(Agent.TempDirectory)\"\n              Contents: \"**\"\n              TargetFolder: \"/home/adminsai/myagent/_work/2/s\"\n          - script: pwd && ls -al\n            displayName: \"Files_Check\"\n          - script: terraform init -backend-config=backend.json\n            displayName: \"Terraform_Initialize\"\n          - script: terraform plan\n            displayName: \"Terraform_Plan\"\n          - script: terraform apply -var=\"aws_access_key=$(aws-access-key)\" -var=\"aws_secret_key=$(aws-secret-key)\" --auto-approve\n            displayName: \"Terraform_Apply\"\n          - script: pwd && ls -al && cat invfile\n            displayName: \"Files_Check\"\n\n  #Make sure ansible is installed on the ADO Agent and disable host_key_checking.\n  - stage: \"Run_Ansible_Setup\"\n    displayName: \"Run Ansible Setup Module\"\n    condition: and(in(dependencies.Download_Secure_Files_and_Terraform_Plan_and_Apply.result, 'Succeeded', 'Skipped'), eq(variables.TERRAFORM_DESTROY, 'NO'), eq(variables.ANSIBLEJOB, 'YES'))\n    jobs:\n      - job: \"Download_Secure_Files\"\n        displayName: \"Download_Secure_Files\"\n        timeoutInMinutes: 5\n        steps:\n          - checkout: none\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: \"SecOps-Key.pem\"\n          - task: CopyFiles@2\n            inputs:\n              SourceFolder: \"$(Agent.TempDirectory)\"\n              Contents: \"**\"\n              TargetFolder: \"/home/adminsai/myagent/_work/2/s\"\n          - script: pwd && ls -al && chmod 400 SecOps-Key.pem\n            displayName: \"Files_Check\"\n          - script: ansible -i invfile all -m ping -u ubuntu\n            displayName: \"Ansible_Setup\"\n            timeoutInMinutes: 1\n          - script: ansible-playbook -i invfile docker-swarm.yml -u ubuntu --syntax-check\n            displayName: 
\"Ansible_Docker_Swarm_Syntax_Check\"\n            timeoutInMinutes: 1\n          - script: ansible-playbook -i invfile docker-swarm.yml -u ubuntu --check\n            displayName: \"Ansible_Docker_Swarm_Dry_Run\"\n            timeoutInMinutes: 2\n          - script: ansible-playbook -i invfile docker-swarm.yml -u ubuntu -vv\n            displayName: \"Ansible_Docker_Swarm_Apply\"\n            timeoutInMinutes: 5\n\n  - stage: \"Download_Secure_Files_and_Terraform_Destroy_Variable\"\n    displayName: \"Terraform Destroy & Download Secure Files\"\n    condition: and(eq(variables.TERRAFORM_DESTROY, 'YES'), eq(variables.TERRAFORM_APPLY, 'NO'), eq(variables.ANSIBLEJOB, 'NO'))\n    jobs:\n      - job: \"Terraform_Destroy\"\n        displayName: \"Terraform_Destroy\"\n        timeoutInMinutes: 5\n        steps:\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: \"backend.json\"\n          - task: DownloadSecureFile@1\n            inputs:\n              secureFile: \"access.auto.tfvars\"\n          - task: CopyFiles@2\n            inputs:\n              SourceFolder: \"$(Agent.TempDirectory)\"\n              Contents: \"**\"\n              TargetFolder: \"/home/adminsai/myagent/_work/2/s\"\n          - script: pwd && ls -al\n            displayName: \"Files_Check\"\n          - script: terraform init -backend-config=backend.json\n            displayName: \"Terraform_Initialize\"\n          - script: terraform destroy -var=\"aws_access_key=$(aws-access-key)\" -var=\"aws_secret_key=$(aws-secret-key)\" --auto-approve\n            displayName: \"Terraform_Destroy\"\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/details.tpl",
    "content": "[docker_servers]\n${master01} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem\n${master02} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem\n${master03} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem\n[docker_master]\n${master01} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem\n[docker_managers]\n${master02} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem\n${master03} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem\n[docker_workers]"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/docker-swarm.yml",
    "content": "---\n- name: Install Docker and Configure Docker Swarm\n  hosts: docker_servers\n  become: yes\n  become_user: root\n  tasks:\n    - name: Install Docker on all docker_servers\n      shell: curl https://get.docker.com | bash\n\n    - name: Check Docker Version\n      shell: docker version | grep -w Version | head -1\n      register: version\n    - debug:\n        var: version\n  tags:\n    - install\n\n- name: Enable Docker Swarm\n  hosts: docker_master\n  become: yes\n  become_user: root\n  tasks:\n    - name: Enable Docker Swarm on Master docker_servers\n      shell: docker swarm init\n      ignore_errors: yes\n    - name: Get Docker Worker Token\n      shell: docker swarm join-token -q worker\n      register: token\n    - set_fact:\n        swarm_token: \"{{ token.stdout }}\"\n    - debug:\n        var: token.stdout\n      no_log: true\n    - name: Get Docker Manager Token\n      shell: docker swarm join-token -q manager\n      register: managertoken\n    - set_fact:\n        swarmmanager_token: \"{{ managertoken.stdout }}\"\n    - debug:\n        var: swarmmanager_token.stdout\n      no_log: true\n    - name: Get Docker Master Private IP\n      shell: curl http://169.254.169.254/latest/meta-data/local-ipv4/\n      register: private_ip\n    - set_fact:\n        swarm_ip: \"{{ private_ip.stdout }}\"\n    - debug:\n        var: private_ip.stdout\n    - name: add variables to dummy host 1\n      add_host:\n        name: \"docker_master_node_token\"\n        shared_variable: \"{{ swarm_token }}\"\n    - name: add variables to dummy host 3\n      add_host:\n        name: \"docker_master_node_ip\"\n        shared_variable: \"{{ swarm_ip }}\"\n    - name: add variables to dummy host 4\n      add_host:\n        name: \"docker_master_managernode_token\"\n        shared_variable: \"{{ swarmmanager_token }}\"\n\n  tags:\n    - swarm\n\n- name: Add Workers to Swarm\n  hosts: docker_workers\n  become: yes\n  become_user: root\n  vars:\n    private_ip: \"{{ 
hostvars['docker_master_node_ip']['shared_variable'] }}\"\n    token: \"{{ hostvars['docker_master_node_token']['shared_variable'] }}\"\n  tasks:\n    - debug:\n        var: token\n      no_log: true\n    - debug:\n        var: private_ip\n    - name: Add Workers to Swarm\n      shell: docker swarm join --token \"{{ token }}\" \"{{ private_ip }}\":2377\n      ignore_errors: yes\n  tags:\n    - workers\n\n- name: Add Managers to Swarm\n  hosts: docker_managers\n  become: yes\n  become_user: root\n  vars:\n    private_ip: \"{{ hostvars['docker_master_node_ip']['shared_variable'] }}\"\n    token: \"{{ hostvars['docker_master_managernode_token']['shared_variable'] }}\"\n  tasks:\n    - debug:\n        var: token\n      no_log: true\n    - debug:\n        var: private_ip\n    - name: Add Managers to Swarm\n      shell: docker swarm join --token \"{{ token }}\" \"{{ private_ip }}\":2377\n      ignore_errors: yes\n  tags:\n    - managers\n- name: Deploy Test Application\n  hosts: docker_master\n  become: yes\n  become_user: root\n  vars:\n    private_ip: \"{{ hostvars['docker_master_node_ip']['shared_variable'] }}\"\n    token: \"{{ hostvars['docker_master_managernode_token']['shared_variable'] }}\"\n  tasks:\n    - debug:\n        var: token\n      no_log: true\n    - debug:\n        var: private_ip\n    - name: Delete Docker Service nginx001 If Exists\n      shell: docker service rm nginx001\n      ignore_errors: yes\n    - name: Delete Docker Service flask If Exists\n      shell: docker service rm flask\n      ignore_errors: yes\n    - name: Deploy Sample Application\n      shell: docker service create --name nginx001 -p 8000:80 --replicas 3 kiran2361993/kubegame:v2\n      ignore_errors: yes\n    - name: Deploy Sample Flask Application\n      shell: docker service create --name flask -p 5000:5000 --replicas 3 kiran2361993/mydb:v1\n      ignore_errors: yes\n    - name: Validate Deployment Nginx\n      shell: sleep 10 && curl http://\"{{ private_ip }}\":8000\n      register: html\n      ignore_errors: yes\n    - name: Validate Deployment Flask\n      shell: sleep 10 && curl http://\"{{ private_ip }}\":5000\n      register: html\n      ignore_errors: yes\n    - debug:\n        var: html.stdout\n  tags:\n    - managers\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/docker.service",
    "content": "[Unit]\nDescription=Docker Application Container Engine\nDocumentation=https://docs.docker.com\nBindsTo=containerd.service\nAfter=network-online.target firewalld.service containerd.service\nWants=network-online.target\nRequires=docker.socket\n\n[Service]\nType=notify\n# the default is not to use systemd for cgroups because the delegate issues still\n# exists and systemd currently does not support the cgroup feature set required\n# for containers run by docker\n#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock\nExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2375 -H fd:// --containerd=/run/containerd/containerd.sock\n#sudo systemctl daemon-reload\n#sudo systemctl restart docker\nExecReload=/bin/kill -s HUP $MAINPID\nTimeoutSec=0\nRestartSec=2\nRestart=always\n\n# Note that StartLimit* options were moved from \"Service\" to \"Unit\" in systemd 229.\n# Both the old, and new location are accepted by systemd 229 and up, so using the old location\n# to make them work for either version of systemd.\nStartLimitBurst=3\n\n# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.\n# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make\n# this option work for either version of systemd.\nStartLimitInterval=60s\n\n# Having non-zero Limit*s causes performance problems due to accounting overhead\n# in the kernel. We recommend using cgroups to do container-local accounting.\nLimitNOFILE=infinity\nLimitNPROC=infinity\nLimitCORE=infinity\n\n# Comment TasksMax if your systemd version does not supports it.\n# Only systemd 226 and above support this option.\nTasksMax=infinity\n\n# set delegate yes so that systemd does not reset the cgroups of docker containers\nDelegate=yes\n\n# kill only the docker process, not all processes in the cgroup\nKillMode=process\n\n[Install]\nWantedBy=multi-user.target\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/localfile.tf",
    "content": "resource \"local_file\" \"foo\" {\n  content = templatefile(\"details.tpl\",\n    {\n\n      master01 = aws_instance.web-1.0.public_ip\n      master02 = aws_instance.web-1.1.public_ip\n      master03 = aws_instance.web-1.2.public_ip\n      #worker01 = aws_instance.worker-1.public_ip\n      #worker02 = aws_instance.worker-2.public_ip\n      #worker03 = aws_instance.worker-3.public_ip\n      # worker04 = aws_instance.worker-4.public_ip\n      # worker05 = aws_instance.worker-5.public_ip\n    }\n  )\n  filename = \"invfile\"\n}"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/packer.json",
    "content": "{\n  \"_comment\": \"Create a AWS AMI ith AMZ Linux 2018 with Java and Tomcat\",\n  \"variables\": {\n    \"aws_access_key\": \"\",\n    \"aws_secret_key\": \"\",\n    \"region\": \"\",\n    \"source_ami\": \"\",\n    \"instance_type\": \"\",\n    \"vpc_id\": \"\",\n    \"subnet_id\": \"\"\n  },\n  \"_comment1\": \"packer build -var \\\"aws_secret_key=foo\\\" template.json\",\n  \"_comment2\": \"packer build -var-file packer-vars.json template.json\",\n  \"builders\": [\n    {\n      \"access_key\": \"{{user `aws_access_key`}}\",\n      \"secret_key\": \"{{user `aws_secret_key`}}\",\n      \"type\": \"amazon-ebs\",\n      \"region\": \"{{user `region`}}\",\n      \"source_ami\": \"{{user `source_ami`}}\",\n      \"instance_type\": \"{{user `instance_type`}}\",\n      \"ssh_username\": \"ubuntu\",\n      \"ami_name\": \"Saikiran-Pinapathruni-Build-{{isotime | clean_resource_name}}\",\n      \"vpc_id\": \"{{user `vpc_id`}}\",\n      \"subnet_id\": \"{{user `subnet_id`}}\",\n      \"tags\": {\n        \"Name\": \"Saikiran-Pinapathruni-Build-{{isotime | clean_resource_name}}\"\n      }\n    }\n  ],\n  \"provisioners\": [\n    {\n      \"type\": \"shell\",\n      \"inline\": [\n        \"sleep 30\",\n        \"sudo apt update -y\",\n        \"sudo apt install nginx -y\",\n        \"sudo apt install git -y\",\n        \"sudo git clone https://github.com/saikiranpi/webhooktesting.git\",\n        \"sudo rm -rf /var/www/html/index.nginx-debian.html\",\n        \"sudo cp webhooktesting/index.html /var/www/html/index.nginx-debian.html\",\n        \"sudo cp webhooktesting/style.css /var/www/html/style.css\",\n        \"sudo cp webhooktesting/scorekeeper.js /var/www/html/scorekeeper.js\",\n        \"sudo service nginx start\",\n        \"sudo systemctl enable nginx\",\n        \"curl https://get.docker.com | bash\"\n      ]\n    },\n    {\n      \"type\": \"file\",\n      \"source\": \"docker.service\",\n      \"destination\": \"/tmp/docker.service\"\n    },\n   
 {\n      \"type\": \"shell\",\n      \"inline\": [\n        \"sudo cp /tmp/docker.service /lib/systemd/system/docker.service\",\n        \"sudo usermod -a -G docker ubuntu\",\n        \"sudo systemctl daemon-reload\",\n        \"sudo service docker restart\"\n      ]\n    }\n  ]\n}"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/prod.auto.tfvars",
    "content": "aws_region = \"us-east-1\"\nvpc_cidr = \"10.1.0.0/16\"\npublic_subnet1_cidr = \"10.1.1.0/24\"\npublic_subnet2_cidr = \"10.1.2.0/24\"\npublic_subnet3_cidr = \"10.1.3.0/24\"\nprivate_subnet_cidr = \"10.1.20.0/24\"\nvpc_name = \"Staging-aws\"\nIGW_name = \"Staging-aws-igw\"\npublic_subnet1_name = \"Staging_Public_Subnet1\"\npublic_subnet2_name = \"Staging_Public_Subnet2\"\npublic_subnet3_name = \"Staging_Public_Subnet3\"\nMain_Routing_Table = \"Staging_Main_table\"\nkey_name = \"SecOps-Key\"\nenvironment = \"dev\""
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/variables.tf",
    "content": "variable \"aws_access_key\" {}\nvariable \"aws_secret_key\" {}\nvariable \"aws_region\" {}\nvariable \"vpc_cidr\" {}\nvariable \"vpc_name\" {}\nvariable \"IGW_name\" {}\nvariable \"key_name\" {}\nvariable \"public_subnet1_cidr\" {}\nvariable \"public_subnet2_cidr\" {}\nvariable \"public_subnet3_cidr\" {}\nvariable \"private_subnet_cidr\" {}\nvariable \"public_subnet1_name\" {}\nvariable \"public_subnet2_name\" {}\nvariable \"public_subnet3_name\" {}\nvariable Main_Routing_Table {}\nvariable \"azs\" {\n  description = \"Run the EC2 Instances in these Availability Zones\"\n  default = [\"us-east-1a\", \"us-east-1b\", \"us-east-1c\"]\n}\nvariable \"environment\" { default = \"dev\" }\n\n"
  },
  {
    "path": "Day 32 AzureDevOps-Part-5/README.md",
    "content": "# Azure DevOps Project Management Repository\n\nThis repository demonstrates project management practices and workflows in Azure DevOps (ADO), with a focus on Azure Boards, Sprints, Repos, Branching Strategies, and Artifacts.\n\n## Repository Structure\n```\nado-project-mgmt/\n├── azure-boards/\n│   ├── sprint-planning.md\n│   ├── backlog-management.md\n│   ├── agile-process.md\n├── azure-repos/\n│   ├── branching-strategies.md\n│   ├── hotfix-branch.md\n│   ├── repo-setup.md\n├── artifacts/\n│   ├── package-management.md\n├── gitlab-integration/\n│   ├── gitlab-overview.md\n│   ├── repo-setup.md\n├── README.md\n```\n\n## Overview\nThis repository serves as a reference for understanding and implementing project management concepts in Azure DevOps. Each section provides detailed explanations, examples, and best practices.\n\n---\n\n### Azure Boards\n#### Files:\n1. **sprint-planning.md**\n   - Explains sprint planning, creating sprints in Azure Boards, and managing sprint tasks.\n   - Includes screenshots or console views of Azure Boards.\n\n2. **backlog-management.md**\n   - Discusses managing backlogs, sprint grooming sessions, and handling client requests.\n   - Contains sample tasks and effort estimation examples.\n\n3. **agile-process.md**\n   - Outlines the Agile process flow, including Heads-up calls and backlog refinement.\n\n---\n\n### Azure Repos\n#### Files:\n1. **branching-strategies.md**\n   - Details common branching strategies:\n     - **Master/Main**: Production-ready code.\n     - **UAT/DEV/QA**: Environment-specific branches.\n     - **Feature branches**: For new features.\n     - **Hotfix branches**: For production issue fixes.\n\n2. **hotfix-branch.md**\n   - Describes the process of creating and merging a hotfix branch.\n\n3. **repo-setup.md**\n   - Guides setting up Azure Repos and integrating with GitLab.\n\n---\n\n### Artifacts\n#### Files:\n1. 
**package-management.md**\n   - Describes how to use Azure Artifacts for managing and sharing packages.\n\n---\n\n### GitLab Integration\n#### Files:\n1. **gitlab-overview.md**\n   - Provides an overview of GitLab and its integration with Azure DevOps.\n\n2. **repo-setup.md**\n   - Details setting up repositories and managing workflows in GitLab.\n\n---\n\n### README.md\nThe main README file provides:\n- A quick introduction to the repository.\n- Links to detailed documentation for each feature.\n- Best practices for using Azure Boards, Repos, and Artifacts.\n\n---\n\n### Contribution\nFeel free to fork the repository, submit issues, or create pull requests for enhancements.\n\n---\n\n### License\nThis repository is licensed under the MIT License.\n"
  },
  {
    "path": "Day 33 Jenkins-Part-1/Jenkinsfile",
    "content": "// Declarative Pipeline\r\ndef VERSION = '1.0.0'\r\n\r\npipeline {\r\n    agent none\r\n    // tools {\r\n    //     maven 'apache-maven-3.6.3'\r\n    // }\r\n    environment {\r\n        PROJECT = \"WELCOME TO Jenkins Class\"\r\n        AZ_SUB_ID = \"9ce91e05-4b9e-4a42-95c1-4385c54920c6\"\r\n        AZ_TEN_ID = \"2b387c91-acd6-4c88-a6aa-c92a96cab8b1\"\r\n    }\r\n    stages {\r\n        stage(\"Dev Tools Verification\") {\r\n            when {\r\n                branch 'development'\r\n            }\r\n            agent { label 'DEV' }\r\n            steps {\r\n                sh \"mvn --version\"\r\n                sh \"java -version\"\r\n                sh \"terraform version\"\r\n                sh \"packer version\"\r\n                sh \"trivy --version\"\r\n                sh \"trivy --version\"\r\n            }\r\n        }\r\n\r\n        //-----------------------------PRODUCTION---------------\r\n        stage(\"PROD Tools Verification\") {\r\n            when {\r\n                branch 'production'\r\n            }\r\n            agent { label 'PROD' }\r\n            steps {\r\n                sh \"mvn --version\"\r\n                sh \"java -version\"\r\n                sh \"terraform version\"\r\n                sh \"packer version\"\r\n            }\r\n        }\r\n    }\r\n}\r\n\r\n\r\n\r\n\r\n// //Declarative Pipeline\r\n// def VERSION='1.0.0'\r\n// pipeline {\r\n//     agent none\r\n//     // tools {\r\n// \t//  maven 'apache-maven-3.6.3'\r\n//     // }\r\n//     environment {\r\n//         PROJECT = \"WELCOME TO DEVOPS jenkins\"\r\n//         AZ_SUB_ID = \"9ce91e05-4b9e-4a42-95c1-4385c54920c6\"\r\n//         AZ_TEN_ID = \"2b387c91-acd6-4c88-a6aa-c92a96cab8b1\"\r\n//         BATCH = \"B36\"\r\n//     }\r\n//     stages {\r\n//         stage(\"Dev Tools Verification\") {\r\n//             when {\r\n//                 branch 'development'\r\n//             }\r\n//             agent { label 'DEV' }\r\n//             steps {\r\n//     
            sh \"mvn --version\"\r\n//                 sh \"java -version\"\r\n//                 sh \"terraform version\"\r\n//                 sh \"packer version\"\r\n//                 sh \"trivy --version\"\r\n\r\n//             }\r\n//         }\r\n//         // stage('Dev Sonarqube SAST') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     agent { label 'DEV' }\r\n//         //     steps { \r\n//         //         withSonarQubeEnv('SonarQube-Dev'){\r\n//         //              sh \"mvn clean verify sonar:sonar \\\r\n//         //              -Dsonar.projectKey=spring-boot-app-dev \\\r\n//         //              -Dsonar.projectName=spring-boot-app-dev \\\r\n//         //              -Dsonar.host.url=http://sonarqube.cloudvishwakarma.in:9000\"\r\n//         //         }\r\n\r\n//         //     }\r\n//         // }\r\n//         // stage(\"Dev Quality gate\") {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     steps {\r\n//         //         waitForQualityGate abortPipeline: true\r\n//         //     }\r\n//         // }\r\n//         // stage('Dev mvn clean') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     agent { label 'DEV' }\r\n//         //     steps { \r\n//         //         sh \"mvn clean\"\r\n//         //         // exit 1\r\n//         //     }\r\n//         // }\r\n//         // stage('Dev mvn test') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     agent { label 'DEV' }\r\n//         //     steps { \r\n//         //         sh \"mvn test\"\r\n//         //     }\r\n//         // }\r\n//         // stage('Dev mvn package & install') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     agent { label 
'DEV' }\r\n//         //     steps { \r\n//         //         sh \"mvn versions:set -DnewVersion=Dev-1.0.${BUILD_NUMBER}\"\r\n//         //         sh \"mvn package install\"\r\n//         //         sh \"rm -rf /home/ubuntu/.m2/settings.xml\"\r\n//         //         sh \"cp dev-settings.xml /home/ubuntu/.m2/settings.xml\"\r\n//         //     }\r\n//         // }\r\n//         // stage('Dev mvn package') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     agent { label 'DEV' }\r\n//         //     steps {\r\n//         //         sh \"mvn versions:set -DnewVersion=Dev-1.0.${BUILD_NUMBER}\" \r\n//         //         sh \"mvn clean package\"\r\n//         //     }\r\n//         // }\r\n\r\n//         // stage('Dev docker build') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     agent { label 'DEV' }\r\n//         //     steps { \r\n//         //         sh \"sudo docker build -t kiran2361993/jenkinsimage:$BUILD_NUMBER .\"\r\n//         //     }\r\n//         // }\r\n//         // stage('Dev Trivy Scan') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     agent { label 'DEV' }\r\n//         //     steps {\r\n//         //                 sh 'curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/html.tpl > html.tpl'\r\n//         //                 sh 'mkdir -p reports && rm -rf reports/dev_trivy_report.html' \r\n//         //                 sh \"\"\"sudo trivy image kiran2361993/jenkinsimage:$BUILD_NUMBER --security-checks vuln --exit-code 0 --severity CRITICAL --timeout 15m --format template --template \\\"@html.tpl\\\" --output reports/dev_trivy_report.html  \"\"\"\r\n//         //                 sh 'aws s3 cp reports/dev_trivy_report.html s3://sais3bucket236/dev_trivy_report.html'\r\n//         //             }\r\n\r\n//         //     
}\r\n//         // stage('Publish Trivy Report') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     agent { label 'MASTER' }\r\n//         //     steps {\r\n//         //                 sh 'aws s3 cp s3://sais3bucket236/dev_trivy_report.html dev_trivy_report.html'\r\n//         //                 // Publishing Dev trivy HTML findings Report\r\n//         //                 publishHTML (target : [\r\n//         //                 allowMissing: true,\r\n//         //                 alwaysLinkToLastBuild: true,\r\n//         //                 keepAll: true,\r\n//         //                 reportDir: '.',\r\n//         //                 reportFiles: 'dev_trivy_report.html',\r\n//         //                 reportName: 'Dev Trivy Scan',\r\n//         //                 reportTitles: 'Dev Trivy Scan'\r\n//         //                 ])\r\n//         //             }\r\n\r\n//         //     }\r\n//         // stage('Dev Deploy Docker Image') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     agent { label 'DEV' }\r\n//         //     steps { \r\n//         //         sh \"sudo docker stop springbootapp || sudo docker ps\"\r\n//         //         sh \"sudo docker run --rm -dit --name springbootapp -p 8080:8080 kiran2361993/jenkinsimage:$BUILD_NUMBER\"\r\n//         //     }\r\n//         // }\r\n//         // stage('Dev Validate Deployment') {\r\n//         //     when {\r\n//         //         branch 'development'\r\n//         //     }\r\n//         //     options {\r\n//         //        timeout(time: 3, unit: 'MINUTES') \r\n//         //     }\r\n//         //     agent { label 'DEV' }\r\n//         //     steps { \r\n//         //         sh \"sleep 30 && curl http://dev.awsb49.xyz:8080 || exit 1\"\r\n//         //     }\r\n//         // }\r\n//         // stage ('Dev DAST') {\r\n//         //     when {\r\n//         //    
     branch 'development'\r\n//         //     }\r\n//         //     options {\r\n//         //         timeout(time: 5, unit: 'MINUTES') \r\n//         //     }\r\n//         //   agent { label 'DEV' }  \r\n//         //   steps {\r\n//         //      sh 'sudo docker run -t owasp/zap2docker-stable zap-baseline.py -t http://dev.awsb49.xyz:8080 || true'\r\n//         //     }\r\n//         // }\r\n// //-----------------------------PRODUCTION---------------\r\n//         stage(\"PROD Tools Verification\") {\r\n//             when {\r\n//                 branch 'production'\r\n//             }\r\n//             agent { label 'PROD' }\r\n//             steps {\r\n//                 sh \"mvn --version\"\r\n//                 sh \"java -version\"\r\n//                 sh \"terraform version\"\r\n//                 sh \"packer version\"\r\n//             }\r\n//         }\r\n//     //     stage('PROD Sonarqube SAST') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         agent { label 'PROD' }\r\n//     //         steps { \r\n//     //             withSonarQubeEnv('SonarQube-PROD'){\r\n//     //                  sh \"mvn clean verify sonar:sonar \\\r\n//     //                  -Dsonar.projectKey=spring-boot-app-prod \\\r\n//     //                  -Dsonar.projectName=spring-boot-app-prod \\\r\n//     //                  -Dsonar.host.url=http://sonarqube.cloudvishwakarma.in:9000\"\r\n//     //             }\r\n\r\n//     //         }\r\n//     //     }\r\n//     //     stage(\"PROD Quality gate\") {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         steps {\r\n//     //             waitForQualityGate abortPipeline: true\r\n//     //         }\r\n//     //     }\r\n//     //     stage('PROD mvn clean') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         agent { label 'PROD' 
}\r\n//     //         steps { \r\n//     //             sh \"mvn clean\"\r\n//     //             // exit 1\r\n//     //         }\r\n//     //     }\r\n//     //     stage('PROD mvn test') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         agent { label 'PROD' }\r\n//     //         steps { \r\n//     //             sh \"mvn test\"\r\n//     //         }\r\n//     //     }\r\n//     //     stage('PROD mvn package & install') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         agent { label 'PROD' }\r\n//     //         steps {\r\n//     //             sh \"mvn versions:set -DnewVersion=Prod-${BUILD_NUMBER}\" \r\n//     //             sh \"mvn package install\"\r\n//     //             sh \"rm -rf /home/ubuntu/.m2/settings.xml\"\r\n//     //             sh \"cp dev-settings.xml /home/ubuntu/.m2/settings.xml\"\r\n//     //         }\r\n//     //     }\r\n//     //     stage('PROD mvn package & deploy') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         agent { label 'PROD' }\r\n//     //         steps { \r\n//     //             sh \"mvn versions:set -DnewVersion=Prod-${BUILD_NUMBER}\"\r\n//     //             sh \"mvn package deploy\"\r\n//     //         }\r\n//     //     }\r\n//     //     stage('PROD docker build') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         agent { label 'PROD' }\r\n//     //         steps { \r\n//     //             sh \"sudo docker build -t kiran2361993/jenkinsimageprod:$BUILD_NUMBER .\"\r\n//     //         }\r\n//     //     }\r\n//     //     stage('Prod Trivy Scan') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         agent { label 'PROD' }\r\n//     //         steps {\r\n//     //                   
  sh 'curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/html.tpl > html.tpl'\r\n//     //                     sh 'mkdir -p reports && rm -rf reports/prod_trivy_report.html' \r\n//     //                     sh \"\"\"sudo trivy image kiran2361993/jenkinsimageprod:$BUILD_NUMBER --security-checks vuln --exit-code 0 --severity CRITICAL --timeout 15m --format template --template \\\"@html.tpl\\\" --output reports/prod_trivy_report.html  \"\"\"\r\n//     //                     sh 'aws s3 cp reports/prod_trivy_report.html s3://sais3bucket236/prod_trivy_report.html'\r\n//     //                 }\r\n//     //             }\r\n//     //     stage('Publish Prod Trivy Report') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         agent { label 'MASTER' }\r\n//     //         steps {\r\n//     //                     sh 'aws s3 cp s3://sais3bucket236/prod_trivy_report.html prod_trivy_report.html'\r\n//     //                     // Publish prod trivy HTML findings Report\r\n//     //                     publishHTML (target : [\r\n//     //                     allowMissing: true,\r\n//     //                     alwaysLinkToLastBuild: true,\r\n//     //                     keepAll: true,\r\n//     //                     reportDir: '.',\r\n//     //                     reportFiles: 'prod_trivy_report.html',\r\n//     //                     reportName: 'Prod Trivy Scan',\r\n//     //                     reportTitles: 'Prod Trivy Scan'\r\n//     //                     ])\r\n//     //                 }\r\n\r\n//     //         }\r\n//     //     stage('PROD Deploy Docker Image') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         agent { label 'PROD' }\r\n//     //         steps { \r\n//     //             sh \"sudo docker stop springbootapp || sudo docker ps\"\r\n//     //             sh \"sudo docker run --rm -dit --name 
springbootapp -p 8080:8080 kiran2361993/jenkinsimageprod:$BUILD_NUMBER\"\r\n//     //         }\r\n//     //     }\r\n//     //     stage('PROD Validate Deployment') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //         options {\r\n//     //            timeout(time: 3, unit: 'MINUTES') \r\n//     //         }\r\n//     //         agent { label 'PROD' }\r\n//     //         steps { \r\n//     //             sh \"sleep 30 && curl http://prod.awsb49.xyz:8080 || exit 1\"\r\n//     //         }\r\n//     //     }\r\n//     //     stage ('PROD DAST') {\r\n//     //         when {\r\n//     //             branch 'production'\r\n//     //         }\r\n//     //       options {\r\n//     //         timeout(time: 5, unit: 'MINUTES') \r\n//     //       }\r\n//     //       agent { label 'PROD' }  \r\n//     //       steps {\r\n//     //          sh 'sudo docker run -t owasp/zap2docker-stable zap-baseline.py -t http://prodslave.awsb49.xyz:8080 || true'\r\n//     //         }\r\n//     //     }\r\n//     // }\r\n//     // post {\r\n//     // success {\r\n//     //     slackSend(color: 'good', message: \"Pipeline Successfull: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}\") \r\n//     // }\r\n//     // failure {\r\n//     //     slackSend(color: 'danger', message: \"Pipeline Failed: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}\") \r\n//     // }\r\n//     // aborted {\r\n//     //     slackSend(color: 'warning', message: \"Pipeline Aborted: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}\")\r\n//     // }\r\n//     // always {\r\n//     //     echo \"I always run.\"\r\n//     // }\r\n//     // }\r\n\r\n// }\r\n\r\n\r\n"
  },
  {
    "path": "Day 33 Jenkins-Part-1/README.md",
    "content": "# Day 36 Jenkins-Part-1\n<img width=\"1536\" alt=\"jenkins\" src=\"https://github.com/user-attachments/assets/4519f27d-537b-4593-bde9-0e5756b2965c\" />\n\n# Jenkins Setup and Configuration Guide\n\nThis guide provides a step-by-step process to set up and configure Jenkins with slave nodes, GitHub integration, SonarQube, and Slack notifications. By following this guide, you will establish a fully functional Jenkins environment capable of managing development and production pipelines.\n\n---\n\n## 1. Deploy Jenkins Master Instance\n\n1. Launch an EC2 instance with the following specifications:\n   - **Instance Type:** t2.medium\n   - **OS:** Ubuntu\n2. Install Jenkins on the instance and start the service.\n3. Install VSCode and run the necessary commands to configure Jenkins.\n\n---\n\n## 2. Configure Jenkins Slaves\n\n### Step 1: Add Global Credentials\n- Navigate to **Manage Jenkins > Credentials > System > Global Credentials**.\n- Add new credentials:\n  - **ID:** slave-access\n  - **Description:** slave-access\n  - **Username:** ubuntu\n  - **Password:** Use your `.pem` file.\n\n### Step 2: Deploy Slave Instances\n- Launch two t2.medium EC2 instances from the same AMI used for the master instance.\n- Name the instances:\n  - `Jenkins-Slave-Dev`\n  - `Jenkins-Slave-Prod`\n\n### Step 3: Add Nodes to Jenkins\n1. Navigate to **Manage Jenkins > Nodes > Add Node**.\n2. Configure **Dev-Slave**:\n   - **Permanent Agent:** Yes\n   - **No. of Executors:** 2\n   - **Remote Root Directory:** `/home/ubuntu`\n   - **Labels:** DEV\n   - **Usage:** Only build jobs with label.\n   - **Launch Method:** Launch Agents via SSH\n   - **Host:** Dev slave private IP or DNS\n   - **Credentials:** ubuntu (slave-access)\n   - **Host Key Verification Strategy:** Non-verifying strategy\n   - **Port:** 22\n3. Repeat the same steps for **Prod-Slave**, copying settings from **Dev-Slave** but updating the names and IP/DNS accordingly.\n\n---\n\n## 3. 
Configure GitHub Access\n\n1. Switch to the Jenkins user:\n   ```bash\n   su - jenkins\n   ssh-keygen\n   ```\n2. Add the private key to Jenkins:\n   - Navigate to **Manage Jenkins > Credentials > System > Global Credentials**.\n   - Add SSH Username with Private Key:\n     - **ID:** GitHubAccess\n     - **Username:** jenkins\n     - **Private Key:** Paste the generated private key.\n3. Add the public key to your GitHub repository under **Deploy Keys**.\n\n---\n\n## 4. Configure SonarQube\n\n1. Generate a token from SonarQube:\n   - Navigate to **SonarQube > My Account > Security**.\n   - Generate a token and copy it.\n2. Add the token to Jenkins:\n   - Navigate to **Manage Jenkins > Credentials > System > Global Credentials**.\n   - Add Secret Text:\n     - **ID:** sonarqube-token\n     - **Scope:** Global\n     - **Secret:** Paste the token.\n3. Configure SonarQube in Jenkins:\n   - Navigate to **Manage Jenkins > System > Configure System**.\n   - Add SonarQube Server:\n     - **Name:** As per your script\n     - **URL:** Your SonarQube URL (remove trailing slash)\n     - **Credentials:** Select the token you just created.\n4. Create a webhook in SonarQube:\n   - Navigate to **Administrator > Webhooks > Create**.\n   - **Name:** Jenkins-Webhook\n   - **URL:** `http://<Jenkins-Master-PublicIP>:8080/sonarqube-webhook/`\n\n---\n\n## 5. Configure GitHub Webhooks\n\n1. Push your development code to a private GitHub repository.\n2. Navigate to **Repository Settings > Webhooks > Add Webhook**.\n   - **Content Type:** `application/json`\n   - **URL:** As per your Jenkins pipeline token.\n   - Add the webhook and authenticate it.\n\n---\n\n## 6. Create a Multibranch Pipeline\n\n1. Create a new item in Jenkins:\n   - **Name:** Your pipeline name\n   - **Type:** Multibranch Pipeline\n2. 
Configure the pipeline:\n   - **Branch Source:**\n     - **Type:** Git\n     - **Credentials:** Jenkins (GitHubAccess)\n     - **Repository URL:** Your GitHub repository URL\n   - **Build Configuration:**\n     - **Script Path:** `Jenkinsfile`\n     - **Scan by Webhook:** Use the same token as the GitHub webhook.\n3. Add the public SSH key generated earlier to **GitHub Deploy Keys**.\n\n---\n\n## 7. Configure Slack Notifications\n\n1. Create a Slack channel and add the Jenkins app:\n   - **Channel:** Your desired Slack channel.\n   - **Token:** Copy the integration token.\n2. Add the Slack token to Jenkins:\n   - Navigate to **Manage Jenkins > Credentials > System > Global Credentials**.\n   - Add Secret Text:\n     - **ID:** slack-token\n     - **Secret:** Paste the token.\n3. Configure Slack in Jenkins:\n   - Navigate to **Manage Jenkins > System**.\n   - Add Slack configuration:\n     - **Workspace:** Your Slack workspace name\n     - **Credentials:** Select slack-token\n     - **Channel:** Your Slack channel name\n\n---\n\n## 8. Additional Steps\n\n- Update the `settings.xml` file with the correct JFrog URL.\n- Assign an IAM role with admin access to Jenkins for pushing reports.\n- Configure labels for nodes:\n  - **Manage Jenkins > Nodes and Clouds > Built-in Node**:\n    - **Labels:** MASTER\n\n---\n\n## 9. Test the Setup\n\n1. Create a new branch (`development`) in GitHub.\n2. Push a commit to the branch.\n3. Check if the pipeline triggers and runs successfully in Blue Ocean.\n4. Create a `prod` branch and run the job on the **Prod-Slave** node.\n\n---\n\n## 10. Stopping Instances\n\n- Stop all instances when not in use but do not terminate them to preserve configurations.\n\n---\n\n\n"
  },
  {
    "path": "Day 34 Jenkins-Part-2/0-jenkins_install.sh",
    "content": "sudo apt update && apt install -y unzip jq net-tools\napt install openjdk-17-jdk -y\napt install maven -y && curl https://get.docker.com | bash\nuseradd -G docker adminsai\nusermod -aG docker adminsai\n\n# aws cli install\ncurl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\nunzip awscliv2.zip\nsudo ./aws/install\n\n# # azurecli ubuntu install\n# curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash\n\n# terraform.io and packer.io copy the link and install in /usr/local/bin\n\ncd /usr/local/bin\nwget https://releases.hashicorp.com/terraform/1.10.3/terraform_1.10.3_linux_amd64.zip\nunzip\n\n# packer.io\nwget https://releases.hashicorp.com/packer/1.11.2/packer_1.11.2_linux_amd64.zip\nunzip\n\n# document.ansible.com  Select ubuntu and download the file accordingly\nsudo apt update\nsudo apt install software-properties-common\nsudo add-apt-repository --yes --update ppa:ansible/ansible\nsudo apt install ansible\n\ncd /etc/ansible\ncp ansible.cfg ansible.cfg_backup\nansible-config init --disabled >ansible.cfg\nnano ansible.cfg\n\nctrl w  host_key_checking = False\n\n# Create one ansible user.\nsudo useradd -m -s /bin/bash ansibleadmin\nsudo mkdir -p /home/ansibleadmin/.ssh\nsudo chown -R ansibleadmin:ansibleadmin /home/ansibleadmin/.ssh\nsudo chmod 700 /home/ansibleadmin/.ssh\nsudo touch /home/ansibleadmin/.ssh/authorized_keys\nsudo chown ansibleadmin:ansibleadmin /home/ansibleadmin/.ssh/authorized_keys\nsudo chmod 600 /home/ansibleadmin/.ssh/authorized_keys\nsudo usermod -aG sudo ansibleadmin\necho 'ansibleadmin ALL=(ALL) NOPASSWD: ALL' | sudo tee -a /etc/sudoers\necho 'ssh-rsa key here' | sudo tee /home/ansibleadmin/.ssh/authorized_keys\nusermod -aG root ansibleadmin\nusermod -aG docker ansibleadmin\n\n# Install trivy https://github.com/aquasecurity/trivy/releases/download/v0.41.0/trivy_0.41.0_Linux-64bit.deb\n\ncd /usr/local/bin\nWget 
https://github.com/aquasecurity/trivy/releases/download/v0.41.0/trivy_0.41.0_Linux-64bit.deb\ndpkg -i trivy file\nTrivy\n\n#################################\n\n# 1 reboot the system for configurations, Once it is up then take AMI image and wait till the image has been created. Then install jenkins.\n# 2 Create DNS Record for Jenkins Jfrog and Sonarqube, Turn the sonar jfrog instance.\n\n#################################\n\n#jenkins installation\n\n# Add Jenkins GPG key\ncurl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc >/dev/null\n\n# Add Jenkins repository to sources list\necho \"deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/\" | sudo tee /etc/apt/sources.list.d/jenkins.list >/dev/null\n\n# Update package list\nsudo apt-get update\n\n# (Optional) Check available Jenkins versions\nsudo apt-cache madison jenkins | grep -i 2.426.2\n\n# Install the specific Jenkins version\nsudo apt-get install jenkins=2.426.2 -y\n\n######################################################################################################################\n\nLogin and install all neccessary plugins\n\nPLugins\n\n- AWS Steps Plugin\n- Docker Plugin\n- SonarQube Scanner Version 2.15 and configure it in Jenins Configure System.\n- Blue Ocean\n- Multibranch Scan Webhook Trigger\n- Slack Notification\n- Ansible\n\nOnce done, reboot the jenkins server\n\nThen update the SSL certificate following below.\n\n#######################################################################################################################\n\n# SSL Certificate\nsnap install --classic certbot\n\ncertbot certonly --manual --preferred-challenges=dns --key-type rsa \\\n    --email pinapathruni.saikiran@gmail.com --server https://acme-v02.api.letsencrypt.org/directory \\\n    --agree-tos -d \"*.cloudvishwakarma.in\"\n\n#get into the  /etc/letsencrypt/live/clodvishwakarma.in/ Then run below, 
Because it needs to pick the crts.\n\nopenssl pkcs12 -inkey privkey.pem -in cert.pem -export -out certificate.p12\n\n# password : India@123\n\n#Now convert into JKS certificate,\nkeytool -importkeystore -srckeystore certificate.p12 -srcstoretype pkcs12 \\\n    -destkeystore jenkinsserver.jks -deststoretype JKS\n# password : India@123\n\nsudo cp jenkinsserver.jks /var/lib/jenkins/\nsudo chown jenkins:jenkins /var/lib/jenkins/jenkinsserver.jks\n\nnano /lib/systemd/system/Jenkins.service\n\nEnvironment=\"JENKINS_PORT=8080\"\nEnvironment=\"JENKINS_PORT=8080\"\nEnvironment=\"JENKINS_HTTPS_PORT=8443\"\nEnvironment=\"JENKINS_HTTPS_KEYSTORE=/var/lib/jenkins/jenkinsserver.jks\"\nEnvironment=\"JENKINS_HTTPS_KEYSTORE_PASSWORD=India@123\"\nAmbientCapabilities=CAP_NET_BIND_SERVICE\n\necho 'JENKINS_ARGS=\"$JENKINS_ARGS --httpsPort=8443 --httpPort=-1 --httpsPrivateKey=/etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem --httpsCertificate=/etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem\"' >>/etc/default/jenkins\n\nsudo usermod -aG docker jenkins\nsudo usermod -aG root jenkins\nsudo systemctl daemon-reload && sudo systemctl restart jenkins && sudo systemctl status jenkins\n"
  },
  {
    "path": "Day 34 Jenkins-Part-2/README.md",
    "content": "# Day 37 Jenkins-Part-2\n\n![diagram-export-1-29-2025-8_53_17-PM](https://github.com/user-attachments/assets/123cd71f-a1ff-4263-a77f-13d00818363e)\n\n# Jenkins RBAC and Backup & Restore\n\n## Jenkins Role-Based Access Control (RBAC)\n\n### Overview\nJenkins Role-Based Access Control (RBAC) allows administrators to define specific permissions for users and groups. This ensures proper access management and enhances security within Jenkins.\n\n### Steps to Configure RBAC\n1. **Login to Your Jenkins Server**\n   - Open Jenkins on port `8443`.\n2. **Navigate to Manage Jenkins**\n   - You will not see any roles initially since a plugin needs to be installed.\n3. **Install the Required Plugin**\n   - Go to **Manage Jenkins > Plugins**.\n   - Install the **Role-Based Authorization Strategy** plugin.\n   - Wait for the installation to complete.\n4. **Configure Security Settings**\n   - Navigate to **Manage Jenkins > Security**.\n   - Set **Security Realm** to: *Jenkins’ own user database*.\n   - Set **Authorization** to: *Role-Based Strategy*.\n   - Click **Save**.\n5. **Assign Roles**\n   - Go to **Manage Jenkins > Manage and Assign Roles**.\n   - Click on **Assign Roles** and configure project-level roles as needed.\n   - Save the changes.\n6. **Create User Accounts**\n   - Navigate to **Manage Jenkins > Users**.\n   - Create a user named `saikiran` with a password and fill in the required details.\n   - Similarly, create three additional users.\n7. **Assign Users to Roles**\n   - Go to **Manage Jenkins > Manage and Assign Roles > Assign Roles**.\n   - Add users and assign them appropriate roles.\n   - Scroll down to **Item Roles** and configure access levels.\n   - Click **Save**.\n8. **Create a Project**\n   - Create a new project named `java-project`.\n   - Scroll down, select **Execute Shell**, enter the necessary script.\n   - Click **Save**.\n9. 
**Create Additional Projects**\n   - Create three more projects (e.g., Python, Java, etc.).\n   - The code remains the same.\n10. **Configure the Built-in Node**\n    - Navigate to **Manage Jenkins > Nodes > Built-in Node**.\n    - Remove existing labels and save.\n11. **Test Role-Based Access Control**\n    - Open Jenkins in a **private browser window**.\n    - Log in with different users and observe the access differences.\n\n---\n\n## Jenkins Backup and Restore\n\n### Overview\nRegular backups ensure that Jenkins configurations and jobs can be restored in case of failure. Jenkins provides multiple backup options, including local and cloud storage.\n\n### Steps to Backup Jenkins Data\n1. **Navigate to Jenkins Home Directory**\n   ```sh\n   cd /var/lib/jenkins\n   du -h /var/lib/jenkins\n   ```\n   - You can directly back up these files if required.\n2. **Use ThinBackup Plugin for Backup**\n   - Navigate to **Manage Jenkins > ThinBackup**.\n   - Click on **Backup Now**.\n3. **Create a Backup Directory on the Server**\n   - Open **Putty** and run:\n     ```sh\n     mkdir /Jenkins-backup\n     chown jenkins:jenkins /Jenkins-backup\n     ```\n4. **Configure Automatic Backup Schedule**\n   - Go to **Manage Jenkins > ThinBackup**.\n   - Set the backup schedule to run at 9 PM from Monday to Friday:\n     ```\n     0 21 * * 1-5\n     ```\n   - Set the maximum retention period to **30 days**.\n   - Restart Jenkins:\n     ```sh\n     sudo systemctl restart jenkins\n     ```\n   - Enable backups and click **Save**.\n   - Click **Backup Now** and verify the backup in **Putty**.\n5. **Store Backups in the Cloud**\n   - Since Jenkins could be completely deleted, it is advisable to store backups in **Amazon S3** or **Azure Blob Storage** for added security and redundancy.\n\n---\n\nBy following these steps, you can effectively manage user roles in Jenkins and implement a reliable backup strategy to protect your Jenkins environment.\n\n"
  },
  {
    "path": "Day 35 Jenkins-Part-3/Jenkinsfile",
    "content": "pipeline {\r\n    agent none\r\n    environment {\r\n        PROJECT = \"WELCOME TO Jenkins-Terraform Modules Pipeline\"\r\n        TERRAFORM_MODULE_REPO = \"https://github.com/saikiranpi/Terraform_Modules.git\"\r\n    }\r\n    stages {\r\n        stage('For Parallel Stages') {\r\n            parallel {\r\n                stage('Deploy To Development') {\r\n                    agent { label 'DEV' }\r\n                    environment {\r\n                        DEV_AWS_ACCOUNT = \"053490018989\"\r\n                        DEVDEFAULTAMI = \"ami-045d7ad26da8606ed\"\r\n                        TERRAFORM_APPLY = \"NO\" // Set to YES to trigger apply\r\n                        TERRAFORM_DESTROY = \"YES\" // Set to YES if you want to destroy\r\n                    }\r\n                    when {\r\n                        branch 'development'\r\n                    }\r\n                    stages {\r\n                        stage('Clone Terraform Modules') {\r\n                            steps {\r\n                                sh 'pwd'\r\n                                sh 'rm -rf terraform-modules'\r\n                                sh 'ls -al'\r\n                                sh \"git clone ${TERRAFORM_MODULE_REPO} terraform-modules\"\r\n                                sh 'ls -al terraform-modules/development'\r\n                                sh 'find terraform-modules/development -name \"*.tf\"'\r\n                            }\r\n                        }\r\n                        stage('Terraform Init & Plan') {\r\n                            when {\r\n                                expression {\r\n                                    \"${env.TERRAFORM_APPLY}\" == 'YES'\r\n                                }\r\n                            }\r\n                            steps {\r\n                                dir('terraform-modules/development') {  // Navigate to development directory\r\n                        
            sh 'terraform init'\r\n                                    sh 'terraform validate'\r\n                                    sh 'terraform plan -var-file=terraform.tfvars'\r\n                                }\r\n                            }\r\n                        }\r\n                        stage('Terraform Apply') {\r\n                            when {\r\n                                expression {\r\n                                    \"${env.TERRAFORM_APPLY}\" == 'YES'\r\n                                }\r\n                            }\r\n                            steps {\r\n                                dir('terraform-modules/development') {\r\n                                    sh 'terraform apply -var-file=terraform.tfvars --auto-approve'\r\n                                }\r\n                            }\r\n                        }\r\n                        stage('Terraform Destroy') {\r\n                            when {\r\n                                expression {\r\n                                    \"${env.TERRAFORM_DESTROY}\" == 'YES'\r\n                                }\r\n                            }\r\n                            steps {\r\n                                dir('terraform-modules/development') {\r\n                                    sh 'terraform init'\r\n                                    sh 'terraform validate'\r\n                                    sh 'terraform destroy -var-file=terraform.tfvars --auto-approve'\r\n                                }\r\n                            }\r\n                        }\r\n                    }\r\n                }\r\n\r\n                stage('Deploy To Production') {\r\n                    agent { label 'PROD' }\r\n                    environment {\r\n                        PROD_AWS_ACCOUNT = \"009412611595\"\r\n                        PRODEFAULTAMI = \"ami-0f45852828028bd50\"\r\n                        TERRAFORM_APPLY = \"YES\" // Set to YES to trigger 
apply\r\n                        TERRAFORM_DESTROY = \"NO\" // Set to YES if you want to destroy\r\n                    }\r\n                    when {\r\n                        branch 'production'\r\n                    }\r\n                    stages {\r\n                        stage('Clone Terraform Modules') {\r\n                            steps {\r\n                                sh 'pwd'\r\n                                sh 'rm -rf terraform-modules'\r\n                                sh 'ls -al'\r\n                                sh \"git clone ${TERRAFORM_MODULE_REPO} terraform-modules\"\r\n                                sh 'ls -al terraform-modules/production'\r\n                                sh 'find terraform-modules/production -name \"*.tf\"'\r\n                            }\r\n                        }\r\n                        stage('Terraform Init & Plan') {\r\n                            when {\r\n                                expression {\r\n                                    \"${env.TERRAFORM_APPLY}\" == 'YES'\r\n                                }\r\n                            }\r\n                            steps {\r\n                                dir('terraform-modules/production') {  // Navigate to production directory\r\n                                    sh 'terraform init'\r\n                                    sh 'terraform validate'\r\n                                    sh 'terraform plan -var-file=terraform.tfvars'\r\n                                }\r\n                            }\r\n                        }\r\n                        stage('Terraform Apply') {\r\n                            when {\r\n                                expression {\r\n                                    \"${env.TERRAFORM_APPLY}\" == 'YES'\r\n                                }\r\n                            }\r\n                            steps {\r\n                                dir('terraform-modules/production') {\r\n                                    sh 'terraform apply -var-file=terraform.tfvars 
--auto-approve'\r\n                                }\r\n                            }\r\n                        }\r\n                        stage('Terraform Destroy') {\r\n                            when {\r\n                                expression {\r\n                                    \"${env.TERRAFORM_DESTROY}\" == 'YES'\r\n                                }\r\n                            }\r\n                            steps {\r\n                                dir('terraform-modules/production') {\r\n                                    sh 'terraform init'\r\n                                    sh 'terraform validate'\r\n                                    sh 'terraform destroy -var-file=terraform.tfvars --auto-approve'\r\n                                }\r\n                            }\r\n                        }\r\n                    }\r\n                }\r\n            }\r\n        }\r\n    }\r\n    post {\r\n        success {\r\n            slackSend(color: 'good', message: \"Pipeline Successful: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}\")\r\n        }\r\n        failure {\r\n            slackSend(color: 'danger', message: \"Pipeline Failed: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}\")\r\n        }\r\n        aborted {\r\n            slackSend(color: 'warning', message: \"Pipeline Aborted: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}\")\r\n        }\r\n        always {\r\n            echo \"I always run.\"\r\n        }\r\n    }\r\n}\r\n"
  },
  {
    "path": "Day 35 Jenkins-Part-3/README.md",
    "content": "<img width=\"1536\" alt=\"Jenkins Pipeline\" src=\"https://github.com/user-attachments/assets/78c38280-4096-4b14-9891-f924ef87e729\" />\n\n\n# Jenkins Pipeline Setup with Multi-Branch Deployment for Infra Handling\n\nThis repository details the process of setting up a Jenkins multi-branch pipeline for automated deployments using GitHub webhooks and IAM roles on AWS instances.\n\n## **Steps to Follow**\n\n1. **Instance Setup & DNS Configuration**  \n   - Turn on the AWS instances and configure DNS records accordingly.\n\n2. **Code Explanation**  \n   - Review and understand the provided codebase for deployment automation.\n\n3. **IAM Role Assignment**  \n   - Assign appropriate IAM roles to all three instances to manage permissions.\n\n4. **DNS Update**  \n   - Update the DNS records after restarting the instances, as their public IP addresses may have changed.\n\n5. **GitHub Repository Setup**  \n   - Create a **private GitHub repository**.\n   - Push the code to a **development branch** for better version control.\n\n6. **Webhook Configuration**  \n   - Go to **Repo Settings > Webhooks**.\n   - Copy the webhook URL from the previous Spring Boot repository and apply it here.\n\n7. **Deploy Key Setup**  \n   - Remove the deploy key from the Spring Boot app.\n   - Create a new deploy key under the **infra-pipeline** section.\n   - Switch to the Jenkins user with `sudo su - jenkins`, run `cat ~/.ssh/id_rsa.pub`, and paste the output under GitHub deploy keys.\n\n8. **Jenkins Pipeline Configuration**  \n   - Create a new pipeline in Jenkins:\n     - Select **New Item** and choose **Multibranch Pipeline**.\n     - Under **Branch Sources > Git**, paste the repository SSH URL (`git@...`).\n     - Use **Jenkins GitHub access credentials** for authentication.\n     - Enable **Webhook-triggered builds** by entering the token from the webhook settings.\n     - Save the configuration.\n"
  },
  {
    "path": "Day 36 Jenkins-Part-4/README.md",
    "content": "# Day 36 Jenkins-Part-4\n"
  },
  {
    "path": "README.md",
    "content": "## MASTERING DEVSECOPS\n"
  }
]