Repository: saikiranpi/Mastering-DevSecOps
Branch: Master
Commit: 985e76aae614
Files: 114
Total size: 307.5 KB

Directory structure:
gitextract_dld9c9qw/

├── Day 01 Introduction-BaseLabCreation - Variables-Script-grep-awk-cut/
│   └── README.md
├── Day 02 Arguments-PassingSpecialparams/
│   └── README.md
├── Day 03 OutputRedirection-For-While/
│   └── README.md
├── Day 04 UserAutomation/
│   ├── README.md
│   └── script.sh
├── Day 05 RegEx-Break-Continue-CustomExitCodes/
│   ├── README.md
│   ├── break.sh
│   ├── continue.sh
│   └── exit-code.sh
├── Day 06 Functions/
│   ├── README.md
│   ├── docker.sh
│   ├── ebs.sh
│   ├── log-rotation.sh
│   └── multi-function.sh
├── Day 07 Git-1/
│   └── README.md
├── Day 08 Git-2/
│   └── README.md
├── Day 09 Git-3/
│   └── README.md
├── Day 10 AWS-Terraform-Part-1/
│   └── README.md
├── Day 11 AWS-Terraform-Part-2/
│   └── README.md
├── Day 12 AWS-Terraform-Part-3/
│   └── README.md
├── Day 13 AWS-Terraform-Part-4/
│   └── README.md
├── Day 14 AWS-Terraform-Functions-1/
│   ├── README.md
│   ├── RTA.tf
│   ├── locals.tf
│   ├── main.tf
│   ├── sg.tf
│   ├── subnet.tf
│   ├── terraform.tfvars
│   └── variables.tf
├── Day 15 AWS-Terraform-Functions-2/
│   ├── README.md
│   ├── private-ec2.tf
│   ├── public-ec2.tf
│   ├── terraform.tfvars
│   ├── txt.tf
│   ├── user-data.sh
│   └── variable.sh
├── Day 16 AWS-Terraform-Part-6 Modules-Part-1/
│   └── README.md
├── Day 17 AWS-Terraform-Full-Course/
│   └── README.md
├── Day 18 AWS-Terraform-Part-8 TerraformCloud/
│   └── README.md
├── Day 19 AWS-Terraform-Part-9 GitLab-Pipeline/
│   └── README.md
├── Day 20 AWS-Packer/
│   └── README.md
├── Day 21 AWS-Ansible-Part-1/
│   ├── .gitignore
│   ├── 1.provider.tf
│   ├── 10.locals.tf
│   ├── 11.localfile_ansible_inventory.tf
│   ├── 12.localfile_ansible_inventory_yaml.tf
│   ├── 13.null-local-exec.tf
│   ├── 14.outputs.tf
│   ├── 15.terraform.tfvars
│   ├── 16.variables.tf
│   ├── 2.vpc.tf
│   ├── 3.public-subnets.tf
│   ├── 4.private-subnets.tf
│   ├── 5.public-routing.tf
│   ├── 6.private-routing.tf
│   ├── 7.ec2.tf
│   ├── 8.sg.tf
│   ├── 9.vpc-peering.tf
│   ├── Playbooks
│   ├── README.md
│   ├── publicservers.tpl
│   └── publicservers_yaml.tpl
├── Day 22 AWS-Ansible-Part-2/
│   └── README.md
├── Day 23 AWS-Ansible-Part-3/
│   └── README.md
├── Day 24 Ansible-Part-4 DynamicInventory_AWX/
│   └── README.md
├── Day 25 HashicorpVault AWSIntegration/
│   ├── HashiCorp_Vault/
│   │   ├── 0-steps.sh
│   │   ├── 1-config.hcl
│   │   ├── 2-config-kms.hcl
│   │   └── 2-vault.service
│   ├── README.md
│   └── terraform-vault/
│       ├── 1-provider.tf
│       ├── 2-random-passwords.tf
│       ├── 3-hashi-vault-passwords.tf
│       ├── policy.yaml
│       ├── user.tf
│       └── variables.tf
├── Day 26 Docker-Full-Course/
│   └── README.md
├── Day 27 Maven-JFrog-Sonarqube/
│   └── README.md
├── Day 28 SAST-AzureDevOps-Part-1/
│   ├── 0-maven.sh
│   ├── 0-sonarqube.sh
│   ├── 1-ado-tools.sh
│   ├── 1-pipeline.yml
│   ├── 2-pipeline.yml
│   └── README.md
├── Day 29 AzureDevOps-Part-2/
│   ├── README.md
│   ├── azure-pipelines.yml
│   └── pom.xml
├── Day 30 AzureDevOps-Part-3/
│   ├── README.md
│   ├── azure-pipelines.yml
│   └── pom.xml
├── Day 31 AzureDevOps-Part-4/
│   ├── .gitignore
│   ├── 1-main.tf
│   ├── 2-ec2.tf
│   ├── 3-alb.tf
│   ├── 4-alb-listener.tf
│   ├── 5-route53.tf
│   ├── README.md
│   ├── azure-pipelines.yml
│   ├── details.tpl
│   ├── docker-swarm.yml
│   ├── docker.service
│   ├── localfile.tf
│   ├── packer.json
│   ├── prod.auto.tfvars
│   └── variables.tf
├── Day 32 AzureDevOps-Part-5/
│   └── README.md
├── Day 33 Jenkins-Part-1/
│   ├── Jenkinsfile
│   └── README.md
├── Day 34 Jenkins-Part-2/
│   ├── 0-jenkins_install.sh
│   └── README.md
├── Day 35 Jenkins-Part-3/
│   ├── Jenkinsfile
│   └── README.md
├── Day 36 Jenkins-Part-4/
│   └── README.md
└── README.md

================================================
FILE CONTENTS
================================================

================================================
FILE: Day 01 Introduction-BaseLabCreation - Variables-Script-grep-awk-cut/README.md
================================================
#  Introduction-BaseLabCreation - Variables-Script-grep-awk-cut

![1](https://github.com/user-attachments/assets/bb18e257-ad41-4d32-acfe-4963bb23cb8f)

# DevSecOps Scripting Course - Day 01 & 02

## Course Overview
This course is designed to help you get started with DevSecOps by covering shell scripting, cloud infrastructure, and essential security tools. You'll work with real-world tasks, using various tools and services to build a secure and functional DevSecOps environment.

---

## Prerequisites
### Cloud Platforms:
- **AWS**, **Azure**, or **GCP** – choose any one.

### DevSecOps Tools:
- **SonarQube** – for code quality and security analysis.
- **HashiCorp Vault** – for managing secrets and passwords.
- **Trivy** – for container image scanning.
- **Ansible Vault** – for secure secret management.
- **CISO** – for cybersecurity insights.

### Tools Required for Scripting:
- **JQ** – For parsing JSON data.
- **Net-tools** – Network utilities like `ifconfig`, `nslookup`.
- **Unzip** – To extract `.zip` files.

---

## Task: Create a Base Lab Environment

### Objective:
Set up a VPC, create a new key pair, deploy an instance, and access it using PuTTY.

### Steps:
1. **Create VPC and Instance**:
   - Create a new VPC with a single EC2 instance.
   - Generate a new key pair (PEM format).

2. **Generate PPK File for PuTTY**:
   - Open PuTTYgen and load the PEM file.
   - Generate and save a new private key (PPK format).

3. **Login via PuTTY**:
   - Open PuTTY and connect to `ubuntu@<EC2-IP>`.
   - Customize window settings (bold text, window size, colors).
   - Under `Connection > SSH > Auth`, browse and load your PPK file.
   - Save the session as "SecOps Session" for future use.

> **Note:** In production you typically won't have root access, so avoid running `sudo su -`. Running commands as root exposes sensitive operations, such as deleting logs.

4. **Install Required Tools**:
   ```bash
   sudo apt install -y jq net-tools unzip
   ```

---

## Shell Scripting Tasks

### Task 1: Using Tmux
To manage multiple servers or sessions, break the screen into two:
- Use `Ctrl + b`, then `"` to split the screen into top and bottom panes.
- Use `Ctrl + b`, then `%` (Shift + 5) to split into left and right panes.
- Useful for monitoring multiple servers side by side.

### Task 2: Print Time Repeatedly
Print the date and time every second for 10 seconds:
```bash
for i in {1..10}
do
  echo $(date)
  sleep 1
done
```

> **Note:** To get only the day, date, and time, modify the above script using `awk`:
```bash
for i in {1..10}
do
  echo $(date) | awk -F " " '{print $1, $2, $3, $4}'
  sleep 1
done
```
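As an aside, `date` can produce the same trimmed output on its own via a format string, which avoids the `awk` pipe entirely; a minimal sketch:

```shell
# Same day/date/time output using date's own format specifiers:
# %a = day name, %b = month, %d = day of month, %T = HH:MM:SS
for i in {1..3}; do
  date "+%a %b %d %T"
  sleep 1
done
```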

### Task 3: Understanding Variables in Shell Scripting
Variable declaration is useful for repeated values.
1. Declaring a variable and using it:
   ```bash
   RG='Saikiran-SecOps'
   echo $RG
   echo "${RG}"
   ```

2. Using variables with single and double quotes:
   ```bash
   X=10
   RG='Saikiran-SecOps-$X'  # Won't expand the variable
   echo $RG  # Outputs: Saikiran-SecOps-$X

   RG="Saikiran-SecOps-$X"  # Will expand the variable
   echo $RG  # Outputs: Saikiran-SecOps-10
   ```

---

## Task 4: AWS CLI and Data Manipulation

### Install AWS CLI:
Run the following commands:
```bash
sudo apt install awscli -y
aws configure  # Configure AWS access and secret keys.
```

### S3 Bucket Example:
1. List the contents of an S3 bucket:
   ```bash
   aws s3 ls
   ```

2. Use `cut` to extract specific fields:
   ```bash
   aws s3 ls | cut -d ' ' -f1,2,3
   ```

3. Use `awk` for more complex field manipulation:
   ```bash
   aws s3 ls | awk -F " " '{print $3,$2,$1}'
   ```

4. Use `grep` to find specific patterns:
   ```bash
   aws s3 ls | cut -d ' ' -f3 | grep -E '^www-'
   ```

---

## Shell Script Example: `get_bucket.sh`

```bash
#!/bin/bash
aws s3 ls | cut -d ' ' -f 3 | grep -E '^www-'
echo "Hello Saikiran, welcome to DevSecOps!"
```

### Execution:
```bash
chmod +x get_bucket.sh
./get_bucket.sh
```

> **Note:** Do **not** use `chmod 777` as it grants full permissions to everyone, which is a security risk. Use `chmod 700` instead to restrict access to the owner.

---

## Debugging Scripts

To enable debugging in a script:
```bash
#!/bin/bash
set -x  # Enable debugging
```

This will print each command before executing it, helping you to debug.
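Tracing can also be switched off again mid-script with `set +x`; a minimal sketch (the `+` prefix on trace lines comes from the default `PS4` prompt):

```shell
#!/bin/bash
set -x                  # start tracing: each command is printed (to stderr) before it runs
name="DevSecOps"
echo "hello $name"
set +x                  # stop tracing for the rest of the script
echo "tracing is now off"
```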

---

## Conclusion
This README covers Day 01 of DevSecOps, focusing on basic shell scripting, AWS tools, and security best practices. You should now be familiar with setting up a basic lab, working with shell scripts, and using AWS CLI for DevSecOps tasks.


================================================
FILE: Day 02 Arguments-PassingSpecialparams/README.md
================================================
# Day 02 Arguments-PassingSpecialparams

![02](https://github.com/user-attachments/assets/13165920-47f8-4843-b6d4-00af9ca7ac5f)


Welcome to the **Arguments-PassingSpecialparams** repository! This project focuses on demonstrating the usage of parameter passing, special shell parameters, and output redirection in Bash scripting, specifically in the context of AWS VPC management.

## Table of Contents

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Getting Started](#getting-started)
- [Scripts Overview](#scripts-overview)
  - [get_vpc.sh](#get_vpcsh)
  - [script.sh](#scriptsh)
- [Usage](#usage)
  - [Running `get_vpc.sh`](#running-get_vpcsh)
  - [Running `script.sh`](#running-scriptsh)
- [Understanding Special Parameters](#understanding-special-parameters)
  - [`$?`](#-exit-code)
  - [`$@` and `$*`](#-and-)
  - [`$#`](#-number-of-arguments)
- [Error Handling and Output Redirection](#error-handling-and-output-redirection)
- [Contributing](#contributing)
- [License](#license)

## Introduction

This repository contains Bash scripts designed to interact with AWS EC2 to retrieve VPC (Virtual Private Cloud) details across different regions. The scripts demonstrate:

- **Passing Parameters**: How to pass and utilize arguments in Bash scripts.
- **Special Parameters**: Utilizing `$?`, `$@`, `$*`, and `$#` to handle script behavior based on inputs and command execution status.
- **Output Redirection**: Managing script output and errors effectively.

## Prerequisites

Before using the scripts, ensure you have the following installed and configured:

- **AWS CLI**: [Installation Guide](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
- **jq**: A lightweight and flexible command-line JSON processor. [Installation Guide](https://stedolan.github.io/jq/download/)
- **Bash Shell**: Most Unix-based systems come with Bash pre-installed.
- **AWS Credentials**: Ensure your AWS credentials are configured with the necessary permissions to describe VPCs. [Configuration Guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html)

## Getting Started

1. **Clone the Repository**

   ```bash
   git clone https://github.com/yourusername/Arguments-PassingSpecialparams.git
   cd Arguments-PassingSpecialparams
   ```

2. **Make Scripts Executable**

   ```bash
   chmod +x get_vpc.sh script.sh
   ```

## Scripts Overview

### `get_vpc.sh`

This script retrieves VPC IDs from a specified AWS region.

**Script Content:**

```bash
#!/bin/bash

# Check if at least one argument is provided
if [ $# -gt 0 ]; then
    REGIONS=$@
    echo "Fetching VPC IDs for regions: $REGIONS"
    for REGION in $REGIONS; do
        aws ec2 describe-vpcs --region ${REGION} | jq ".Vpcs[].VpcId" -r
    done
else
    echo "You have provided $# arguments. Please provide at least one region."
    exit 1
fi
```

### `script.sh`

This script demonstrates the use of special parameters and error handling by checking the AWS CLI version before proceeding to retrieve VPC details.

**Script Content:**

```bash
#!/bin/bash

# Suppress AWS CLI version output
aws --version > /dev/null 2>&1

# Check if the previous command was successful
if [ $? -eq 0 ]; then
    REGIONS=$@
    echo "Fetching VPC IDs for regions: $REGIONS"
    for REGION in $REGIONS; do
        aws ec2 describe-vpcs --region ${REGION} | jq ".Vpcs[].VpcId" -r
    done
else 
    echo "Incorrect AWS command. Please check your AWS CLI installation."
    exit 1
fi
```

## Usage

### Running `get_vpc.sh`

Retrieve VPC IDs from one or multiple AWS regions.

**Example:**

```bash
./get_vpc.sh us-east-1 ap-south-1 us-east-2
```

**Output:**

```
vpc-0abcd1234efgh5678
vpc-1bcde2345fghij678
...
```

### Running `script.sh`

Ensure AWS CLI is correctly installed and then retrieve VPC IDs.

**Example:**

```bash
./script.sh us-east-1 us-east-2 ap-southeast-1
```

**Output:**

```
Fetching VPC IDs for regions: us-east-1 us-east-2 ap-southeast-1
vpc-0abcd1234efgh5678
vpc-1bcde2345fghij678
...
```

**Handling Errors:**

- If AWS CLI is not installed or incorrectly configured, the script will output:

  ```
  Incorrect AWS command. Please check your AWS CLI installation.
  ```

- If no regions are provided as arguments, the script will output:

  ```
  You have provided 0 arguments. Please provide at least one region.
  ```

## Understanding Special Parameters

### `$?` – Exit Code

- Represents the exit status of the last executed command.
- `0` indicates success, while any non-zero value indicates an error.

**Example:**

```bash
ls -al
echo $?  # Outputs 0 if successful
ls nonexistentfile
echo $?  # Outputs a non-zero value indicating an error
```

### `$@` and `$*` – All Positional Parameters

- Both represent all the arguments passed to the script.
- When quoted, `"$@"` expands to each argument as a separate word, while `"$*"` joins all arguments into a single word separated by the first character of `IFS`.

**Usage in Scripts:**

```bash
REGIONS=$@
# or
REGIONS=$*
```

### `$#` – Number of Arguments

- Represents the number of arguments passed to the script.

**Example:**

```bash
echo "Number of arguments: $#"
```
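The practical difference between `$@` and `$*` only shows up once they are quoted and an argument contains a space; a minimal sketch (the region names are made up):

```shell
#!/bin/bash
# Simulate two positional parameters, one containing a space
set -- "us east-1" "ap-south-1"

count_args() { echo $#; }

count_args "$@"   # "$@" keeps each argument as its own word -> prints 2
count_args "$*"   # "$*" joins everything into one word      -> prints 1
```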

## Error Handling and Output Redirection

**Output Redirection:**

- **Standard Output (`stdout`)**: Default output stream.
- **Standard Error (`stderr`)**: Output stream for errors.

**Redirecting Outputs:**

- Suppress standard output:

  ```bash
  aws --version > /dev/null
  ```

- Suppress both standard output and standard error:

  ```bash
  aws --version > /dev/null 2>&1
  ```

**Using Conditional Statements:**

Utilize exit codes to control script flow.

```bash
aws --version > /dev/null 2>&1
if [ $? -eq 0 ]; then
    # Proceed with script
else
    echo "AWS CLI not found. Exiting."
    exit 1
fi
```
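An equivalent, slightly more idiomatic form tests the command directly instead of inspecting `$?` afterwards; a sketch using a generic helper (`check_tool` is a made-up name for illustration):

```shell
#!/bin/bash
check_tool() {
    # command -v succeeds only if the tool exists on PATH
    if ! command -v "$1" > /dev/null 2>&1; then
        echo "$1 not found"
        return 1
    fi
    echo "$1 found"
}

check_tool ls                    # prints "ls found"
check_tool no-such-tool || true  # prints "no-such-tool not found" without aborting
```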
## License

This project is licensed under the [MIT License](LICENSE).

---

*Happy Scripting!*



================================================
FILE: Day 03 OutputRedirection-For-While/README.md
================================================
![03](https://github.com/user-attachments/assets/6be236b3-3be1-4c2d-ade5-3341265b409d)

# Day 03 OutputRedirection-For-While

This project demonstrates **output redirection** and the use of **for** and **while** loops in Bash scripting, along with examples using **standard input**, **output**, and **error** redirections.

## Key Concepts

### Standard Streams:
- **stdin**: Standard Input (File descriptor 0)
- **stdout**: Standard Output (File descriptor 1)
- **stderr**: Standard Error (File descriptor 2)

### Output Redirection:
- `>` : Redirects the output and **overwrites** the content in the file.
- `>>` : Redirects the output and **appends** it to the file.
- **Tee Command**: Redirects the output to a file and **displays it on the screen** simultaneously.

---

## Script Example: `std-script.sh`

This Bash script demonstrates both valid and invalid commands. We'll focus on how to redirect output.

### Script:

```bash
#!/bin/bash
ls -al           # Valid command, prints directory listing
Saikiran         # Invalid command, will trigger an error
df -h            # Valid command, prints disk space usage
Avinash          # Invalid command, will trigger an error
free             # Valid command, prints memory usage
sai              # Invalid command, will trigger an error
cat /etc/hostname # Valid command, prints hostname
avi              # Invalid command, will trigger an error
```

### How to Execute:
1. Save the script as `std-script.sh`.
2. Run it using `bash std-script.sh`.

---

### Requirements:
1. **Print only successful commands**:
   ```bash
   bash std-script.sh 2> /dev/null
   ```
   - Redirects any errors (stderr) to `/dev/null`, so only the output of successful commands is shown.

2. **Print only failed commands**:
   ```bash
   bash std-script.sh 1> /dev/null
   ```
   - Redirects standard output (stdout) to `/dev/null`, so only error messages (stderr) are displayed.

---

### Overwriting and Appending Output:
- To redirect both **stdout** and **stderr** to a file:
  ```bash
  bash std-script.sh > /tmp/error 2>&1
  ```
  - This will **overwrite** the file with both standard output and errors.

- To **append** instead of overwriting:
  ```bash
  bash std-script.sh >> /tmp/error 2>&1
  ```

---

### Display and Save Output:
To display output on the screen **and** save it to a file:
```bash
bash std-script.sh | tee /tmp/tee1
```
- If you want to **append** to the file instead of overwriting:
  ```bash
  bash std-script.sh 2>&1 | tee -a /tmp/tee1
  ```

---

## For Loops vs While Loops

### For Loops:
Used when the number of iterations is known. For example, printing numbers from 1 to 100.

#### Script: `loops.sh`

```bash
#!/bin/bash
for i in {1..100}
do
    echo $i
done
```

### While Loops:
Used when the number of iterations is not known and continues as long as the condition is true.

#### Example:
Check if a website is working using a **while loop**:
```bash
while true
do
    curl -s https://www.google.com | grep -i google
    sleep 1
done
```
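The same `while` construct works with any test, not just `true`; a self-contained sketch that loops until a counter reaches a limit:

```shell
#!/bin/bash
count=0
while [ $count -lt 5 ]; do
    count=$((count + 1))      # loop body runs as long as the condition holds
done
echo "looped $count times"    # prints "looped 5 times"
```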

---

## Working with Python and Bash

### Python Example:
```python
x = 5 * 4
print(x)
```

### Bash Equivalent:
```bash
x=$(expr 5 \* 4)
echo $x
```
- In Bash, `expr` is one way to do arithmetic; the built-in arithmetic expansion, e.g. `x=$((5 * 4))`, is the preferred modern form.

---

## Printing Even and Odd Numbers

### Even Numbers:
```bash
#!/bin/bash
for i in {1..100}; do
    if [ $((i % 2)) -eq 0 ]; then
        echo "$i is an even number"
    fi
done
```

### Even and Odd Numbers:
```bash
#!/bin/bash
for i in {1..100}
do
    if [ $(( i % 2 )) -ne 0 ]; then
        echo "$i is an odd number"
    else
        echo "$i is an even number"
    fi
done
```

---

## Conclusion

This project covers the basic concepts of output redirection in Linux, the usage of for and while loops, and demonstrates both valid and invalid command execution. Whether you are handling script output or automating tasks, understanding how to redirect outputs and loop through commands is essential for DevOps and system automation.

Feel free to explore the scripts, modify them, and experiment with different redirection methods and loop structures!

---

Happy scripting! 😊



================================================
FILE: Day 04 UserAutomation/README.md
================================================
# Day 04 UserAutomation

![a-3d-render-of-a-dark-themed-cybersecurity-confere-TU2eVZcIRda9RcDkaObkyg-yt2DCIPgQIaI9w7_DYZnYw](https://github.com/user-attachments/assets/75314cc4-86a5-41bb-b47b-acb0d3765555)

This script automates the process of creating new users on a Linux system. It checks if a user already exists, creates a new user if they don't, generates a random password with a special character, and forces the user to reset their password on the first login.

## Features:
1. Checks if the provided username already exists in the system.
2. If the user doesn’t exist, it creates the user with a randomly generated password.
3. The password includes a special character and a random number.
4. The user is forced to reset their password during their first login.
5. Supports creating multiple users in one execution.
6. Includes automated SSH configuration changes to enable password authentication.

## Prerequisites:
- You must have root or sudo privileges to run this script.
- Ensure that `passwd` and `sed` are installed on your system.

## How It Works:
1. **Check for Existing Users:**  
   The script checks the `/etc/passwd` file to see if the provided username already exists.
   
2. **Create New User:**  
   If the user does not exist, it creates a new user with the `useradd` command and assigns a randomly generated password.
   
3. **Generate Random Password:**  
   The password is created using a combination of random numbers and a randomly selected special character from a predefined set.
   
4. **SSH Configuration:**  
   The script uses `sed` to modify the `/etc/ssh/sshd_config` file to enable password authentication. It also creates a backup of this file before making changes.
   
5. **Multiple Users Creation:**  
   The script allows you to create multiple users by passing multiple arguments.

## Script Example:

```bash
#!/bin/bash
if [ $# -gt 0 ]; then
    USER=$1
    echo $USER
else
    echo "Please enter a valid parameter"
fi

##ADDING-USER##

#!/bin/bash
if [ $# -gt 0 ]; then
    USER=$1
    echo $USER
    EXISTING_USER=$(grep -i -w "$USER" /etc/passwd | cut -d ":" -f1)
    if [ "${USER}" = "${EXISTING_USER}" ]; then
        echo "The user $USER already exists on this machine. Please enter another username."
    else
        echo "Creating a new user: $USER"
        sudo useradd -m "$USER" --shell /bin/bash
    fi
else
    echo "Please enter a valid parameter"
fi

##password ##

#!/bin/bash
if [ $# -gt 0 ]; then
    USER=$1
    echo $USER
    EXISTING_USER=$(grep -i -w "$USER" /etc/passwd | cut -d ":" -f1)
    if [ "${USER}" = "${EXISTING_USER}" ]; then
        echo "The user $USER already exists on this machine. Please enter another username."
    else
        echo "Creating a new user: $USER"
        sudo useradd -m "$USER" --shell /bin/bash
        SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
        PASSWORD="IndianArmy@${RANDOM}${SPEC}"
        echo "$USER:$PASSWORD" | sudo chpasswd
        echo "The temporary password for $USER is ${PASSWORD}"
        sudo passwd -e "$USER"
    fi
else
    echo "Please enter a valid parameter"
fi

# sed -i "58 s/.*PasswordAuthentication.*/PasswordAuthentication yes/g" /etc/ssh/sshd_config

##Multi User passing ##

#!/bin/bash
if [ $# -gt 0 ]; then
    for USER in $@; do
        echo $USER
        EXISTING_USER=$(grep -i -w "$USER" /etc/passwd | cut -d ":" -f1)
        if [ "${USER}" = "${EXISTING_USER}" ]; then
            echo "The user $USER already exists on this machine. Please enter another username."
        else
            echo "Creating a new user: $USER"
            sudo useradd -m "$USER" --shell /bin/bash
            SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
            PASSWORD="IndianArmy@${RANDOM}${SPEC}"
            echo "$USER:$PASSWORD" | sudo chpasswd
            echo "The temporary password for $USER is ${PASSWORD}"
            sudo passwd -e "$USER"
        fi
    done
else
    echo " Please Enter the Valid parameter "

fi

##regex##

#regex- Regular Expressions#
#!/bin/bash
if [ $# -gt 0 ]; then
    for USER in $@; do
        echo $USER
        if [[ $USER =~ ^[a-zA-Z]+$ ]]; then
            EXISTING_USER=$(grep -i -w "$USER" /etc/passwd | cut -d ':' -f1)
            if [ "${USER}" = "${EXISTING_USER}" ]; then
                echo "$USER already exists. Please create a new user."
            else
                echo "Creating the new user: $USER"
                sudo useradd -m "$USER" --shell /bin/bash
                SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
                PASSWORD="IndianArmy@${RANDOM}${SPEC}"
                echo "$USER:$PASSWORD" | sudo chpasswd
                echo "The temporary password for the user is ${PASSWORD}"
                sudo passwd -e "$USER"
            fi
        else
            echo "The User Must Contain Alphabets"
        fi
    done
else
    echo "Please pass the Argument"
fi

```

## SSH Configuration (Optional):
To enable password authentication for newly created users, the script modifies the SSH configuration using `sed`. This is important for AWS instances, where password authentication is disabled by default.

```bash
# Backup the sshd_config file
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config_backup

# Modify the sshd_config file to enable password authentication
sudo sed -i "s/.*PasswordAuthentication.*/PasswordAuthentication yes/g" /etc/ssh/sshd_config

# Restart the SSH service
sudo service sshd restart
```

## How to Run the Script:
1. Save the script as `user-automation.sh`.
2. Run the script with a username as an argument:
   ```bash
   bash user-automation.sh username1 username2
   ```
   Example:
   ```bash
   bash user-automation.sh alice bob
   ```

## Notes:
- Ensure that password authentication is enabled on your system if you want to use password-based login for the newly created users.
- This script automatically forces the new user to reset their password on first login.



================================================
FILE: Day 04 UserAutomation/script.sh
================================================
#!/bin/bash
if [ $# -gt 0 ]; then
    USER=$1
    echo $USER
else
    echo "Please enter a valid parameter"
fi

##ADDING-USER##

#!/bin/bash
if [ $# -gt 0 ]; then
    USER=$1
    echo $USER
    EXISTING_USER=$(grep -i -w "$USER" /etc/passwd | cut -d ":" -f1)
    if [ "${USER}" = "${EXISTING_USER}" ]; then
        echo "The user $USER already exists on this machine. Please enter another username."
    else
        echo "Creating a new user: $USER"
        sudo useradd -m "$USER" --shell /bin/bash
    fi
else
    echo "Please enter a valid parameter"
fi

##password ##

#!/bin/bash
if [ $# -gt 0 ]; then
    USER=$1
    echo $USER
    EXISTING_USER=$(grep -i -w "$USER" /etc/passwd | cut -d ":" -f1)
    if [ "${USER}" = "${EXISTING_USER}" ]; then
        echo "The user $USER already exists on this machine. Please enter another username."
    else
        echo "Creating a new user: $USER"
        sudo useradd -m "$USER" --shell /bin/bash
        SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
        PASSWORD="IndianArmy@${RANDOM}${SPEC}"
        echo "$USER:$PASSWORD" | sudo chpasswd
        echo "The temporary password for $USER is ${PASSWORD}"
        sudo passwd -e "$USER"
    fi
else
    echo "Please enter a valid parameter"
fi

# sed -i "58 s/.*PasswordAuthentication.*/PasswordAuthentication yes/g" /etc/ssh/sshd_config

##Multi User passing ##

#!/bin/bash
if [ $# -gt 0 ]; then
    for USER in $@; do
        echo $USER
        EXISTING_USER=$(grep -i -w "$USER" /etc/passwd | cut -d ":" -f1)
        if [ "${USER}" = "${EXISTING_USER}" ]; then
            echo "The user $USER already exists on this machine. Please enter another username."
        else
            echo "Creating a new user: $USER"
            sudo useradd -m "$USER" --shell /bin/bash
            SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
            PASSWORD="IndianArmy@${RANDOM}${SPEC}"
            echo "$USER:$PASSWORD" | sudo chpasswd
            echo "The temporary password for $USER is ${PASSWORD}"
            sudo passwd -e "$USER"
        fi
    done
else
    echo " Please Enter the Valid parameter "

fi

##regex##

#regex- Regular Expressions#
#!/bin/bash
if [ $# -gt 0 ]; then
    for USER in $@; do
        echo $USER
        if [[ $USER =~ ^[a-zA-Z]+$ ]]; then
            EXISTING_USER=$(grep -i -w "$USER" /etc/passwd | cut -d ':' -f1)
            if [ "${USER}" = "${EXISTING_USER}" ]; then
                echo "$USER already exists. Please create a new user."
            else
                echo "Creating the new user: $USER"
                sudo useradd -m "$USER" --shell /bin/bash
                SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
                PASSWORD="IndianArmy@${RANDOM}${SPEC}"
                echo "$USER:$PASSWORD" | sudo chpasswd
                echo "The temporary password for the user is ${PASSWORD}"
                sudo passwd -e "$USER"
            fi
        else
            echo "The User Must Contain Alphabets"
        fi
    done
else
    echo "Please pass the Argument"
fi


================================================
FILE: Day 05 RegEx-Break-Continue-CustomExitCodes/README.md
================================================
# Day 05 RegEx-Break-Continue-CustomExitCodes

![05](https://github.com/user-attachments/assets/27fd624d-bb91-46d5-b710-3b04db991e75)


## Features:
1. **Regular Expressions in Shell Scripts**
2. **Break and Continue for Iteration Control**
3. **Custom Exit Codes**
4. **Arrays in Shell Scripts**

---

## 1. **User Automation with Regex**

Regular expressions are a powerful tool in shell scripts for tasks like input validation. In this repository, we demonstrate how to use regular expressions to enforce patterns in username creation, specifically requiring users to create usernames that follow a certain format (e.g., `3 lowercase letters followed by 3 numbers`).

**Example:**
```bash
if [[ $USER =~ ^[a-z]{3}[0-9]{3}$ ]] ; then
  echo "Username is valid"
else
  echo "Username is invalid"
fi
```

## 2. **Common Regex Patterns:**

- `\d` - Matches any digit.
- `\D` - Matches any non-digit character.
- `\s` - Matches any whitespace.
- `\W` - Matches any non-word character (like punctuation).

**Example:**
To find a phone number pattern like `123-456-7890`, you can use:
```regex
\d{3}-\d{3}-\d{4}
```
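Note that `\d`, `\s`, and friends are Perl-style (PCRE) classes: plain `grep -E` does not understand them, so with POSIX ERE the same phone pattern is written with bracket expressions. A minimal sketch:

```shell
# PCRE form (requires grep -P, GNU grep only): \d{3}-\d{3}-\d{4}
# Portable ERE form of the same pattern:
printf '%s\n' "123-456-7890" "12-34" | grep -E '^[0-9]{3}-[0-9]{3}-[0-9]{4}$'
# matches only the first line
```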

---

## 3. **Iteration Control Using Break and Continue**

In shell scripting, `break` and `continue` are essential for controlling loops.

- **Break**: Used to exit a loop when a condition is met.
- **Continue**: Used to skip the current iteration of the loop and move on to the next iteration.

**Example:**
```bash
for i in {1..10}; do
  if [[ $i -eq 5 ]]; then
    break  # Stops the loop when i equals 5
  fi
  echo $i
done
```
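For contrast, `continue` skips only the current iteration instead of ending the loop; a minimal sketch:

```shell
for i in {1..10}; do
  if [[ $i -eq 5 ]]; then
    continue  # Skips 5 and moves on to 6
  fi
  echo $i
done
# Prints 1 2 3 4 6 7 8 9 10 (5 is skipped)
```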

## 4. **Custom Exit Codes**

In shell scripts, you can use custom exit codes to signal the success or failure of commands. For instance, if an AWS command runs successfully, but you encounter a regional endpoint issue, you can check the exit status to determine what happened.

**Example:**
```bash
aws ec2 describe-vpcs --region us-east-1
if [[ $? -ne 0 ]]; then
  echo "Incorrect region, exiting"
  exit 1
else
  echo "Correct region"
fi
```

## 5. **Arrays in Shell Scripts**

Arrays are a useful way to handle multiple values in a shell script. You can manipulate strings or data using array operations.

**Example:**
```bash
NAME='SaikiranPinapathruni'
echo ${#NAME}  # Outputs the length of the string

for (( i=0; i<${#NAME}; i++ )); do  # brace expansion can't use variables, so use a C-style loop
  echo ${NAME:$i:1}  # Prints one character at a time
done
```
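The example above manipulates a string; Bash arrays proper look like this (a minimal sketch with made-up region names):

```shell
#!/bin/bash
regions=(us-east-1 ap-south-1 eu-west-1)

echo "${#regions[@]}"   # number of elements: 3
echo "${regions[1]}"    # second element (zero-indexed): ap-south-1

for r in "${regions[@]}"; do
    echo "region: $r"
done
```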

---

## 6. **Practical Scenarios:**

1. **Regex for Phone Numbers**:
   - Extract phone numbers starting with a specific pattern like `1-234`.
   
   Example regex: `\d-[234]\d\d-\d\d\d-\d\d\d\d`

2. **Shell Script for User Creation**:
   - Create two users: one with lowercase letters and one with uppercase letters.
   
3. **Exit Code Handling**:
   - Check whether a command executed successfully and handle errors gracefully based on the exit code.

---

## Conclusion

This repository provides a detailed guide on how to use regular expressions, break/continue, arrays, and exit codes in shell scripts. These concepts are essential for automating tasks and creating efficient shell scripts that handle various scenarios gracefully.

---




================================================
FILE: Day 05 RegEx-Break-Continue-CustomExitCodes/break.sh
================================================
#!/bin/bash
aws_regions=(us-east-1 us-east-2 hyd-india-1 eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2)

echo "Running the function to list VPCs using the regions list"

for region in "${aws_regions[@]}"; do
    echo "Getting VPCs in $region .. "
    vpc_list=$(aws ec2 describe-vpcs --region "$region" | jq -r .Vpcs[].VpcId)
    vpc_arr=($vpc_list)  # word-split the newline-separated IDs into an array

    if [ ${#vpc_arr[@]} -gt 0 ]; then
        for vpc in "${vpc_arr[@]}"; do
            echo "The VPC-ID is: $vpc"
        done
        echo "##########"
    else
        echo "Invalid Region..!!"
        echo "#######"
        echo "# Breaking at $region #"
        echo "################"
        break
    fi
done


================================================
FILE: Day 05 RegEx-Break-Continue-CustomExitCodes/continue.sh
================================================
#!/bin/bash
# CONTINUE
aws_regions=(us-east-1 us-east-2 hyd-india-1 eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2)

echo "Running the function to list VPCs using the regions list"

for region in "${aws_regions[@]}"; do
    echo "Getting VPCs in $region .. "
    vpc_list=$(aws ec2 describe-vpcs --region "$region" | jq -r .Vpcs[].VpcId)
    vpc_arr=($vpc_list)  # word-split the newline-separated IDs into an array

    if [ ${#vpc_arr[@]} -gt 0 ]; then
        for vpc in "${vpc_arr[@]}"; do
            echo "The VPC-ID is: $vpc"
        done
        echo "##########"
    else
        echo "Invalid Region..!!"
        echo "#######"
        echo "# Skipping $region #"
        echo "################"
        #break
        #exit 99
        continue
    fi
done



================================================
FILE: Day 05 RegEx-Break-Continue-CustomExitCodes/exit-code.sh
================================================
#!/bin/bash
###### EXIT CODE ############
aws_regions=(us-east-1 us-east-2 hyd-india-1 eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2)

echo "Running the function to list VPCs using the regions list"

for region in "${aws_regions[@]}"; do
    echo "Getting VPCs in $region .. "
    vpc_list=$(aws ec2 describe-vpcs --region "$region" | jq -r .Vpcs[].VpcId)
    vpc_arr=($vpc_list)  # word-split the newline-separated IDs into an array

    if [ ${#vpc_arr[@]} -gt 0 ]; then
        for vpc in "${vpc_arr[@]}"; do
            echo "The VPC-ID is: $vpc"
        done
        echo "##########"
    else
        echo "Invalid Region..!!"
        echo "#######"
        echo "# Exiting at $region #"
        echo "################"
        #break
        exit 99
    fi
done


================================================
FILE: Day 06 Functions/README.md
================================================
# Day 06: Functions and Scripts

## Overview

In this session, we explore functions in shell scripting and how they help you organize and reuse code. Even if shell scripts use them sparingly, the concept becomes crucial when you transition to languages like Python.

## What is a Function?

A **function** is a block of code that can be called whenever needed. It allows for code reuse and better organization.

### Example in Python

```python
def addition(a, b):  # Passing two parameters: a and b
    return a + b  # Returns the sum of a and b

# Calling the function
result_a = addition(2, 3)
result_b = addition(4, 5)
result_c = addition(10, 20)

print(result_a + result_b + result_c)  # Outputs the sum of all results
```
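
The same function translated to shell, where arguments arrive as `$1` and `$2` and a value is "returned" by echoing it to stdout:

```bash
#!/bin/bash
addition() {
    echo $(( $1 + $2 ))  # Emit the sum on stdout
}

# Calling the function and capturing its output
result_a=$(addition 2 3)
result_b=$(addition 4 5)
result_c=$(addition 10 20)

echo $(( result_a + result_b + result_c ))  # 44
```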

### Importance of Functions

Functions only execute when they are called. Terraform works the same way: built-in functions such as `element()` and `length()` are evaluated where you call them, often alongside the `count` meta-argument (`var.subnets` below is a hypothetical list variable):

```hcl
count     = length(var.subnets)
subnet_id = element(var.subnets, count.index)
```

### Installing Docker

The Docker convenience install script at [https://get.docker.com](https://get.docker.com) is a good real-world example of function-based shell scripting; a copy is included in this folder as `docker.sh`.

## Defining Functions in Shell Scripting

In shell scripting, you can define functions in two ways:

1. **Using the `function` keyword:**

   ```bash
   function hello {
       # code
   }
   ```

2. **Using parentheses:**

   ```bash
   hello() {
       # code
   }
   ```
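
Whichever form you use, a function is invoked by name, with any arguments available inside it as `$1`, `$2`, and so on:

```bash
#!/bin/bash
hello() {
    echo "Hello, $1!"  # $1 is the first argument passed to the function
}

hello DevOps   # prints: Hello, DevOps!
```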

## Checking Installed Commands

You can check if a command is installed using:

```bash
command -v jq
echo $?  # $? holds the exit status of the last command; 0 means jq is installed

command -v aq
echo $?  # Non-zero: no such command exists
```

Without a `command_exists` helper function, you would repeat these checks throughout your script; wrapping the logic in a function removes that redundancy.
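
The helper itself is tiny; this is essentially the same definition used in `docker.sh` later in this folder:

```bash
#!/bin/bash
command_exists() {
    command -v "$@" >/dev/null 2>&1  # succeed silently if the command is found
}

if command_exists jq; then
    echo "jq is installed"
else
    echo "jq is missing"
fi
```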

## Running the Delete Volume Scripts

1. **Create three 1 GB EBS volumes.**
2. To automate this task daily, we’ll use **Cron Jobs**.

### Understanding Cron Jobs

To set up a Cron job, you would:

```bash
crontab -e  # Edit the crontab file
# Add the following line:
* * * * * sudo bash /root/deleteebs.sh us-east-1  # Adjust timing as needed
```

Ensure that your script is saved at `/root/deleteebs.sh`.

## Scheduling Adjustments

If you want the task to run every 10 minutes, use:

```
*/10 * * * * sudo bash /root/deleteebs.sh us-east-1
```

## Nginx Server Installation and Test

1. Install the Nginx server on your instance.
2. Access it and generate a simple HTML game:

   ```bash
   nano /var/www/html/index.html  # Make your changes here
   ```

3. Set up uptime monitoring with StatusCake:

   - Log in with Google.
   - Create a new uptime test with the URL and desired parameters.

## Calling Multiple Functions

In your script, you can call multiple functions. At the end of your script, you might have:

```bash
vpc "$@"  # Quoting $@ passes each region as a separate argument
```
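
A sketch of that structure (the function names are illustrative, and the `aws` call is replaced by an `echo` so the script runs anywhere):

```bash
#!/bin/bash
# Each function owns one task; "$@" forwards all region arguments untouched.
check_deps() {
    command -v jq >/dev/null 2>&1 || echo "warning: jq not found"
}

vpc() {
    for region in "$@"; do
        echo "Listing VPCs in: $region"  # placeholder for the real aws call
    done
}

check_deps
vpc "$@"   # e.g. ./script.sh us-east-1 eu-west-1
```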

## Interview Question Example

**Question:** In one system, how can I find files larger than 10 MB?

**Answer:** You could list files and check their sizes with `du`, but the `find` command is more direct:

```bash
find / -size +10M 2>/dev/null             # files larger than 10 MB
find / -size +50M -size -60M 2>/dev/null  # files between 50 MB and 60 MB
```

### Explanation:

- `/`: The starting directory for the search (root).
- `-size +10M`: Finds files larger than 10 MB.
- `-size +50M -size -60M`: Combines two size tests to match files between 50 MB and 60 MB.
- `2>/dev/null`: Redirects error messages (e.g., permission denied) to `/dev/null`.

## Log Rotation Script

Log rotation helps manage log files by preventing them from growing indefinitely. When log files reach a certain size, the rotation script will execute to keep things organized.
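
A minimal sketch of that idea (the path and size threshold are illustrative; the full version is in `log-rotation.sh`):

```bash
#!/bin/bash
LOG_FILE="/tmp/myapp.log"
MAX_SIZE=1000000   # rotate once the file exceeds ~1 MB

rotate_log() {
    local size
    size=$(stat -c %s "$LOG_FILE" 2>/dev/null || echo 0)  # GNU stat; 0 if missing
    if [ "$size" -gt "$MAX_SIZE" ]; then
        mv "$LOG_FILE" "${LOG_FILE}.$(date +%Y%m%d%H%M%S)"  # timestamped backup
        : > "$LOG_FILE"                                     # start a fresh, empty log
        echo "Rotated $LOG_FILE"
    fi
}

rotate_log
```

Scheduled from cron (as shown above), this keeps the live log file bounded while preserving older entries in the backups.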

---


================================================
FILE: Day 06 Functions/docker.sh
================================================
#!/bin/sh
set -e
# Docker Engine for Linux installation script.
#
# This script is intended as a convenient way to configure docker's package
# repositories and to install Docker Engine, This script is not recommended
# for production environments. Before running this script, make yourself familiar
# with potential risks and limitations, and refer to the installation manual
# at https://docs.docker.com/engine/install/ for alternative installation methods.
#
# The script:
#
# - Requires `root` or `sudo` privileges to run.
# - Attempts to detect your Linux distribution and version and configure your
#   package management system for you.
# - Doesn't allow you to customize most installation parameters.
# - Installs dependencies and recommendations without asking for confirmation.
# - Installs the latest stable release (by default) of Docker CLI, Docker Engine,
#   Docker Buildx, Docker Compose, containerd, and runc. When using this script
#   to provision a machine, this may result in unexpected major version upgrades
#   of these packages. Always test upgrades in a test environment before
#   deploying to your production systems.
# - Isn't designed to upgrade an existing Docker installation. When using the
#   script to update an existing installation, dependencies may not be updated
#   to the expected version, resulting in outdated versions.
#
# Source code is available at https://github.com/docker/docker-install/
#
# Usage
# ==============================================================================
#
# To install the latest stable versions of Docker CLI, Docker Engine, and their
# dependencies:
#
# 1. download the script
#
#   $ curl -fsSL https://get.docker.com -o install-docker.sh
#
# 2. verify the script's content
#
#   $ cat install-docker.sh
#
# 3. run the script with --dry-run to verify the steps it executes
#
#   $ sh install-docker.sh --dry-run
#
# 4. run the script either as root, or using sudo to perform the installation.
#
#   $ sudo sh install-docker.sh
#
# Command-line options
# ==============================================================================
#
# --version <VERSION>
# Use the --version option to install a specific version, for example:
#
#   $ sudo sh install-docker.sh --version 23.0
#
# --channel <stable|test>
#
# Use the --channel option to install from an alternative installation channel.
# The following example installs the latest versions from the "test" channel,
# which includes pre-releases (alpha, beta, rc):
#
#   $ sudo sh install-docker.sh --channel test
#
# Alternatively, use the script at https://test.docker.com, which uses the test
# channel as default.
#
# --mirror <Aliyun|AzureChinaCloud>
#
# Use the --mirror option to install from a mirror supported by this script.
# Available mirrors are "Aliyun" (https://mirrors.aliyun.com/docker-ce), and
# "AzureChinaCloud" (https://mirror.azure.cn/docker-ce), for example:
#
#   $ sudo sh install-docker.sh --mirror AzureChinaCloud
#
# ==============================================================================

# Git commit from https://github.com/docker/docker-install when
# the script was uploaded (Should only be modified by upload job):
SCRIPT_COMMIT_SHA="39040d838e8bcc48c23a0cc4117475dd15189976"

# strip "v" prefix if present
VERSION="${VERSION#v}"

# The channel to install from:
#   * stable
#   * test
DEFAULT_CHANNEL_VALUE="stable"
if [ -z "$CHANNEL" ]; then
    CHANNEL=$DEFAULT_CHANNEL_VALUE
fi

DEFAULT_DOWNLOAD_URL="https://download.docker.com"
if [ -z "$DOWNLOAD_URL" ]; then
    DOWNLOAD_URL=$DEFAULT_DOWNLOAD_URL
fi

DEFAULT_REPO_FILE="docker-ce.repo"
if [ -z "$REPO_FILE" ]; then
    REPO_FILE="$DEFAULT_REPO_FILE"
fi

mirror=''
DRY_RUN=${DRY_RUN:-}
while [ $# -gt 0 ]; do
    case "$1" in
    --channel)
        CHANNEL="$2"
        shift
        ;;
    --dry-run)
        DRY_RUN=1
        ;;
    --mirror)
        mirror="$2"
        shift
        ;;
    --version)
        VERSION="${2#v}"
        shift
        ;;
    --*)
        echo "Illegal option $1"
        ;;
    esac
    shift $(($# > 0 ? 1 : 0))
done

case "$mirror" in
Aliyun)
    DOWNLOAD_URL="https://mirrors.aliyun.com/docker-ce"
    ;;
AzureChinaCloud)
    DOWNLOAD_URL="https://mirror.azure.cn/docker-ce"
    ;;
"") ;;
*)
    echo >&2 "unknown mirror '$mirror': use either 'Aliyun', or 'AzureChinaCloud'."
    exit 1
    ;;
esac

case "$CHANNEL" in
stable | test) ;;
*)
    echo >&2 "unknown CHANNEL '$CHANNEL': use either stable or test."
    exit 1
    ;;
esac

command_exists() {
    command -v "$@" >/dev/null 2>&1
}

# version_gte checks if the version specified in $VERSION is at least the given
# SemVer (Maj.Minor[.Patch]), or CalVer (YY.MM) version. It returns 0 (success)
# if $VERSION is either unset (=latest) or newer or equal than the specified
# version, or returns 1 (fail) otherwise.
#
# examples:
#
# VERSION=23.0
# version_gte 23.0  // 0 (success)
# version_gte 20.10 // 0 (success)
# version_gte 19.03 // 0 (success)
# version_gte 26.1  // 1 (fail)
version_gte() {
    if [ -z "$VERSION" ]; then
        return 0
    fi
    version_compare "$VERSION" "$1"
}

# version_compare compares two version strings (either SemVer (Major.Minor.Path),
# or CalVer (YY.MM) version strings. It returns 0 (success) if version A is newer
# or equal than version B, or 1 (fail) otherwise. Patch releases and pre-release
# (-alpha/-beta) are not taken into account
#
# examples:
#
# version_compare 23.0.0 20.10 // 0 (success)
# version_compare 23.0 20.10   // 0 (success)
# version_compare 20.10 19.03  // 0 (success)
# version_compare 20.10 20.10  // 0 (success)
# version_compare 19.03 20.10  // 1 (fail)
version_compare() (
    set +x

    yy_a="$(echo "$1" | cut -d'.' -f1)"
    yy_b="$(echo "$2" | cut -d'.' -f1)"
    if [ "$yy_a" -lt "$yy_b" ]; then
        return 1
    fi
    if [ "$yy_a" -gt "$yy_b" ]; then
        return 0
    fi
    mm_a="$(echo "$1" | cut -d'.' -f2)"
    mm_b="$(echo "$2" | cut -d'.' -f2)"

    # trim leading zeros to accommodate CalVer
    mm_a="${mm_a#0}"
    mm_b="${mm_b#0}"

    if [ "${mm_a:-0}" -lt "${mm_b:-0}" ]; then
        return 1
    fi

    return 0
)

is_dry_run() {
    if [ -z "$DRY_RUN" ]; then
        return 1
    else
        return 0
    fi
}

is_wsl() {
    case "$(uname -r)" in
    *microsoft*) true ;; # WSL 2
    *Microsoft*) true ;; # WSL 1
    *) false ;;
    esac
}

is_darwin() {
    case "$(uname -s)" in
    *darwin*) true ;;
    *Darwin*) true ;;
    *) false ;;
    esac
}

deprecation_notice() {
    distro=$1
    distro_version=$2
    echo
    printf "\033[91;1mDEPRECATION WARNING\033[0m\n"
    printf "    This Linux distribution (\033[1m%s %s\033[0m) reached end-of-life and is no longer supported by this script.\n" "$distro" "$distro_version"
    echo "    No updates or security fixes will be released for this distribution, and users are recommended"
    echo "    to upgrade to a currently maintained version of $distro."
    echo
    printf "Press \033[1mCtrl+C\033[0m now to abort this script, or wait for the installation to continue."
    echo
    sleep 10
}

get_distribution() {
    lsb_dist=""
    # Every system that we officially support has /etc/os-release
    if [ -r /etc/os-release ]; then
        lsb_dist="$(. /etc/os-release && echo "$ID")"
    fi
    # Returning an empty string here should be alright since the
    # case statements don't act unless you provide an actual value
    echo "$lsb_dist"
}

echo_docker_as_nonroot() {
    if is_dry_run; then
        return
    fi
    if command_exists docker && [ -e /var/run/docker.sock ]; then
        (
            set -x
            $sh_c 'docker version'
        ) || true
    fi

    # intentionally mixed spaces and tabs here -- tabs are stripped by "<<-EOF", spaces are kept in the output
    echo
    echo "================================================================================"
    echo
    if version_gte "20.10"; then
        echo "To run Docker as a non-privileged user, consider setting up the"
        echo "Docker daemon in rootless mode for your user:"
        echo
        echo "    dockerd-rootless-setuptool.sh install"
        echo
        echo "Visit https://docs.docker.com/go/rootless/ to learn about rootless mode."
        echo
    fi
    echo
    echo "To run the Docker daemon as a fully privileged service, but granting non-root"
    echo "users access, refer to https://docs.docker.com/go/daemon-access/"
    echo
    echo "WARNING: Access to the remote API on a privileged Docker daemon is equivalent"
    echo "         to root access on the host. Refer to the 'Docker daemon attack surface'"
    echo "         documentation for details: https://docs.docker.com/go/attack-surface/"
    echo
    echo "================================================================================"
    echo
}

# Check if this is a forked Linux distro
check_forked() {

    # Check for lsb_release command existence, it usually exists in forked distros
    if command_exists lsb_release; then
        # Check if the `-u` option is supported
        set +e
        lsb_release -a -u >/dev/null 2>&1
        lsb_release_exit_code=$?
        set -e

        # Check if the command has exited successfully, it means we're in a forked distro
        if [ "$lsb_release_exit_code" = "0" ]; then
            # Print info about current distro
            cat <<-EOF
			You're using '$lsb_dist' version '$dist_version'.
			EOF

            # Get the upstream release info
            lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[:space:]')
            dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[:space:]')

            # Print info about upstream distro
            cat <<-EOF
			Upstream release is '$lsb_dist' version '$dist_version'.
			EOF
        else
            if [ -r /etc/debian_version ] && [ "$lsb_dist" != "ubuntu" ] && [ "$lsb_dist" != "raspbian" ]; then
                if [ "$lsb_dist" = "osmc" ]; then
                    # OSMC runs Raspbian
                    lsb_dist=raspbian
                else
                    # We're Debian and don't even know it!
                    lsb_dist=debian
                fi
                dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
                case "$dist_version" in
                12)
                    dist_version="bookworm"
                    ;;
                11)
                    dist_version="bullseye"
                    ;;
                10)
                    dist_version="buster"
                    ;;
                9)
                    dist_version="stretch"
                    ;;
                8)
                    dist_version="jessie"
                    ;;
                esac
            fi
        fi
    fi
}

do_install() {
    echo "# Executing docker install script, commit: $SCRIPT_COMMIT_SHA"

    if command_exists docker; then
        cat >&2 <<-'EOF'
			Warning: the "docker" command appears to already exist on this system.

			If you already have Docker installed, this script can cause trouble, which is
			why we're displaying this warning and provide the opportunity to cancel the
			installation.

			If you installed the current Docker package using this script and are using it
			again to update Docker, you can safely ignore this message.

			You may press Ctrl+C now to abort this script.
		EOF
        (
            set -x
            sleep 20
        )
    fi

    user="$(id -un 2>/dev/null || true)"

    sh_c='sh -c'
    if [ "$user" != 'root' ]; then
        if command_exists sudo; then
            sh_c='sudo -E sh -c'
        elif command_exists su; then
            sh_c='su -c'
        else
            cat >&2 <<-'EOF'
			Error: this installer needs the ability to run commands as root.
			We are unable to find either "sudo" or "su" available to make this happen.
			EOF
            exit 1
        fi
    fi

    if is_dry_run; then
        sh_c="echo"
    fi

    # perform some very rudimentary platform detection
    lsb_dist=$(get_distribution)
    lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')"

    if is_wsl; then
        echo
        echo "WSL DETECTED: We recommend using Docker Desktop for Windows."
        echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop/"
        echo
        cat >&2 <<-'EOF'

			You may press Ctrl+C now to abort this script.
		EOF
        (
            set -x
            sleep 20
        )
    fi

    case "$lsb_dist" in

    ubuntu)
        if command_exists lsb_release; then
            dist_version="$(lsb_release --codename | cut -f2)"
        fi
        if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then
            dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")"
        fi
        ;;

    debian | raspbian)
        dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
        case "$dist_version" in
        12)
            dist_version="bookworm"
            ;;
        11)
            dist_version="bullseye"
            ;;
        10)
            dist_version="buster"
            ;;
        9)
            dist_version="stretch"
            ;;
        8)
            dist_version="jessie"
            ;;
        esac
        ;;

    centos | rhel)
        if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
            dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
        fi
        ;;

    *)
        if command_exists lsb_release; then
            dist_version="$(lsb_release --release | cut -f2)"
        fi
        if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
            dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
        fi
        ;;

    esac

    # Check if this is a forked Linux distro
    check_forked

    # Print deprecation warnings for distro versions that recently reached EOL,
    # but may still be commonly used (especially LTS versions).
    case "$lsb_dist.$dist_version" in
    centos.8 | centos.7 | rhel.7)
        deprecation_notice "$lsb_dist" "$dist_version"
        ;;
    debian.buster | debian.stretch | debian.jessie)
        deprecation_notice "$lsb_dist" "$dist_version"
        ;;
    raspbian.buster | raspbian.stretch | raspbian.jessie)
        deprecation_notice "$lsb_dist" "$dist_version"
        ;;
    ubuntu.bionic | ubuntu.xenial | ubuntu.trusty)
        deprecation_notice "$lsb_dist" "$dist_version"
        ;;
    ubuntu.mantic | ubuntu.lunar | ubuntu.kinetic | ubuntu.impish | ubuntu.hirsute | ubuntu.groovy | ubuntu.eoan | ubuntu.disco | ubuntu.cosmic)
        deprecation_notice "$lsb_dist" "$dist_version"
        ;;
    fedora.*)
        if [ "$dist_version" -lt 39 ]; then
            deprecation_notice "$lsb_dist" "$dist_version"
        fi
        ;;
    esac

    # Run setup for each distro accordingly
    case "$lsb_dist" in
    ubuntu | debian | raspbian)
        pre_reqs="ca-certificates curl"
        apt_repo="deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] $DOWNLOAD_URL/linux/$lsb_dist $dist_version $CHANNEL"
        (
            if ! is_dry_run; then
                set -x
            fi
            $sh_c 'apt-get -qq update >/dev/null'
            $sh_c "DEBIAN_FRONTEND=noninteractive apt-get -y -qq install $pre_reqs >/dev/null"
            $sh_c 'install -m 0755 -d /etc/apt/keyrings'
            $sh_c "curl -fsSL \"$DOWNLOAD_URL/linux/$lsb_dist/gpg\" -o /etc/apt/keyrings/docker.asc"
            $sh_c "chmod a+r /etc/apt/keyrings/docker.asc"
            $sh_c "echo \"$apt_repo\" > /etc/apt/sources.list.d/docker.list"
            $sh_c 'apt-get -qq update >/dev/null'
        )
        pkg_version=""
        if [ -n "$VERSION" ]; then
            if is_dry_run; then
                echo "# WARNING: VERSION pinning is not supported in DRY_RUN"
            else
                # Will work for incomplete versions IE (17.12), but may not actually grab the "latest" if in the test channel
                pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/~ce~.*/g' | sed 's/-/.*/g')"
                search_command="apt-cache madison docker-ce | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3"
                pkg_version="$($sh_c "$search_command")"
                echo "INFO: Searching repository for VERSION '$VERSION'"
                echo "INFO: $search_command"
                if [ -z "$pkg_version" ]; then
                    echo
                    echo "ERROR: '$VERSION' not found amongst apt-cache madison results"
                    echo
                    exit 1
                fi
                if version_gte "18.09"; then
                    search_command="apt-cache madison docker-ce-cli | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3"
                    echo "INFO: $search_command"
                    cli_pkg_version="=$($sh_c "$search_command")"
                fi
                pkg_version="=$pkg_version"
            fi
        fi
        (
            pkgs="docker-ce${pkg_version%=}"
            if version_gte "18.09"; then
                # older versions didn't ship the cli and containerd as separate packages
                pkgs="$pkgs docker-ce-cli${cli_pkg_version%=} containerd.io"
            fi
            if version_gte "20.10"; then
                pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version"
            fi
            if version_gte "23.0"; then
                pkgs="$pkgs docker-buildx-plugin"
            fi
            if ! is_dry_run; then
                set -x
            fi
            $sh_c "DEBIAN_FRONTEND=noninteractive apt-get -y -qq install $pkgs >/dev/null"
        )
        echo_docker_as_nonroot
        exit 0
        ;;
    centos | fedora | rhel)
        repo_file_url="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE"
        (
            if ! is_dry_run; then
                set -x
            fi
            if command_exists dnf5; then
                # $sh_c "dnf -y -q --setopt=install_weak_deps=False install dnf-plugins-core"
                # $sh_c	"dnf5 config-manager addrepo --save-filename=docker-ce.repo --from-repofile='$repo_file_url'"

                $sh_c "dnf -y -q --setopt=install_weak_deps=False install curl dnf-plugins-core"
                # FIXME(thaJeztah); strip empty lines as workaround for https://github.com/rpm-software-management/dnf5/issues/1603
                TMP_REPO_FILE="$(mktemp --dry-run)"
                $sh_c "curl -fsSL '$repo_file_url' | tr -s '\n' > '${TMP_REPO_FILE}'"
                $sh_c "dnf5 config-manager addrepo --save-filename=docker-ce.repo --overwrite --from-repofile='${TMP_REPO_FILE}'"
                $sh_c "rm -f '${TMP_REPO_FILE}'"

                if [ "$CHANNEL" != "stable" ]; then
                    $sh_c "dnf5 config-manager setopt \"docker-ce-*.enabled=0\""
                    $sh_c "dnf5 config-manager setopt \"docker-ce-$CHANNEL.enabled=1\""
                fi
                $sh_c "dnf makecache"
            elif command_exists dnf; then
                $sh_c "dnf -y -q --setopt=install_weak_deps=False install dnf-plugins-core"
                $sh_c "dnf config-manager --add-repo $repo_file_url"

                if [ "$CHANNEL" != "stable" ]; then
                    $sh_c "dnf config-manager --set-disabled \"docker-ce-*\""
                    $sh_c "dnf config-manager --set-enabled \"docker-ce-$CHANNEL\""
                fi
                $sh_c "dnf makecache"
            else
                $sh_c "yum -y -q install yum-utils"
                $sh_c "yum config-manager --add-repo $repo_file_url"

                if [ "$CHANNEL" != "stable" ]; then
                    $sh_c "yum config-manager --disable \"docker-ce-*\""
                    $sh_c "yum config-manager --enable \"docker-ce-$CHANNEL\""
                fi
                $sh_c "yum makecache"
            fi
        )
        pkg_version=""
        if command_exists dnf; then
            pkg_manager="dnf"
            pkg_manager_flags="-y -q --best"
        else
            pkg_manager="yum"
            pkg_manager_flags="-y -q"
        fi
        if [ -n "$VERSION" ]; then
            if is_dry_run; then
                echo "# WARNING: VERSION pinning is not supported in DRY_RUN"
            else
                if [ "$lsb_dist" = "fedora" ]; then
                    pkg_suffix="fc$dist_version"
                else
                    pkg_suffix="el"
                fi
                pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/\\\\.ce.*/g' | sed 's/-/.*/g').*$pkg_suffix"
                search_command="$pkg_manager list --showduplicates docker-ce | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'"
                pkg_version="$($sh_c "$search_command")"
                echo "INFO: Searching repository for VERSION '$VERSION'"
                echo "INFO: $search_command"
                if [ -z "$pkg_version" ]; then
                    echo
                    echo "ERROR: '$VERSION' not found amongst $pkg_manager list results"
                    echo
                    exit 1
                fi
                if version_gte "18.09"; then
                    # older versions don't support a cli package
                    search_command="$pkg_manager list --showduplicates docker-ce-cli | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'"
                    cli_pkg_version="$($sh_c "$search_command" | cut -d':' -f 2)"
                fi
                # Cut out the epoch and prefix with a '-'
                pkg_version="-$(echo "$pkg_version" | cut -d':' -f 2)"
            fi
        fi
        (
            pkgs="docker-ce$pkg_version"
            if version_gte "18.09"; then
                # older versions didn't ship the cli and containerd as separate packages
                if [ -n "$cli_pkg_version" ]; then
                    pkgs="$pkgs docker-ce-cli-$cli_pkg_version containerd.io"
                else
                    pkgs="$pkgs docker-ce-cli containerd.io"
                fi
            fi
            if version_gte "20.10"; then
                pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version"
            fi
            if version_gte "23.0"; then
                pkgs="$pkgs docker-buildx-plugin"
            fi
            if ! is_dry_run; then
                set -x
            fi
            $sh_c "$pkg_manager $pkg_manager_flags install $pkgs"
        )
        echo_docker_as_nonroot
        exit 0
        ;;
    sles)
        if [ "$(uname -m)" != "s390x" ]; then
            echo "Packages for SLES are currently only available for s390x"
            exit 1
        fi
        repo_file_url="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE"
        pre_reqs="ca-certificates curl libseccomp2 awk"
        (
            if ! is_dry_run; then
                set -x
            fi
            $sh_c "zypper install -y $pre_reqs"
            $sh_c "zypper addrepo $repo_file_url"
            if ! is_dry_run; then
                cat >&2 <<-'EOF'
						WARNING!!
						openSUSE repository (https://download.opensuse.org/repositories/security:/SELinux) will be enabled now.
						Do you wish to continue?
						You may press Ctrl+C now to abort this script.
						EOF
                (
                    set -x
                    sleep 30
                )
            fi
            opensuse_repo="https://download.opensuse.org/repositories/security:/SELinux/openSUSE_Factory/security:SELinux.repo"
            $sh_c "zypper addrepo $opensuse_repo"
            $sh_c "zypper --gpg-auto-import-keys refresh"
            $sh_c "zypper lr -d"
        )
        pkg_version=""
        if [ -n "$VERSION" ]; then
            if is_dry_run; then
                echo "# WARNING: VERSION pinning is not supported in DRY_RUN"
            else
                pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/\\\\.ce.*/g' | sed 's/-/.*/g')"
                search_command="zypper search -s --match-exact 'docker-ce' | grep '$pkg_pattern' | tail -1 | awk '{print \$6}'"
                pkg_version="$($sh_c "$search_command")"
                echo "INFO: Searching repository for VERSION '$VERSION'"
                echo "INFO: $search_command"
                if [ -z "$pkg_version" ]; then
                    echo
                    echo "ERROR: '$VERSION' not found amongst zypper list results"
                    echo
                    exit 1
                fi
                search_command="zypper search -s --match-exact 'docker-ce-cli' | grep '$pkg_pattern' | tail -1 | awk '{print \$6}'"
                # It's okay for cli_pkg_version to be blank, since older versions don't support a cli package
                cli_pkg_version="$($sh_c "$search_command")"
                pkg_version="-$pkg_version"
            fi
        fi
        (
            pkgs="docker-ce$pkg_version"
            if version_gte "18.09"; then
                if [ -n "$cli_pkg_version" ]; then
                    # older versions didn't ship the cli and containerd as separate packages
                    pkgs="$pkgs docker-ce-cli-$cli_pkg_version containerd.io"
                else
                    pkgs="$pkgs docker-ce-cli containerd.io"
                fi
            fi
            if version_gte "20.10"; then
                pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version"
            fi
            if version_gte "23.0"; then
                pkgs="$pkgs docker-buildx-plugin"
            fi
            if ! is_dry_run; then
                set -x
            fi
            $sh_c "zypper -q install -y $pkgs"
        )
        echo_docker_as_nonroot
        exit 0
        ;;
    *)
        if [ -z "$lsb_dist" ]; then
            if is_darwin; then
                echo
                echo "ERROR: Unsupported operating system 'macOS'"
                echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop"
                echo
                exit 1
            fi
        fi
        echo
        echo "ERROR: Unsupported distribution '$lsb_dist'"
        echo
        exit 1
        ;;
    esac
    exit 1
}

# wrapped up in a function so that we have some protection against only getting
# half the file during "curl | sh"
do_install


================================================
FILE: Day 06 Functions/ebs.sh
================================================
#!/bin/bash

delete_vols() {
    # Fetch all volume IDs
    vols=$(aws ec2 describe-volumes | jq -r ".Volumes[].VolumeId")

    for vol in $vols; do
        # Fetch volume details
        volume_info=$(aws ec2 describe-volumes --volume-ids "$vol")
        size=$(echo "$volume_info" | jq ".Volumes[].Size")
        state=$(echo "$volume_info" | jq -r ".Volumes[].State")

        # Skip attached volumes and volumes larger than 5 GB
        if [ "$state" == "in-use" ]; then
            echo "$vol is attached to an instance. Skipping deletion."
        elif [ "$size" -gt 5 ]; then
            echo "$vol is larger than 5GB. Skipping deletion."
        else
            echo "Deleting Volume $vol"
            aws ec2 delete-volume --volume-id "$vol"
        fi
    done
}

# Call the function
delete_vols


================================================
FILE: Day 06 Functions/log-rotation.sh
================================================
#!/bin/bash

# Configuration
LOG_FILE="/var/log/syslog"          # Path to your log file
MAX_SIZE=100000000                  # Maximum size in bytes (100 MB)
BACKUP_DIR="/var/log/myapp/backups" # Directory to store rotated logs
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")  # Timestamp for backup filename

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Function to rotate log files
rotate_logs() {
    if [ -f "$LOG_FILE" ]; then
        echo "Rotating log file: $LOG_FILE"
        mv "$LOG_FILE" "$BACKUP_DIR/myapp_$TIMESTAMP.log" # Rename the log file with a timestamp
        touch "$LOG_FILE"                                 # Create a new empty log file
        echo "Log file rotated and stored as $BACKUP_DIR/myapp_$TIMESTAMP.log"
    else
        echo "Log file $LOG_FILE does not exist."
    fi
}

# Check if the log file size exceeds the maximum size
if [ -f "$LOG_FILE" ]; then
    FILE_SIZE=$(stat -c%s "$LOG_FILE") # Get the size of the log file in bytes
    if [ "$FILE_SIZE" -gt "$MAX_SIZE" ]; then
        rotate_logs
    else
        echo "Log file size is under control: ${FILE_SIZE} bytes"
    fi
else
    echo "Log file does not exist. No action taken."
fi


================================================
FILE: Day 06 Functions/multi-function.sh
================================================
#!/bin/bash
function subnets {
    echo "************************************************************"
    echo "**Getting SUBNETS Info VPC $VPC in region $REGION**"
    echo "************************************************************"
    aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPC" --region "$REGION" | jq ".Subnets[].SubnetId"
    echo "---------------------------------------------"
}

function sg {
    echo "********************************************************************"
    echo "**Getting Security Group Info VPC $VPC in region $REGION**"
    echo "********************************************************************"
    aws ec2 describe-security-groups --filters "Name=vpc-id,Values=$VPC" --region "$REGION" | jq ".SecurityGroups[].GroupName"
    echo "---------------------------------------------"
}

vpcs() {
    for REGION in "$@"; do
        echo "Getting VPC List For Region $REGION..."
        vpcs=$(aws ec2 describe-vpcs --region "$REGION" | jq -r ".Vpcs[].VpcId")
        echo "$vpcs"
        echo "--------------------------------------------------"
        # subnets and sg read the REGION/VPC loop variables directly
        for VPC in $vpcs; do
            subnets
            # sg
        done
        # for VPC in $vpcs; do
        #     sg
        # done
    done
}

vpcs "$@"


================================================
FILE: Day 07 Git-1/README.md
================================================
# Day 07 GIT Azure Terraform JIRA

![a-3d-scene-with-a-terraform-logo-on-one-side-and-a-UJgTFv-TSs-3jQkKJsSVGQ-TP18QzX3TRGEJlCl2aGlmA](https://github.com/user-attachments/assets/df80ecf8-a04e-45b1-9540-0759a6ea8fa2)


## Overview

This project demonstrates using Git for version control while developing infrastructure with Terraform on Azure. We'll cover setting up Git, Terraform, and pushing infrastructure code to a remote GitHub repository.

## Table of Contents
1. [Git and Remote Repositories](#git-and-remote-repositories)
2. [Setting Up Environment](#setting-up-environment)
3. [Azure Service Principal](#azure-service-principal)
4. [Terraform Project](#terraform-project)
5. [Managing State and GitHub](#managing-state-and-github)
6. [Branching Strategy](#branching-strategy)

## Git and Remote Repositories

Git is a tool that helps track changes in code and push it to a remote repository such as GitHub, GitLab, Bitbucket, or Azure DevOps. In a collaborative environment, all team members work on the same repository to manage changes effectively.

For this project, we are using Terraform to create infrastructure on Azure, and Git to version control the Terraform code.

## Setting Up Environment

### Step 1: Install Git and Terraform
- **Git Installation**:
  - Download Git and check the installation via PowerShell: 
    ```sh
    git --version
    ```

- **Terraform Installation**:
  - Create a folder named `software` in C drive.
  - Download Terraform binary, save it in the folder, extract it, and add its path to the system environment variables:
    ```sh
    sysdm.cpl > Advanced > Environment Variables > Path > Edit > New (paste path)
    ```

### Step 2: Create Project Folder
- Create a folder named `Azure-Tera-Git`.
- Inside, create a file called `Credentials` to store credentials.

## Azure Service Principal

To authenticate between Azure and Terraform:

1. **Azure EntraID** > **App Registration** > **New Registration**.
   - Register an app named `DevSecOps-Saikiran` (Service Principal).
   - Collect `ClientID` and `TenantID`.

2. Go to **Certificates & Secrets** and create a new client secret.

3. Navigate to **Subscriptions**:
   - Create a subscription and copy the `SubscriptionID`.
   - Assign roles:
     - **IAM** > **Add role assignment** > **Privileged administrator roles** > **Contributor** > **Select Members**.

## Terraform Project

### Resources to Create
- Resource Groups (RG)
- Virtual Network & Subnets
- Network Security Group (NSG) and Rules
- Random Passwords
- Save Passwords in Key Vault
- Deploy Virtual Machine using passwords from the Key Vault

### Code Structure
- **provider.tf**: Configure Azure provider for Terraform.
  ```hcl
  provider "azurerm" {
    features {}
  }
  ```

### Commands
- **Initialize Terraform**:
  ```sh
  terraform init
  ```
  (This downloads the Azure provider.)

- **Deploy Resources**:
  - Create Resource Groups, Virtual Networks, etc., using the keyword `resource`.
  - The `resource` block is used for all resources, including security groups, VPCs, etc.

- **Manage State File**:
  - Keep track of the infrastructure state.
  - Store the state file in an Azure Storage account to maintain consistency:
    - **Storage Accounts** > **Containers** > Create container (`tfstate`).

- **Apply Configuration**:
  ```sh
  terraform init; terraform fmt; terraform validate; terraform plan; terraform apply
  ```

## Managing State and GitHub

### Initialize Git Repository
- Create a GitHub repository as **private**.
- Set up SSH keys for authentication:
  ```sh
  ssh-keygen
  ```
  Copy the `.pub` key and store it in GitHub.
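
  A minimal sketch of generating a dedicated key pair (the `ed25519` key type, comment, and scratch path here are illustrative choices, not GitHub requirements):

  ```shell
  # Generate a passphrase-less demo key in a scratch directory
  keydir=$(mktemp -d)
  ssh-keygen -t ed25519 -N "" -C "github-demo" -q -f "$keydir/id_ed25519"
  cat "$keydir/id_ed25519.pub"   # paste this into GitHub > Settings > SSH and GPG keys
  ```

  Once the key is added, `ssh -T git@github.com` should greet you by username, confirming authentication works.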

### Version Control Steps
1. **Initialize Git**:
   ```sh
   git init
   ```
2. **Create `.gitignore`** to exclude unnecessary files.
3. **Commit Changes**:
   ```sh
   git add . && git commit -m "terraform Azure Base Code"
   ```
4. **Push to Remote Repository**:
   ```sh
   git branch -m master development
   git push origin development
   ```

### Virtual Network Deployment
- Add code for virtual networks, apply changes, and push the updated code to GitHub.

## Branching Strategy

### Create Branches
- **Production Branch**:
  ```sh
  git checkout -b production
  git push origin production
  ```
- **Feature Branch for Updates**:
  - Create new features in separate branches:
    ```sh
    git checkout -b feature/subnet
    ```
  - Develop, test, and then create a Pull Request (PR) for merging changes into the **development** or **production** branch.

### Merging with Pull Request
- Create a PR in GitHub to merge changes from development to production.
- Add comments and request approval from reviewers.
- Once approved, merge the code.

### Create JIRA Branch
- Create a branch based on a JIRA ticket for tracking:
  ```sh
  git checkout -b JIRA-123
  ```
- Implement Azure Storage account code, commit, and push to the JIRA branch.
- Create a PR to merge the feature, add relevant comments, and ensure code review.



================================================
FILE: Day 08 Git-2/README.md
================================================
# Day 08 Git-2


================================================
FILE: Day 09 Git-3/README.md
================================================
![an-eye-catching-illustration-of-a-git-merge-and-gi-mich74xdR-iNzhh-DPdCaw-dDLWCUYQQtKBuum9wR-h7w](https://github.com/user-attachments/assets/affbf339-6c43-4fa4-a9e5-a3edf2961a33)


# Git Basics: Rebase, Reset, Stash, and Git Secrets

This repository provides practical examples and explanations on fundamental Git operations such as `rebase`, `reset`, `stash`, and securing sensitive information with `git-secrets`.

## Table of Contents

- [Rebase](#rebase)
- [Reset](#reset)
- [Stash](#stash)
- [Git Secrets](#git-secrets)

---

## Rebase

### What is Git Rebase?

Rebasing in Git is used to take the changes from one branch (usually a development branch) and apply them on top of another branch (typically the master branch). This results in a linear commit history, providing a cleaner log. However, it rewrites commit history, which can cause issues in a collaborative environment.

### Example:

1. Create the master branch and commit changes:
   ```bash
   mkdir rebase-example && cd rebase-example
   git init

   I=1
   while [ $I -lt 6 ]
   do
       echo "Master $I time" > MasterFile$I
       git add . && git commit -m "Master Commit $I"
       I=$((I+1))
   done
   ```

2. Create the development branch and add commits:
   ```bash
   git checkout -b development
   I=1
   while [ $I -lt 6 ]
   do
       echo "Development $I time" > DevFile$I
       git add . && git commit -m "Development Commit $I"
       I=$((I+1))
   done
   ```

3. Now, rebase the `development` branch onto `master`:
   ```bash
   git checkout development
   git rebase master
   git log --oneline
   ```

### Golden Rule of Rebase:

According to Google’s and Bitbucket's guidelines, **never rebase commits that you’ve already pushed to a shared repository**. This can cause confusion for your collaborators as it rewrites the commit history.

---

## Reset

### Types of Git Reset:

1. **Soft Reset**: Only resets the commit history, files remain intact.
2. **Hard Reset**: Removes both commit history and files, reverting to a previous state.

### Example:

1. Create 20 commits in a repository:
   ```bash
   mkdir reset-example && cd reset-example
   git init

   I=1
   while [ $I -lt 21 ]
   do
       echo "Commit $I content" > File$I
       git add . && git commit -m "Commit $I"
       I=$((I+1))
   done
   ```

2. Perform a hard reset to an earlier commit:
   ```bash
   git reset --hard <commit-id>
   git log --oneline
   ls -al
   ```

3. Perform a soft reset:
   ```bash
   git reset --soft <commit-id>
   ls -al  # Files will remain intact
   ```

4. If changes were pushed to the remote repository, use the following command to force-push after a reset:
   ```bash
   git push origin master --force
   ```
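
The soft/hard distinction can be seen end-to-end in a throwaway repository (paths, identities, and commit messages below are illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com && git config user.name Demo
for i in 1 2 3; do echo "content $i" > "File$i"; git add .; git commit -qm "Commit $i"; done
first=$(git rev-list --max-parents=0 HEAD)   # oldest commit id
git reset --soft "$first"    # history rewound; File2/File3 still on disk (and staged)
ls File3
git reset --hard "$first"    # index and working tree reverted too: File2/File3 removed
ls File1
```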

---

## Stash

### What is Git Stash?

Git stash is used to temporarily save your uncommitted changes so that you can work on something else. Later, you can retrieve those changes using `git stash pop`.

### Example:

1. Modify `app.py`:
   ```bash
   nano app.py
   # Add some code, like:
   print("Hello Saikiran")
   ```

2. If you need to switch to another task quickly without committing:
   ```bash
   git stash
   ```

3. To retrieve the stashed changes:
   ```bash
   git stash pop
   ```

In interviews, mention that `stash` is primarily used for temporarily saving work without committing.
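
A self-contained walkthrough of the stash cycle (the scratch repository and file contents are illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com && git config user.name Demo
echo 'print("Hello")' > app.py
git add . && git commit -qm "initial"
echo 'print("Hello Saikiran")' >> app.py   # uncommitted work in progress
git stash                                  # working tree is clean again
git stash pop                              # work in progress restored
grep Saikiran app.py
```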

---

## Git Secrets

### Protect Sensitive Information

Developers or DevOps engineers sometimes mistakenly commit sensitive information (API keys, PEM files, etc.) into repositories. To prevent this, we can use `git-secrets`.

### Example:

1. Install `git-secrets`:
   ```bash
   git clone https://github.com/awslabs/git-secrets.git
   cd git-secrets
   sudo apt install make -y
   make install
   git secrets --install
   ```

2. Register AWS patterns:
   ```bash
   git secrets --register-aws
   ```

3. Create a file containing sensitive information and attempt to commit it:
   ```bash
   nano keys
   # Add some AWS access keys
   git add . && git commit -m "AWS keys"
   ```

4. `git-secrets` will block this commit if sensitive information is detected.

---

## Conclusion

This repository covers essential Git operations:
- **Rebase** for cleaner history but with caution.
- **Reset** for undoing commits.
- **Stash** for temporarily saving work.
- **Git Secrets** for protecting sensitive information.

These concepts are critical for anyone working with version control and especially useful in DevOps and development workflows.


================================================
FILE: Day 10 AWS-Terraform-Part-1/README.md
================================================
![a-3d-render-of-a-youtube-thumbnail-with-the-text-d-6vFmUIlxRQ2-ERpv-XkPmg-98wY6FuxTTeyHEHWaD8X5w](https://github.com/user-attachments/assets/5ff94fd5-09ee-4fc9-87df-e16f87bab83c)


# Terraform Day 01 Provider Block - Resource Block - S3 backend - Data Source - Remote Data Source Backend 

# Code used in video https://github.com/saikiranpi/Terraformsingleinstance.git

# Infrastructure as Code (IaC) with Terraform and Cloud Native Tools (CNT)

## Overview

In this repository, we explore Infrastructure as Code (IaC) using both Cloud Native Tools (CNT) and Terraform. We'll compare AWS CloudFormation (CFT), Azure Resource Manager (ARM), and GCP Deployment Manager with Terraform. Additionally, we'll cover practical Terraform code examples for AWS, including how to manage infrastructure with modules, data sources, and remote state management.

### Tools Overview:

1. **AWS**: CloudFormation (CFT)
2. **Azure**: Azure Resource Manager (ARM)
3. **GCP**: Deployment Manager

### Key Differences between CNT (CFT, ARM) & Terraform:

| Feature                          | CFT & ARM                           | Terraform                      |
|-----------------------------------|--------------------------------------|---------------------------------|
| Language                          | JSON or YAML (All configs in one file) | HashiCorp Configuration Language (HCL) |
| Complexity                        | Learning JSON/YAML is difficult       | HCL is simpler and modular     |
| Cloud Compatibility               | AWS (CFT), Azure (ARM) only          | Multi-cloud (AWS, Azure, GCP)  |
| Module Support                    | No                                  | Yes, with reusable modules     |
| Workspace Support                 | No                                  | Yes, supports multiple workspaces |
| Dry-Run Capability                | Limited                             | `terraform plan` for effective dry-run |
| Importing Resources               | Complex in AWS, not available in ARM | Simple with `terraform import` |

---

## Terraform and Other HashiCorp Tools:

Terraform is a HashiCorp tool that is cloud-agnostic, which means you can use the same logic to deploy resources across multiple clouds, including AWS, Azure, and GCP. Alongside Terraform, HashiCorp also provides:

- **Packer**: For image automation
- **Consul**: For service discovery and cluster management
- **Vault**: For secure secrets management
- **Nomad**: For workload orchestration (alternative to Kubernetes)

---

## Getting Started with Terraform

### 1. Main Configuration (`main.tf`):
This is the main file where we define which cloud provider we will be deploying resources to, in this case, AWS.

```hcl
provider "aws" {
  region = "us-west-2"
}

# Other resource definitions will follow...
```

You don't need to hard-code your AWS credentials in the code; instead, you can configure them using the `aws configure` command after installing the AWS CLI.
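
Alternatively, the AWS provider picks up credentials from the standard environment variables (the values below are placeholders only; never commit real keys):

```shell
# Placeholder credentials for illustration -- replace with your own
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_DEFAULT_REGION="us-west-2"
```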

---

### 2. Create Your First VPC (`vpc.tf`):

In Terraform, any service created is referred to as a **resource**.

```hcl
resource "aws_vpc" "my_vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "My-VPC"
  }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.my_vpc.id
  tags = {
    Name = "My-Internet-Gateway"
  }
}
```

### 3. Using Data Sources:

Data sources are used to fetch information from existing resources in your cloud environment. For example, we can fetch an existing VPC using its tag name:

```hcl
data "aws_vpc" "Test-Vpc" {
  filter {
    name   = "tag:Name"
    values = ["Test-Vpc"]
  }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = data.aws_vpc.Test-Vpc.id
}
```

### 4. Remote State Management:

After deploying your resources, Terraform generates a state file. This state file can be reused to deploy the same infrastructure in another project. We can manage this using Terraform's remote state:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "project1/terraform.tfstate"
    region = "us-west-2"
  }
}
```

After setting this up, initialize the backend:

```bash
terraform init
```

---

### Sample Workflow:

1. **Write Terraform Config**: Create resource files (`vpc.tf`, `ec2.tf`).
2. **Initialize**: Run `terraform init` to set up the environment.
3. **Plan**: Run `terraform plan` to perform a dry-run and check for any potential issues.
4. **Apply**: Run `terraform apply` to provision the resources.
5. **State Management**: Use remote state for managing large infrastructures and multiple environments.

### Additional Resources:

- **AWS Resources**: VPC, Internet Gateway, Subnets, Security Groups, EC2 instances.
- **Data Sources**: Reuse and reference existing resources.
- **Remote State**: Manage infrastructure state across projects.

---

## Conclusion

Terraform offers greater flexibility and multi-cloud support compared to cloud-native tools like CloudFormation (CFT) and Azure Resource Manager (ARM). It simplifies resource management through modules, reusable code, and a powerful state management system. This repository contains code examples and best practices for managing your cloud infrastructure using Terraform.


================================================
FILE: Day 11 AWS-Terraform-Part-2/README.md
================================================
![Untitled design](https://github.com/user-attachments/assets/d7d9ad96-e14e-40d8-ac6f-93004fb69da0)



# Terraform Day 02 - Dependencies, Variables,  TFVars and Create Before Destroy

Today, we'll dive into **dependencies in Terraform** and cover two main topics:  
1. **Implicit and Explicit Dependencies**  
2. **Variables and TFVars**

## Dependencies in Terraform

Terraform automatically handles resource dependencies in two ways:

### 1. Implicit Dependencies
An **implicit dependency** occurs when one resource refers to the attribute of another resource. For example, when creating a VPC and then an Internet Gateway, the Internet Gateway doesn't inherently know that it must wait for the VPC to be created. However, when you reference the VPC ID in the Internet Gateway resource, Terraform understands that the VPC must be created first.

- **Example:**  
  When you declare a VPC, its ID is generated only after it is created. Any resource, like a subnet or Internet Gateway, that references this VPC ID creates an implicit dependency.

### 2. Explicit Dependencies
Sometimes, implicit dependencies aren’t enough. For example, if we want the **S3 bucket** to be created only after the VPC is created, we need to use explicit dependencies. This is done using the `depends_on` argument in Terraform.

- **Example:**  
  A **NAT Gateway** should only be created after a **Route Table** has been established. If the NAT Gateway is created before the route table, it won’t function as expected. This is where **explicit dependencies** come into play using `depends_on`.

### Task Example: VPC, Internet Gateway, and S3 Bucket
- First, we’ll create a **VPC** and an **S3 bucket**. Since there's no direct dependency between the VPC and the S3 bucket, Terraform may create the S3 bucket first.
- To enforce order, we’ll explore how to use `depends_on` to make sure that resources like the **NAT Gateway** and **S3 bucket** are created in the correct sequence.

### Create S3 Buckets
1. Create an `s3.tf` file.
2. In it, define three S3 buckets.
3. Observe that the S3 buckets and VPC will deploy in parallel because there is no dependency between them.

To ensure that the S3 bucket is created **after** the VPC, we’ll add explicit dependencies using the `depends_on` argument.
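
A minimal sketch of that ordering (resource and bucket names here are illustrative):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_s3_bucket" "logs" {
  bucket = "my-example-logs-bucket"

  # Without this, the bucket and the VPC would be created in parallel
  depends_on = [aws_vpc.main]
}
```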

---

## Variables and TFVars

### Variables
Variables allow us to easily change values without editing the code directly. This makes managing infrastructure more flexible and reusable.

### TFVars
Terraform variable values can be stored in separate `.tfvars` files, helping to:
- Keep the code clean.
- Manage sensitive data or multiple environments efficiently.

### Removing Lock Files
Remove any stale `.terraform.tfstate.lock.info` files before redeploying to avoid state-locking issues.

---

## Create Before Destroy

When replacing a resource, Terraform's default is to destroy the old one first and then create its replacement. The **create before destroy** pattern reverses that order, ensuring minimal downtime by provisioning the replacement before the original is destroyed.

- **Example:**  
  When updating a resource like a **Key Pair** or upgrading a component, Terraform will first create the new key, then destroy the old one after the new one is functional.
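
This behavior is opted into with a `lifecycle` block (the key pair resource shown is illustrative):

```hcl
resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = file("~/.ssh/id_rsa.pub")

  lifecycle {
    create_before_destroy = true
  }
}
```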

### Task: Example Deployment
1. Deploy the resource.
2. Run `terraform plan` and observe the changes. (Copy the output to a Notepad for reference.)
3. Deploy the resource.
4. Add an additional name to the S3 bucket and reapply the changes to see how Terraform manages updates.

---

## Prevent Destroy

Use `prevent_destroy` to safeguard critical resources. This is especially useful for resources like databases or sensitive buckets where destruction could cause significant issues.
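
`prevent_destroy` is also a `lifecycle` argument; Terraform errors out on any plan that would destroy the resource (the bucket name is illustrative):

```hcl
resource "aws_s3_bucket" "critical_data" {
  bucket = "my-critical-data-bucket"

  lifecycle {
    prevent_destroy = true
  }
}
```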

---

By the end of this session, you’ll have a deeper understanding of how Terraform handles dependencies, the flexibility of variables, and the best practices for managing infrastructure deployment and updates.

---



================================================
FILE: Day 12 AWS-Terraform-Part-3/README.md
================================================

![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-cTa2tZAgR1ShW2UwqRQdcQ-fbR1bkc9RlC23TynNHoRhA](https://github.com/user-attachments/assets/13e11914-f6c0-409a-9c9c-a9ce08f926be)


# Terraform Workspaces for Multi-Environment Infrastructure

This repository demonstrates how to set up and manage multiple identical environments (Dev, UAT, and Prod) using Terraform Workspaces. Each environment will have 3 servers with unique naming conventions. The state management for each environment is handled separately using Terraform's state backend in S3 with DynamoDB for state locking.

## Prerequisites

- Terraform installed on your local machine.
- AWS CLI configured with proper permissions.
- S3 bucket for state backend.
- DynamoDB table for state file locking.

## Infrastructure Overview

You will be deploying three environments:
- **Dev**: 3 Servers
- **UAT**: 3 Servers
- **Prod**: 3 Servers

Each environment will have its own Terraform `.tfvars` file to manage configuration differences like naming conventions.

## Step-by-Step Guide

### 1. Clone the Base Infrastructure

Clone the base Terraform infrastructure and make the necessary changes to create multiple environments.

### 2. Setup State Backend

Create an S3 bucket to store Terraform state files and configure it as a backend in your `main.tf`. Ensure that the bucket is set up before proceeding.

### 3. Create Environment-Specific `.tfvars` Files

- Rename the existing `terraform.tfvars` to `dev.tfvars`.
- Create `uat.tfvars` and `prod.tfvars` with environment-specific changes (like naming conventions for servers).

### 4. Initialize and Validate Terraform

```bash
terraform init
terraform validate
terraform fmt
```

### 5. Apply Terraform Configuration

Deploy the infrastructure for each environment using the appropriate `.tfvars` file.

#### For Dev Environment:
```bash
terraform workspace new dev
terraform apply -var-file=dev.tfvars
```

#### For UAT Environment:
```bash
terraform workspace new uat
terraform apply -var-file=uat.tfvars
```

#### For Prod Environment:
```bash
terraform workspace new prod
terraform apply -var-file=prod.tfvars
```

### 6. Managing State Files for Different Environments

Each environment requires its own state file. If all environments share the same backend key, they share a single state, and an apply for one environment will try to change the resources recorded by another.

To manage state files for different environments, use Terraform workspaces:

```bash
terraform workspace new dev
terraform workspace new uat
terraform workspace new prod
```

Each workspace stores its state under a separate `env:/<workspace-name>/` prefix in the S3 bucket, keeping every environment’s state file isolated.

### 7. Adding EC2 Instances

Modify the `ec2.tf` file to add the EC2 instance configurations:
- Use different AMI IDs for each environment.
- Example of setting the server name:
  ```hcl
  server_name = "${var.env}-Server-1"
  ```

### 8. User Data Configuration

Add user data to the EC2 instances to update the web server’s index page. Note that `${var.env}` is interpolated by Terraform when the script is embedded in the configuration (e.g. via `templatefile`); the shell itself never sees the placeholder:
```bash
#!/bin/bash
echo "Hello from ${var.env}" > /var/www/html/index.nginx-debian.html
```

### 9. Switch Between Workspaces

To switch between environments, use the `terraform workspace` commands:

```bash
terraform workspace select dev
terraform plan -var-file=dev.tfvars
terraform apply -var-file=dev.tfvars
```

Repeat the process for UAT and Prod environments by selecting their respective workspaces.

### 10. Check Public IPs of All Servers

After deployment, verify the public IP addresses of the servers in each environment.

### 11. Clean Up (Destroy Infrastructure)

To destroy resources from each environment:
```bash
terraform workspace select prod
terraform destroy -var-file=prod.tfvars

terraform workspace select dev
terraform destroy -var-file=dev.tfvars

terraform workspace select uat
terraform destroy -var-file=uat.tfvars
```

### 12. Delete Workspaces

Once the environments are destroyed, delete the workspaces:
```bash
terraform workspace delete dev
terraform workspace delete uat
terraform workspace delete prod
```

### 13. DynamoDB for State Locking

To avoid state file conflicts, implement state locking using DynamoDB.

1. Create a `dynamodb.tf` file:
    ```hcl
    resource "aws_dynamodb_table" "terraform_locks" {
      name         = "terraform-state-lock"
      billing_mode = "PAY_PER_REQUEST"
      hash_key     = "LockID"
  
      attribute {
        name = "LockID"
        type = "S"
      }
    }
    ```

2. Apply the DynamoDB configuration:
    ```bash
    terraform apply
    ```

3. Add the DynamoDB state locking configuration to your backend in `main.tf`:
    ```hcl
    backend "s3" {
      bucket         = "your-s3-bucket"
      key            = "path/to/terraform.tfstate"
      region         = "us-west-2"
      dynamodb_table = "terraform-state-lock"
    }
    ```

### 14. Excluding DynamoDB from Terraform State

If you wish to manage DynamoDB outside of Terraform to prevent it from being destroyed, remove it from the state file:

```bash
terraform state rm aws_dynamodb_table.terraform_locks
```

### 15. Push Code to GitHub

Once all the files are ready, push them to your GitHub repository:

```bash
git init
git add .
git commit -m "Initial commit for Terraform multi-environment setup"
git remote add origin https://github.com/your-username/terraform-multi-env.git
git push -u origin main
```

### 16. Deploying the Infrastructure from GitHub

1. Clone the repository onto your local machine or remote instance:
    ```bash
    git clone https://github.com/your-username/terraform-multi-env.git
    ```
2. Run the Terraform commands to deploy the infrastructure:
    ```bash
    terraform init
    terraform plan -var-file=dev.tfvars
    terraform apply -var-file=dev.tfvars
    ```

---

## Conclusion

This project demonstrates how to manage multiple identical environments (Dev, UAT, Prod) using Terraform Workspaces, S3 for state management, and DynamoDB for state locking. Be sure to separate your environments' state files to avoid conflicts and manage infrastructure more effectively.

Feel free to explore, modify, and extend this setup for your own infrastructure needs.

--- 



================================================
FILE: Day 13 AWS-Terraform-Part-4/README.md
================================================

![Untitled design](https://github.com/user-attachments/assets/58f96a76-cbc0-4ba5-ae0c-41e6f85c9b2b)


# Terraform Day 5: Enabling TF_LOG and Working with Sensitive Information

## Overview

In this session, we explore how to enable logging in Terraform using environment variables, how to handle sensitive information such as passwords, and how to integrate AWS Secrets Manager for securely storing sensitive data. We also demonstrate deploying an RDS MySQL instance with Terraform.

## Topics Covered

1. **Enabling TF_LOG for Debugging**
2. **Working with Sensitive Information**
3. **Using AWS Secrets Manager with Terraform**
4. **Deploying RDS MySQL Instance**

## Enabling TF_LOG

Terraform provides the `TF_LOG` environment variable for controlling log verbosity. You can choose from different levels like `TRACE`, `DEBUG`, `INFO`, `WARN`, and `ERROR`.

### Steps to Enable TF_LOG

1. **Set TF_LOG for detailed trace logs:**

    ```powershell
    $env:TF_LOG = "TRACE"
    terraform destroy
    ```

2. **Set TF_LOG for error-level logging:**

    ```powershell
    $env:TF_LOG = "ERROR"
    terraform destroy
    ```

3. **Write logs to a file:**

    ```powershell
    $env:TF_LOG = "TRACE"
    $env:TF_LOG_PATH = "./logs/terraform.log"
    terraform destroy
    ```
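
On Linux/macOS shells the equivalents are plain environment variables (the log directory below is an illustrative choice):

```shell
export TF_LOG="TRACE"
export TF_LOG_PATH="./logs/terraform.log"   # optional: capture logs to a file
mkdir -p ./logs                             # TF_LOG_PATH's directory must exist
```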

## Handling Sensitive Information

When working with sensitive data like usernames and passwords, it is important to avoid hardcoding them in the Terraform scripts. Instead, use variables marked as `sensitive`.

### Example

In your `variables.tf`:

```hcl
variable "username" {
  type      = string
  sensitive = true
}

variable "password" {
  type      = string
  sensitive = true
}
```
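
Values for these variables can then be supplied outside the code, for example via Terraform's `TF_VAR_<name>` environment-variable convention (the values below are placeholders):

```shell
# Placeholder values -- supply real ones from a secret store, not your shell history
export TF_VAR_username="dbadmin"
export TF_VAR_password="example-not-a-real-password"
```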

### Storing Passwords Securely with AWS Secrets Manager

To securely store and retrieve sensitive information like passwords, you can use AWS Secrets Manager.

1. **Generate a random password:**

    ```hcl
    resource "random_password" "master" {
      length           = 16
      special          = true
      override_special = "_!%^"
    }
    ```

2. **Store the password in AWS Secrets Manager:**

    ```hcl
    resource "aws_secretsmanager_secret" "password" {
      name = "test-db-password"
    }

    resource "aws_secretsmanager_secret_version" "password" {
      secret_id     = aws_secretsmanager_secret.password.id
      secret_string = random_password.master.result
    }
    ```

3. **Retrieve the password when deploying RDS:**

    ```hcl
    data "aws_secretsmanager_secret_version" "password" {
      secret_id = aws_secretsmanager_secret.password.id
    }

    resource "aws_db_instance" "default" {
      identifier           = "testdb"
      allocated_storage    = 10
      storage_type         = "gp2"
      engine               = "mysql"
      engine_version       = "5.7"
      instance_class       = "db.t2.medium"
      username             = "dbadmin"
      password             = data.aws_secretsmanager_secret_version.password.secret_string
      publicly_accessible  = true
      db_subnet_group_name = aws_db_subnet_group.default.id
    }
    ```
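To confirm the secret wiring without printing the password, you can add outputs like the following. These are illustrative additions, not part of the original code:

```hcl
# The secret's ARN is safe to print
output "db_password_secret_arn" {
  value = aws_secretsmanager_secret.password.arn
}

# Marking the output sensitive makes Terraform redact it in CLI output
output "db_password" {
  value     = data.aws_secretsmanager_secret_version.password.secret_string
  sensitive = true
}
```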

## Deploying RDS MySQL Instance

### Steps:

1. **Create a subnet group:**

    ```hcl
    resource "aws_db_subnet_group" "default" {
      name       = "main"
      subnet_ids = [
        aws_subnet.subnet1-public.id,
        aws_subnet.subnet2-public.id,
      ]
      tags = {
        Name = "My DB subnet group"
      }
    }
    ```

2. **Deploy the RDS instance:**

    ```hcl
    resource "aws_db_instance" "default" {
      identifier         = "testdb"
      allocated_storage  = 10
      engine             = "mysql"
      engine_version     = "5.7"
      instance_class     = "db.t2.medium"
      db_name            = "mydb" # this argument was called "name" before AWS provider v4
      username           = "dbadmin"
      password           = data.aws_secretsmanager_secret_version.password.secret_string
      publicly_accessible = true
      db_subnet_group_name = aws_db_subnet_group.default.id
    }
    ```

### Connecting to RDS via MySQL Workbench:

1. In AWS Console, go to **RDS > Databases > testdb** and copy the **endpoint**.
2. In **MySQL Workbench**, use:
   - Hostname: `<copied endpoint>`
   - Username: `dbadmin`
   - Password: Fetch from **AWS Secrets Manager**.

### Destroy the Infrastructure

After testing, remember to clean up:

```bash
terraform destroy
```

## Interview Tip: Handling Sensitive Information

When asked how to handle sensitive information in Terraform, explain that variables can be marked `sensitive = true` so their values are redacted from CLI output, and that secrets can be stored in and retrieved from AWS Secrets Manager instead of being hardcoded. It is also worth mentioning that sensitive values still end up in the state file, so the state backend (for example an S3 bucket) should be encrypted and access-restricted.

---

This README provides an overview of how to enable logging, securely manage sensitive information, and deploy an RDS MySQL instance using Terraform.


================================================
FILE: Day 14 AWS-Terraform-Functions-1/README.md
================================================

# Terraform Functions Part: 1

![Thumb](https://github.com/user-attachments/assets/69bc2680-9ffe-4852-a7f0-f2b9ed8496c5)


This repository demonstrates the efficient use of Terraform functions to manage infrastructure as code without duplicating resources. The focus is on creating modular, scalable, and maintainable Terraform configurations.

## Overview

In this project, we will utilize Terraform functions and techniques to create a cloud infrastructure with multiple instances and subnets efficiently. We aim to minimize duplication in our code by using various Terraform functionalities such as `count`, `for_each`, `locals`, and dynamic blocks.

### Key Objectives

- Clone the repository.
- Streamline Terraform configuration files by removing unnecessary variables and resources.
- Implement best practices for variable management and resource creation.

## Repository Structure

- **main.tf**: Main configuration file containing resource definitions.
- **variables.tf**: File for variable definitions.
- **terraform.tfvars**: File for variable values.
- **locals.tf**: File for local variables.
- **subnet.tf**: File dedicated to managing subnet resources.
- **routing_table.tf**: File for route table configurations.
- **sg.tf**: File for security group configurations.

## Step-by-Step Tasks

### 1. Clone Repository

Start by cloning the repository to your local environment.

### 2. Clean Up Terraform Files

#### variables.tf
- **Remove**:
  - Access Key and Secret Key
  - AMI
  - Internet Gateway (IGW)
  - All CIDR and Subnet entries
- **Keep**:
  - Availability Zones (AZs)
  - Environment (ENV)
- **Define Variables**:
  - Create a variable for `Public_cidr_block` to manage the creation of 6 subnets (3 private and 3 public).
  - Define `Private_cidr_block`.

#### terraform.tfvars
- Copy all relevant variables from `variables.tf` and paste them into `terraform.tfvars`.
- **Remove** the hardcoded route table name values so the route tables derive their names from the VPC name.

### 3. Modify main.tf

- **Remove** Access Key and Secret Key entries.
- **Paste** remote backend configuration.
- **Update VPC Tags**: Instead of passing values for each tag, utilize `locals` for common tag values.

### 4. Create locals.tf

- Define local variables for common tag values.
- Access local variables in the VPC configuration using the appropriate syntax.

### 5. Update Subnet Configurations

#### Public Subnets
- Remove additional public subnets (subnet 2 and 3).
- Use `count = 3` to create the necessary number of public subnets.
- Utilize the `element` function to reference specific CIDR blocks based on the count index.

#### Private Subnets
- Rename resources to reflect they are private.
- Adjust tags accordingly.
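The public-subnet pattern described in this step can be sketched as follows (variable names assumed to match this folder's `variables.tf`):

```hcl
resource "aws_subnet" "public-subnet" {
  count             = length(var.public_cird_block)               # one subnet per CIDR
  vpc_id            = aws_vpc.default.id
  cidr_block        = element(var.public_cird_block, count.index)  # pick the CIDR for this index
  availability_zone = element(var.azs, count.index)                # element() wraps if count > length(azs)

  tags = {
    Name = "${var.vpc_name}-public-subnet-${count.index + 1}"
  }
}
```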

### 6. Route Tables Configuration

- Define separate route tables for public and private subnets.
- **Comment Out** route table associations temporarily.
- Use `terraform plan` to preview subnet configurations.

### 7. Organize Subnets into subnet.tf

- Move all subnet resources to `subnet.tf`.
- Use `count.index + 1` to manage subnet indexing dynamically.

### 8. Create routing_table.tf

- Move all route table blocks to this file.
- Address subnet ID issues by ensuring the correct variable references.
- Introduce Splat syntax for managing multiple subnet associations.
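Splat syntax collects one attribute from every instance of a counted resource into a list, which is what the route table associations iterate over. A small illustration (resource name assumed to match this folder's `subnet.tf`):

```hcl
# All subnet IDs produced by the counted aws_subnet.public-subnet resource
output "public_subnet_ids" {
  value = aws_subnet.public-subnet[*].id
}
```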

### 9. Dynamic Security Group Management

#### sg.tf
- Copy necessary configurations from `main.tf` into `sg.tf`.
- Add ports 443 and 22 to the security group.
- Implement dynamic ingress rules driven by a list variable of service ports (named `ingress_value` in this repo's code; `service_ports` below is the generic name).
- Populate the variable with values for multiple ports, e.g. `["80", "8080", "443", "8443", "22", "3306", "1433"]`.
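The dynamic ingress pattern from this step can be sketched as follows (`service_ports` is the illustrative variable name; this repo's `sg.tf` calls it `ingress_value`):

```hcl
variable "service_ports" {
  type    = list(string)
  default = ["80", "8080", "443", "8443", "22", "3306", "1433"]
}

resource "aws_security_group" "allow_all" {
  name   = "allow-all"
  vpc_id = aws_vpc.default.id

  # One ingress block is generated per entry in service_ports
  dynamic "ingress" {
    for_each = var.service_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
```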

### 10. Finalization

- Run `terraform fmt` to format the configuration files.
- Execute `terraform plan` and `terraform apply` to validate and deploy the infrastructure.
- Check inbound and outbound rules to ensure proper configuration.

## Conclusion

By following these steps and utilizing Terraform functions, we can efficiently manage our cloud infrastructure with minimal duplication and improved scalability. This project serves as a template for creating robust Terraform configurations.

---


================================================
FILE: Day 14 AWS-Terraform-Functions-1/RTA.tf
================================================
resource "aws_route_table_association" "public-subnets" {
  #   count          = 3
  count          = length(var.public_cird_block)
  subnet_id      = element(aws_subnet.public-subnet.*.id, count.index)
  route_table_id = aws_route_table.public-route-table.id
}


resource "aws_route_table_association" "private-subnets" {
  #   count          = 3
  count          = length(var.private_cird_block)
  subnet_id      = element(aws_subnet.private-subnet.*.id, count.index)
  route_table_id = aws_route_table.private-route-table.id
}



================================================
FILE: Day 14 AWS-Terraform-Functions-1/locals.tf
================================================
locals {
  Owner      = "Prod-Team"
  costcenter = "Hyd-8080"
  TeamDL     = "Saikiran.pinapathruni18@gmail.com"
}


================================================
FILE: Day 14 AWS-Terraform-Functions-1/main.tf
================================================
#This Terraform Code Deploys Basic VPC Infra.
provider "aws" {
  region = var.aws_region
}

terraform {
  backend "s3" {
    bucket = "workspacesbucket01"
    key    = "function.tfstate"
    region = "us-east-1"
  }
}


resource "aws_vpc" "default" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true

  tags = {
    Name        = "${var.vpc_name}"
    Owner       = local.Owner
    costcenter  = local.costcenter
    TeamDL      = local.TeamDL
    environment = "${var.environment}"
  }
}

resource "aws_internet_gateway" "default" {
  vpc_id = aws_vpc.default.id
  tags = {
    Name = "${var.vpc_name}-IGW"
  }
}

resource "aws_route_table" "public-route-table" {
  vpc_id = aws_vpc.default.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.default.id
  }

  tags = {
    Name        = "${var.vpc_name}-Public-RT"
    Owner       = local.Owner
    costcenter  = local.costcenter
    TeamDL      = local.TeamDL
    environment = "${var.environment}"

  }
}


resource "aws_route_table" "private-route-table" {
  vpc_id = aws_vpc.default.id

  # NOTE: pointing 0.0.0.0/0 at the internet gateway makes these subnets
  # effectively public; for truly private subnets, route through a NAT gateway instead.
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.default.id
  }

  tags = {
    Name        = "${var.vpc_name}-private-RT"
    Owner       = local.Owner
    costcenter  = local.costcenter
    TeamDL      = local.TeamDL
    environment = "${var.environment}"

  }
}




# data "aws_ami" "my_ami" {
#      most_recent      = true
#      #name_regex       = "^sai"
#      owners           = ["232323232323232323"]
# }


# resource "aws_instance" "web-1" {
#     ami = "${data.aws_ami.my_ami.id}"
#     #ami = "ami-0d857ff0f5fc4e03b"
#     availability_zone = "us-east-1a"
#     instance_type = "t2.micro"
#     key_name = "LaptopKey"
#     subnet_id = "${aws_subnet.subnet1-public.id}"
#     vpc_security_group_ids = ["${aws_security_group.allow_all.id}"]
#     associate_public_ip_address = true	
#     tags = {
#         Name = "Server-1"
#         Env = "Prod"
#         Owner = "sai"
# 	CostCenter = "ABCD"
#     }
#      user_data = <<- EOF
#      #!/bin/bash
#      	sudo apt-get update
#      	sudo apt-get install -y nginx
#      	echo "<h1>${var.env}-Server-1</h1>" | sudo tee /var/www/html/index.html
#      	sudo systemctl start nginx
#      	sudo systemctl enable nginx
#      EOF

# }

# resource "aws_dynamodb_table" "state_locking" {
#   hash_key = "LockID"
#   name     = "dynamodb-state-locking"
#   attribute {
#     name = "LockID"
#     type = "S"
#   }
#   billing_mode = "PAY_PER_REQUEST"
# }

##output "ami_id" {
#  value = "${data.aws_ami.my_ami.id}"
#}


================================================
FILE: Day 14 AWS-Terraform-Functions-1/sg.tf
================================================
resource "aws_security_group" "allow_all" {
  name        = "${var.vpc_name}-allow-all"
  description = "Allow all Inbound traffic"
  vpc_id      = aws_vpc.default.id

  # Ingress rule block with dynamic iteration over service_ports
  dynamic "ingress" {
    for_each = var.ingress_value
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]  # Allow traffic from any IP
    }
  }

  # Egress rule block
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]  # Allow outbound traffic to any IP
  }

  # Tags block
  tags = {
    Name        = "${var.vpc_name}-allow-all"
    Owner       = local.Owner
    costcenter  = local.costcenter
    TeamDL      = local.TeamDL
    environment = var.environment
  }
}


================================================
FILE: Day 14 AWS-Terraform-Functions-1/subnet.tf
================================================
resource "aws_subnet" "public-subnet" {
  #count             = 3 #012
  count             = length(var.public_cird_block)
  vpc_id            = aws_vpc.default.id
  cidr_block        = element(var.public_cird_block, count.index) # index 0 maps subnet 1 to the first CIDR
  availability_zone = element(var.azs, count.index)

  tags = {
    Name        = "${var.vpc_name}-public-subnet-${count.index + 1}"
    Owner       = local.Owner
    costcenter  = local.costcenter
    TeamDL      = local.TeamDL
    environment = "${var.environment}"

  }
}

resource "aws_subnet" "private-subnet" {
  #   count             = 3 #012
  count             = length(var.private_cird_block)
  vpc_id            = aws_vpc.default.id
  cidr_block        = element(var.private_cird_block, count.index) # index 0 maps subnet 1 to the first CIDR
  availability_zone = element(var.azs, count.index)

  tags = {
    Name        = "${var.vpc_name}-private-subnet-${count.index + 1}"
    Owner       = local.Owner
    costcenter  = local.costcenter
    TeamDL      = local.TeamDL
    environment = "${var.environment}"

  }
}


================================================
FILE: Day 14 AWS-Terraform-Functions-1/terraform.tfvars
================================================
aws_region         = "us-east-1"
vpc_cidr           = "172.18.0.0/16"
vpc_name           = "DevSecOps-Vpc"
key_name           = "SecOps-Key"
azs                = ["us-east-1a", "us-east-1b", "us-east-1c"]
public_cird_block  = ["172.18.1.0/24", "172.18.2.0/24", "172.18.3.0/24", "172.18.4.0/24", "172.18.5.0/24"]
private_cird_block = ["172.18.10.0/24", "172.18.20.0/24", "172.18.30.0/24", "172.18.40.0/24", "172.18.50.0/24"]
environment        = "Prod"
ingress_value      = ["80", "8080", "443", "8443", "22", "3306", "1900", "1443"]


================================================
FILE: Day 14 AWS-Terraform-Functions-1/variables.tf
================================================
variable "aws_region" {}
variable "vpc_cidr" {}
variable "vpc_name" {}
variable "key_name" {}
variable "azs" {}
variable "public_cird_block" {}
variable "private_cird_block" {}
variable "environment" {}
variable "ingress_value" {}


================================================
FILE: Day 15 AWS-Terraform-Functions-2/README.md
================================================
![a-futuristic-3d-scene-featuring-an-astronaut-sitti-JmnDsV37TdiaW1tmnfgktg-hPykpO-xSY6aYtvVHr0G_g](https://github.com/user-attachments/assets/5bd8031e-c1a2-4305-b371-b7551ad62055)


# Terraform Functions - 2 

This repository demonstrates several Terraform language features, such as the `lookup` function, `count`, and conditional expressions, along with provisioners (`file`, `remote-exec`, `local-exec`). The goal is to manage infrastructure dynamically using variables, conditional logic, and post-creation provisioning tasks.

## Project Structure

- **`ec2.tf`**: Main file to create EC2 instances.
- **`variables.tf`**: Define variables such as AMIs, instance type, keyname, and environment.
- **`terraform.tfvars`**: Assign values to variables such as AMI IDs for different regions and the environment.
- **`null.tf`**: Implements `null_resource` to run scripts without recreating instances.
- **`userdata.sh`**: Script to install software on EC2 instances after they are created.

## Terraform Functions Overview

### 1. AMI Lookup

The `lookup` function helps dynamically retrieve AMI IDs based on the region. 

Example:
```hcl
variable "amis" {
  type = map(string)
}

# In terraform.tfvars
amis = {
  us-east-1 = "ami-0abcd1234efgh5678"
  us-east-2 = "ami-0wxyz1234mnop5678"
}

# In ec2.tf
ami = lookup(var.amis, var.aws_region)
```

This setup lets us deploy EC2 instances with region-specific AMIs. AMI IDs are scoped to a single region, so an ID that is valid in `us-east-1` will not resolve in `us-east-2`.

### 2. Instance Count with Subnet Mapping

We declare three subnets, and each subnet must map to one EC2 instance. By using `count`, we can define how many instances to create based on the length of subnets.

```hcl
count = length(var.public_cidr_block)

subnet_id = element(var.subnets, count.index)
```

### 3. Conditional Deployment

Using a condition, we can decide how many instances to create based on the environment.

```hcl
count = var.environment == "Prod" ? 3 : 1
```

This means if the environment is `Prod`, 3 instances are created; otherwise, 1 instance is created.

## Provisioners

### File Provisioning with `remote-exec`

We use provisioners to apply scripts after EC2 instances are created without recreating the instances.

- **User Data**: Initially, the user data script is passed during instance creation.
- **Provisioners**: To avoid recreating instances for every change, we use `null_resource` to run scripts or commands on existing instances.

Example:
```hcl
resource "null_resource" "cluster" {
  count = length(var.public_cidr_block)

  # Re-run the provisioner whenever the script changes, without recreating the instance
  triggers = {
    script_sha = filesha256("script.sh")
  }

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("path/to/key.pem")
      host        = element(aws_instance.example[*].public_ip, count.index)
    }
    inline = [
      "sudo bash /tmp/script.sh"
    ]
  }
}
```

### Tainting Resources

If we need to recreate a specific resource, we can use Terraform's `taint` feature. Marking a resource as "tainted" forces Terraform to destroy and recreate it during the next apply. Resources created with `count` are addressed by index.

Example:
```bash
terraform taint 'null_resource.cluster[0]'
```

This marks the resource for recreation, so the updated script is applied on the next `terraform apply` without affecting the rest of the infrastructure. On Terraform v0.15.2 and later, `terraform apply -replace='null_resource.cluster[0]'` is the recommended replacement for `terraform taint`.

## Commands

```bash
terraform init      # Initialize Terraform
terraform fmt       # Format the code
terraform validate  # Validate the configuration
terraform apply     # Apply the configuration
```

### Taint Example

```bash
terraform taint 'null_resource.cluster[0]'
terraform apply
```

## Next Steps

- Explore **Terraform Modules** for better structuring and reuse of code.

## Interview Tips

**What is taint in Terraform?**
Taint marks a resource for recreation. You can manually taint a resource with `terraform taint <address>`, causing Terraform to destroy and recreate it during the next `apply`. Conversely, `terraform untaint <address>` clears the mark so the resource is left in place. Since Terraform v0.15.2, `terraform apply -replace=<address>` is the recommended alternative to tainting.

---

Stay tuned for the next session where we’ll dive into **Terraform Modules**!


================================================
FILE: Day 15 AWS-Terraform-Functions-2/private-ec2.tf
================================================
resource "aws_instance" "private-server" {
  # count = length(var.private_cird_block)
  count                  = var.environment == "Prod" ? 3 : 1
  ami                    = lookup(var.amis, var.aws_region)
  instance_type          = "t2.micro"
  key_name               = var.key_name
  subnet_id              = element(aws_subnet.private-subnet.*.id, count.index) # server 1 goes in subnet 1
  vpc_security_group_ids = ["${aws_security_group.allow_all.id}"]
  # associate_public_ip_address = true	
  tags = {
    Name        = "${var.vpc_name}-Private-Server-${count.index + 1}"
    Owner       = local.Owner
    costcenter  = local.costcenter
    TeamDL      = local.TeamDL
    environment = "${var.environment}"
  }
  user_data = <<-EOF
     #!/bin/bash
     sudo apt update
     sudo apt install nginx -y
     sudo apt install git -y
     sudo git clone https://github.com/saikiranpi/SecOps-game.git
     sudo rm -rf /var/www/html/index.nginx-debian.html
     sudo cp  SecOps-game/index.html /var/www/html/index.html
     echo "<h1>${var.vpc_name}-Private-Server-${count.index + 1}</h1>" >> /var/www/html/index.html
     sudo systemctl start nginx
     sudo systemctl enable nginx
 EOF
}


================================================
FILE: Day 15 AWS-Terraform-Functions-2/public-ec2.tf
================================================
resource "aws_instance" "public-server" {
  # count = length(var.public_cird_block)
  count                       = var.environment == "Prod" ? 3 : 1
  ami                         = lookup(var.amis, var.aws_region)
  instance_type               = "t2.micro"
  key_name                    = var.key_name
  subnet_id                   = element(aws_subnet.public-subnet.*.id, count.index) # server 1 goes in subnet 1
  vpc_security_group_ids      = ["${aws_security_group.allow_all.id}"]
  associate_public_ip_address = true
  tags = {
    Name        = "${var.vpc_name}-Public-Server-${count.index + 1}"
    Owner       = local.Owner
    costcenter  = local.costcenter
    TeamDL      = local.TeamDL
    environment = "${var.environment}"
  }

}


================================================
FILE: Day 15 AWS-Terraform-Functions-2/terraform.tfvars
================================================
aws_region         = "us-east-1"
vpc_cidr           = "172.18.0.0/16"
vpc_name           = "DevSecOps-Vpc"
key_name           = "SecOps-Key"
azs                = ["us-east-1a", "us-east-1b", "us-east-1c"]
public_cird_block  = ["172.18.1.0/24", "172.18.2.0/24", "172.18.3.0/24"]
private_cird_block = ["172.18.10.0/24", "172.18.20.0/24", "172.18.30.0/24"]
environment        = "Dev"
ingress_value      = ["80", "8080", "443", "8443", "22", "3306", "1900", "1443"]
amis = {
  us-east-1 = "ami-0866a3c8686eaeeba"
  us-east-2 = "ami-0ea3c35c5c3284d82"
}


================================================
FILE: Day 15 AWS-Terraform-Functions-2/txt.tf
================================================
#   user_data = <<-EOF
#     #!/bin/bash
#     sudo apt update
#     sudo apt install nginx -y
#     sudo apt install git -y
#     sudo git clone https://github.com/saikiranpi/SecOps-game.git
#     sudo rm -rf /var/www/html/index.nginx-debian.html
#     sudo cp  SecOps-game/index.html /var/www/html/index.html
#     echo "<h1>${var.vpc_name}-private-Server-${count.index + 1}</h1>" >> /var/www/html/index.html
#     sudo systemctl start nginx
#     sudo systemctl enable nginx
# EOF


# provisioner "file" {
#   source      = "user_data.sh"
#   destination = "/tmp/user_data.sh"

#   connection {
#     type        = "ssh"
#     user        = "ubuntu"
#     private_key = file("LaptopKey.pem")
#     host        = element(aws_instance.public-servers.*.public_ip, count.index)
#   }
# }

# provisioner "remote-exec" {
#   inline = [
#     "sudo chmod 777 /tmp/userdata.sh",
#     "sudo /tmp/userdata.sh",
#     "sudo apt update",
#     "sudo apt install jq unzip -y",
#   ]

#   connection {
#     type        = "ssh"
#     user        = "ubuntu"
#     private_key = file("SecOps-Key.pem")
#     host        = element(aws_instance.public-server.*.public_ip, count.index)
#   }
# }


================================================
FILE: Day 15 AWS-Terraform-Functions-2/user-data.sh
================================================
#!/bin/bash
sudo apt update
sudo apt install nginx -y
sudo apt install git -y
sudo git clone https://github.com/saikiranpi/SecOps-game.git
sudo rm -rf /var/www/html/index.nginx-debian.html
sudo cp  SecOps-game/index.html /var/www/html/index.html
# NOTE: the ${var.vpc_name} and ${count.index} placeholders below are Terraform
# interpolations; they resolve only when this script is embedded in a Terraform
# heredoc or template, not when it is run as a standalone shell script.
echo "<h1>${var.vpc_name}-public-Server-${count.index + 1}</h1>" >> /var/www/html/index.html
sudo systemctl start nginx
sudo systemctl enable nginx


================================================
FILE: Day 15 AWS-Terraform-Functions-2/variable.sh
================================================
variable "aws_region" {}
variable "vpc_cidr" {}
variable "vpc_name" {}
variable "key_name" {}
variable "azs" {}
variable "public_cird_block" {}
variable "private_cird_block" {}
variable "environment" {}
variable "ingress_value" {}
variable "amis" {}


================================================
FILE: Day 16 AWS-Terraform-Part-6 Modules-Part-1/README.md
================================================
# Terraform Project: Modularized Infrastructure Setup


![a-vibrant-and-energetic-youtube-thumbnail-with-a-s-giqGaHBwT7yCh792W1jUEQ-NkAg-GSlQvynsgO8mL7hAw](https://github.com/user-attachments/assets/ca2885eb-cae5-4a18-90c1-461c349a7fb1)


This repository demonstrates how to modularize Terraform code for a scalable, manageable infrastructure deployment across multiple environments (e.g., dev, QA, production). The key idea is to break down the Terraform code into modules for various infrastructure components like networking, compute, security groups, load balancers, and NAT gateways. This modular approach minimizes manual changes and overhead when switching between environments.

## Problem Overview

In typical infrastructure deployments, environments like dev, QA, and production might have different requirements (e.g., dev doesn’t need a load balancer or Route53). Managing these differences with a single Terraform codebase can lead to manual changes, which is inefficient. By breaking the code into modules, you can dynamically include/exclude components based on environment requirements, making the infrastructure easier to manage.
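One way to include or exclude a component per environment is `count` on a module block (requires Terraform 0.13+). A hedged sketch with an assumed `elb` module and variable names:

```hcl
# Create the load balancer module only in production; dev gets zero copies.
module "elb" {
  source = "../modules/elb"
  count  = var.environment == "production" ? 1 : 0

  vpc_id = module.network.vpc_id
}
```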

## Solution

We break the infrastructure into the following modules:
- **Network**: VPC, subnets, routing
- **Compute**: EC2 instances (public and private)
- **Security Groups (SG)**: For securing VPC resources
- **NAT**: NAT gateway for private instance internet access
- **ELB**: Elastic Load Balancers (optional)
- **IAM**: Identity and Access Management

### Folder Structure

```
/modules
  ├── network
  ├── compute
  ├── sg
  ├── nat
  ├── elb
  ├── iam
/development
  ├── infra.tf
  ├── variables.tf
  ├── terraform.tfvars
  └── ec2.tf
/production
  ├── infrastructure.tf
  ├── variables.tf
  ├── terraform.tfvars
```

## Step-by-Step Setup

### 1. Create Network Module

1. **Files in `/modules/network`:**
   - `vpc.tf`: Defines the VPC and internet gateway.
   - `public_subnets.tf`: Public subnets configuration.
   - `private_subnets.tf`: Private subnets configuration.
   - `routing.tf`: Routing tables for public and private subnets.
   - `variables.tf`: Define necessary input variables.
   - `outputs.tf`: Export important values (e.g., VPC ID, subnet IDs).
   - `locals.tf`: Set local values for environment or naming conventions.

2. **Import Network Module in Development:**
   - In `/development/infra.tf`, import the network module:
     ```hcl
     module "dev_vpc_1" {
       source = "../modules/network"
       # Specify the necessary variables
       vpc_cidr = var.vpc_cidr
       ...
     }
     ```

3. **Deploy the Network Module:**
   ```bash
   cd development
   terraform init
   terraform fmt
   terraform validate
   terraform apply
   ```

### 2. Configure for Production

- **Copy Files**: Copy the infrastructure setup from `development` to `production`.
  - Ensure variable values are updated (e.g., CIDR blocks should not overlap between environments).

- **Customize Values**: Modify `terraform.tfvars` and `variables.tf` in the `production` folder to match production settings (e.g., CIDR range, environment = "production").

```bash
cd production
terraform init
terraform fmt
terraform apply
```

### 3. Add Security Groups Module

1. **Create `/modules/sg`:**
   - `sg.tf`: Security group configurations.
   - `variables.tf`: Define necessary input variables.
   - `outputs.tf`: Export security group IDs.

2. **Import in Development:**
   - Add the security group module to `development`'s `infra.tf`:
     ```hcl
     module "dev_sg_1" {
       source = "../modules/sg"
       vpc_id = module.dev_vpc_1.vpc_id
       ...
     }
     ```

3. **Deploy SG Module:**
   ```bash
   cd development
   terraform get   # or re-run "terraform init" to install the new module
   terraform apply
   ```

4. **Replicate for Production**: Similarly, copy the security group module to `production`, making necessary adjustments.

### 4. EC2 (Compute) Module

1. **Create `/modules/compute`:**
   - `private_ec2.tf`: For private EC2 instances.
   - `public_ec2.tf`: For public EC2 instances.
   - `variables.tf`: Define EC2-related variables.
   - `outputs.tf`: Export EC2 instance IDs or other resources.

2. **Deploy in Development**: Add EC2 configuration in `development/ec2.tf`, referencing the module:
   ```hcl
   module "dev_compute_1" {
     source = "../modules/compute"
     vpc_id = module.dev_vpc_1.vpc_id
     ...
   }
   ```

3. **Replicate for Production**: Follow the same process for production, customizing as needed.

### 5. NAT Gateway Module

1. **Create `/modules/nat`:**
   - `natgw.tf`: Defines the NAT gateway.
   - `variables.tf`: Input variables like subnet ID.
   - `outputs.tf`: Export NAT gateway ID.

2. **Deploy NAT in Development and Production**:
   - Ensure the NAT module is added in both environments, with appropriate changes in `terraform.tfvars`.

### Final Steps

- **Destroy**: To clean up, run the following in both environments:
  ```bash
  cd production
  terraform destroy -auto-approve
  cd development
  terraform destroy -auto-approve
  ```

## Key Terraform Commands

- **Format and Validate**:
  ```bash
  terraform fmt
  terraform validate
  ```
- **Initialize**:
  ```bash
  terraform init
  ```
- **Apply Changes**:
  ```bash
  terraform apply
  ```
- **Check State**:
  ```bash
  terraform state list
  ```

## Notes on Output Values

The `output.tf` files in each module play a crucial role in passing data between modules. For example, the VPC module exports the `vpc_id`, which is consumed by the Security Group module and EC2 module. This modular approach helps ensure that all components are properly linked, and their dependencies are clear.
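A hedged sketch of that wiring (resource and module names are illustrative, not taken verbatim from this repo):

```hcl
# /modules/network/outputs.tf
output "vpc_id" {
  value = aws_vpc.this.id
}

# /development/infra.tf: consume the network module's output in the SG module
module "dev_sg_1" {
  source = "../modules/sg"
  vpc_id = module.dev_vpc_1.vpc_id
}
```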

## Conclusion

This repository demonstrates how to efficiently manage and deploy infrastructure across multiple environments using Terraform modules. By breaking infrastructure code into reusable modules, we reduce complexity, manual work, and potential errors, leading to a more scalable and maintainable solution.


================================================
FILE: Day 17 AWS-Terraform-Full-Course/README.md
================================================
![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-f__YY0bwSie2OkYBNrSyeQ-GV6ykntrRNKLu-6yjr3VXg](https://github.com/user-attachments/assets/64a1a02f-c8c8-4248-876e-685505d76e4b)


# Day 17 Terraform Full Course Link here : https://youtu.be/bqvdpa649nU?si=EQJNm-VPDgypTkwc


================================================
FILE: Day 18 AWS-Terraform-Part-8 TerraformCloud/README.md
================================================

![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-f__YY0bwSie2OkYBNrSyeQ-GV6ykntrRNKLu-6yjr3VXg](https://github.com/user-attachments/assets/022eb8c9-67e4-4f71-b01c-2591e65ea62d)

# Day 18 AWS-Terraform-Part-8 TerraformCloud - Covered under terraform full course. 
# TimeStamp Link : https://youtu.be/bqvdpa649nU?list=PLMj5OfHGyNU81vI77YRFg9WWvbGKqbyXD&t=23642


================================================
FILE: Day 19 AWS-Terraform-Part-9 GitLab-Pipeline/README.md
================================================
# Day 19 Terraform Modules with GitLab 

![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-MdC5XT42QNySa2zI6fo6Sw-mIHViexFR9C60umSgtcnBg](https://github.com/user-attachments/assets/aa7fdce1-98ee-448a-9c96-343b0fbdba0d)


Complete Source file here : https://gitlab.com/saikiranpi1/modules-gitlab.git

```markdown
# Terraform - GitLab Integration

This repository contains instructions and YAML configurations for integrating Terraform with GitLab CI/CD, allowing for efficient infrastructure management and deployment.

## Table of Contents
- [Overview](#overview)
- [Getting Started](#getting-started)
- [GitLab CI Configuration](#gitlab-ci-configuration)
- [Using tfenv](#using-tfenv)
- [Installing GitLab Runner](#installing-gitlab-runner)
- [Deploying an Ubuntu Server](#deploying-an-ubuntu-server)
- [Cleaning Up](#cleaning-up)
- [Troubleshooting](#troubleshooting)
- [Conclusion](#conclusion)

## Overview

This project demonstrates how to set up Terraform with GitLab CI/CD using YAML for configuration. We will focus on tasks such as pushing code to GitLab, setting up CI/CD variables, and deploying infrastructure.

## Getting Started

1. **Create a new GitLab project**:
   - Go to your GitLab dashboard and click on "New Project."
   - Select "Public" and create the project.

2. **Push your Terraform code**:
   ```bash
   git init
   git add .
   git commit -m "Infra"
   git remote add origin <your-repo-url>
   git push origin master
   ```

## GitLab CI Configuration

1. **Access CI/CD Settings**:
   - Navigate to your project, then go to `Settings` > `CI/CD`.

2. **Upload Secure Files**:
   - Under the "Secure Files" section, upload your PEM file.

3. **Add CI/CD Variables**:
   - Scroll to "Variables" and click "Add."
   - Add the following masked variables:
     - `AWS_ACCESS_KEY`
     - `AWS_SECRET_KEY`

4. **Set Up a New GitLab Runner**:
   - Navigate to `Runners` and select "New project runner."
   - Choose "Linux" and set the following:
     - **Tags**: `terraform,AWS`
     - **Description**: A brief description of your runner.
     - **Timeout**: 600 seconds.
   - Click "Create Runner."
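
With the runner and variables in place, the pipeline is defined in a `.gitlab-ci.yml` at the repository root. A minimal sketch — the stage layout, job names, and the manual-apply gate are assumptions, not the course's exact file (the full source is in the linked GitLab repo):

```yaml
stages:
  - validate
  - plan
  - apply

default:
  tags: [terraform, AWS]   # must match the tags given to the runner

before_script:
  # Map the masked CI/CD variables to the names the AWS provider expects
  - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
  - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_KEY
  - terraform init

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=tfplan
  artifacts:
    paths:
      - tfplan   # hand the saved plan to the apply job

apply:
  stage: apply
  script:
    - terraform apply -auto-approve tfplan
  when: manual   # require a manual click before touching infrastructure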

## Using tfenv

To manage different Terraform versions easily, we will use `tfenv`. Follow these steps:

1. **Install tfenv**:
   - Follow the instructions available on the [tfenv GitHub page](https://github.com/tfutils/tfenv).

2. **Install the Required Terraform Version**:
   ```bash
   sudo apt install unzip
   tfenv list-remote   # lists all available versions
   tfenv install 1.5.5 # installs the specified version
   tfenv use 1.5.5     # activates the installed version
   ```

## Installing GitLab Runner

1. **Install GitLab Runner**:
   - Open your console and follow the installation commands provided on the [GitLab Runner page](https://docs.gitlab.com/runner/install/).

2. **Register the Runner**:
   - Enter the token and name for the runner, and choose "shell" as the executor.

3. **Modify Your Code and Push**:
   - Make minor changes to your code and push it. This should trigger the CI/CD pipeline.

4. **Run Commands as gitlab-runner**:
   ```bash
   cat /etc/passwd
   sudo rm -r /home/gitlab-runner/.bash_logout
   su - gitlab-runner  # Switch to gitlab-runner user
   ```

## Deploying an Ubuntu Server

Log into the server and deploy the necessary infrastructure using your Terraform scripts.

## Cleaning Up

To destroy the infrastructure, run:
```bash
terraform destroy -auto-approve
```

You can use **Checkov**, a free static-analysis tool, to scan your Terraform code for security issues:
```bash
apt install -y python3-pip
pip3 install checkov
checkov -d .   # scan all Terraform files in the current directory
```

## Troubleshooting

If you encounter errors:
- Check the GitLab CI/CD pipeline logs for error messages.
- Google any error codes for potential solutions.

## Conclusion

This setup provides a streamlined approach to managing infrastructure with Terraform in a GitLab CI/CD environment. Feel free to customize the configurations as needed to fit your specific requirements.

For further assistance, refer to the [official Terraform documentation](https://www.terraform.io/docs/index.html) or [GitLab CI/CD documentation](https://docs.gitlab.com/ee/ci/).

```



================================================
FILE: Day 20 AWS-Packer/README.md
================================================
# Day 20 AWS-Packer

![a-vibrant-and-eye-catching-youtube-thumbnail-with--CWD0OBoeRVO1Jw5QXUd3iw-PZaqUMYdQ0eS9Tv6GFm_VQ](https://github.com/user-attachments/assets/5cc2de07-938e-4197-8e07-c99bdcdd0180)


Here's an outline to help you implement and visualize this process:

### 1. **Introduction to Packer and Ansible**

- **Packer**: A tool to create images for multiple platforms from a single source configuration.
- **Ansible**: A configuration management tool used for automation, specifically post-deployment configuration.

### 2. **Why Ansible?**

After deploying infrastructure with tools like **Terraform**, configuration management is needed for more specific setups on the deployed resources. Here’s where **Ansible** comes in:

- **Controller-Client Model**:
  - **Controller**: The machine where Ansible commands are run.
  - **Clients** (Nodes): Machines receiving configuration commands from the controller.

- **No Client Software Needed**: Ansible only requires SSH and Python on the nodes, simplifying the setup.

### 3. **Diagram of Ansible Setup**
For a visual, imagine:
   - A **controller node** communicating with **client nodes** using SSH.
   - Commands are sent from the controller, received by nodes, and executed without needing any additional software on the client side.

### 4. **Task: AMI Creation and Deployment**
   1. **Create an AMI Image** using Packer for a base instance.
   2. **Deploy an Instance** with this AMI.
   3. Verify functionality, ensuring services like Node Exporter (on port 9100) are working.
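
The AMI build itself can be expressed as a Packer HCL template that bakes Node Exporter into the image via a shell provisioner (an Ansible provisioner slots in the same way). This is a minimal sketch — the region, instance type, AMI name, and install commands are assumptions, not the course's exact template:

```hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "devsecops" {
  region        = "us-east-1"
  instance_type = "t2.micro"
  ssh_username  = "ubuntu"
  ami_name      = "DevSecOps-{{timestamp}}"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
      virtualization-type = "hvm"
      root-device-type    = "ebs"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.devsecops"]

  # Bake Node Exporter into the image so port 9100 answers on boot
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y prometheus-node-exporter",
      "sudo systemctl enable prometheus-node-exporter",
    ]
  }
}
```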

### 5. **Steps to Install and Configure Ansible on Deployed Instances**
   - **Install Ansible**:
     - Refer to [Ansible documentation](https://docs.ansible.com/) for the latest installation steps.
   - **Configuration File**:
     - In `/etc/ansible`, run `sudo ansible-config init --disabled > ansible.cfg` to generate the config file.
   - **Update Ansible Configurations**:
     - Open the file in `nano` and use `Ctrl+W` to search for each option:
       - Set `host_key_checking = false`.
       - Define the `remote_user` as `ansibleadmin`.
       - Define `private_key_file` as `/home/ansibleadmin/key.pem` (ensure key permissions are read-only, i.e., `chmod 444 key.pem`).

Following these steps will provide a setup ready for deploying configurations across instances effectively using Ansible.
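
Put together, the relevant lines of `/etc/ansible/ansible.cfg` end up looking roughly like this:

```ini
[defaults]
host_key_checking = false
remote_user = ansibleadmin
private_key_file = /home/ansibleadmin/key.pem
```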


================================================
FILE: Day 21 AWS-Ansible-Part-1/.gitignore
================================================
.terraform.lock.hcl
.terraform/*
6.ansible-playbook-nginx.yml
invfile*


================================================
FILE: Day 21 AWS-Ansible-Part-1/1.provider.tf
================================================
provider "aws" {
  region = var.aws_region
}

terraform {
  required_version = "<= 1.8.5" #Forcing which version of Terraform needs to be used
  required_providers {
    aws = {
      version = "<= 6.0.0" #Forcing which version of plugin needs to be used.
      source  = "hashicorp/aws"
    }
  }
  backend "s3" {
    bucket         = "workspacesbucket01"
    key            = "Ansible.tfstate"
    region         = "us-east-1"
    # dynamodb_table = "-terraform-locks"
    encrypt        = true
  }
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/10.locals.tf
================================================
#distinct takes a list and returns a new list with any duplicate elements removed.
#toset takes a list, removes any duplicate elements, and discards the ordering of the elements.
locals {
  new_public_subnet_cidrs  = distinct(var.public_subnet_cidrs)
  new_private_subnet_cidrs = distinct(var.private_subnet_cidrs)
  new_environment          = lower(var.environment)
  projid                   = format("%s-%s", lower(var.vpc_name), lower(var.projid))
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/11.localfile_ansible_inventory.tf
================================================
resource "local_file" "ansible-inventory-file" {
  content = templatefile("publicservers.tpl",
    {

      testserver01    = aws_instance.webservers.0.public_ip
      testserver02    = aws_instance.webservers.1.public_ip
      testserver03    = aws_instance.webservers.2.public_ip
      pvttestserver01 = aws_instance.webservers.0.private_ip
      pvttestserver02 = aws_instance.webservers.1.private_ip
      pvttestserver03 = aws_instance.webservers.2.private_ip
    }
  )
  filename = "${path.module}/invfile"
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/12.localfile_ansible_inventory_yaml.tf
================================================
resource "local_file" "ansible-inventory-file-yaml" {
  content = templatefile("publicservers_yaml.tpl",
    {

      testserver01    = aws_instance.webservers.0.public_ip
      testserver02    = aws_instance.webservers.1.public_ip
      testserver03    = aws_instance.webservers.2.public_ip
      pvttestserver01 = aws_instance.webservers.0.private_ip
      pvttestserver02 = aws_instance.webservers.1.private_ip
      pvttestserver03 = aws_instance.webservers.2.private_ip
    }
  )
  filename = "${path.module}/invfile.yaml"
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/13.null-local-exec.tf
================================================
resource "null_resource" "webservers" {
  provisioner "local-exec" {
    command = <<-EOH
      sleep 10
      ansible -i invfile pvt -m ping
    EOH
  }
  depends_on = [local_file.ansible-inventory-file]
}



================================================
FILE: Day 21 AWS-Ansible-Part-1/14.outputs.tf
================================================
output "vpc_id" {
  value = aws_vpc.default.id
}

output "vpc_arn" {
  value = aws_vpc.default.arn
}

# output "subnet1_id" {
#   value = aws_subnet.subnet1-public.id
# }


output "sg_id" {
  value = aws_security_group.allow_all.id
}


================================================
FILE: Day 21 AWS-Ansible-Part-1/15.terraform.tfvars
================================================
aws_region           = "us-east-1"
vpc_cidr             = "10.37.0.0/16"
vpc_name             = "Ansible-Vpc"
key_name             = "SecOps-Key"
public_subnet_cidrs  = ["10.37.1.0/24", "10.37.2.0/24", "10.37.3.0/24"]    #List
private_subnet_cidrs = ["10.37.10.0/24", "10.37.20.0/24", "10.37.30.0/24"] #List
azs                  = ["us-east-1a", "us-east-1b", "us-east-1c"]          #List
environment          = "production"
instance_type = {
  development = "t2.small"
  testing     = "t2.small"
  production  = "t2.small"
}
amis = {
  us-east-1 = "ami-0149b2da6ceec4bb0" # Canonical, Ubuntu, 20.04 LTS, amd64 focal image
  us-east-2 = "ami-0430580de6244e02e" # Canonical, Ubuntu, 20.04 LTS, amd64 focal image
}
projid    = "PHOENIX-123"
imagename = "ami-0149b2da6ceec4bb0"


================================================
FILE: Day 21 AWS-Ansible-Part-1/16.variables.tf
================================================
variable "aws_region" { type = string }
variable "amis" { type = map(any) }
variable "vpc_cidr" { type = string }
variable "vpc_name" { type = string }
variable "key_name" { type = string }
variable "public_subnet_cidrs" { type = list(any) }
variable "private_subnet_cidrs" { type = list(any) }
variable "azs" { type = list(any) }
variable "environment" { type = string }
variable "instance_type" { type = map(any) }
variable "projid" { type = string }
variable "imagename" { type = string }




================================================
FILE: Day 21 AWS-Ansible-Part-1/2.vpc.tf
================================================
resource "aws_vpc" "default" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  tags = {
    Name              = var.vpc_name
    Owner             = "Saikiran Pinapathruni"
    environment       = local.new_environment
    Terraform-Managed = "Yes"
    ProjectID         = local.projid
  }
}

resource "aws_internet_gateway" "default" {
  vpc_id = aws_vpc.default.id
  tags = {
    Name              = "${var.vpc_name}-IGW"
    Terraform-Managed = "Yes"
    Env               = local.new_environment
    ProjectID         = local.projid
  }
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/3.public-subnets.tf
================================================
resource "aws_subnet" "public-subnets" {
  #count             = 4 # 0 1 2
  count             = length(local.new_public_subnet_cidrs)
  vpc_id            = aws_vpc.default.id
  cidr_block        = element(local.new_public_subnet_cidrs, count.index)
  availability_zone = element(var.azs, count.index)
  tags = {
    Name              = "${var.vpc_name}-PublicSubnet-${count.index + 1}"
    Terraform-Managed = "Yes"
    Env               = local.new_environment
    ProjectID         = local.projid
  }
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/4.private-subnets.tf
================================================
resource "aws_subnet" "private-subnets" {
  #count             = 4 # 0 1 2
  count             = length(local.new_private_subnet_cidrs)
  vpc_id            = aws_vpc.default.id
  cidr_block        = element(local.new_private_subnet_cidrs, count.index)
  availability_zone = element(var.azs, count.index)
  tags = {
    Name              = "${var.vpc_name}-PrivateSubnet-${count.index + 1}"
    Terraform-Managed = "Yes"
    Env               = local.new_environment
    ProjectID         = local.projid
  }
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/5.public-routing.tf
================================================
resource "aws_route_table" "terraform-public" {
  vpc_id = aws_vpc.default.id

  # route {
  #   cidr_block = "0.0.0.0/0"
  #   gateway_id = aws_internet_gateway.default.id
  # }

  tags = {
    Name              = "${var.vpc_name}-MAIN-RT"
    Terraform-Managed = "Yes"
    Env               = local.new_environment
    ProjectID         = local.projid
  }
}

#VPC Peering Routes are getting recreated when we apply. To overcome this issue Routing Table
#is created without any routes, and the routes for the IGW and peering are created separately.
#https://stackoverflow.com/questions/49174421/terraform-route-table-forcing-new-resource-every-apply

resource "aws_route" "igw-route" {
  route_table_id         = aws_route_table.terraform-public.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.default.id
}

resource "aws_route_table_association" "terraform-public" {
  #count             = 4 # 0 1 2
  count = length(local.new_public_subnet_cidrs)
  #Using * is called Splat Syntax
  subnet_id      = element(aws_subnet.public-subnets.*.id, count.index)
  route_table_id = aws_route_table.terraform-public.id
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/6.private-routing.tf
================================================
resource "aws_route_table" "terraform-private" {
  vpc_id = aws_vpc.default.id

  tags = {
    Name              = "${var.vpc_name}-Private-RT"
    Terraform-Managed = "Yes"
    Env               = local.new_environment
    ProjectID         = local.projid
  }
}

resource "aws_route_table_association" "terraform-private" {
  #count             = 4 # 0 1 2
  count = length(local.new_private_subnet_cidrs)
  #Using * is called Splat Syntax
  subnet_id      = element(aws_subnet.private-subnets.*.id, count.index)
  route_table_id = aws_route_table.terraform-private.id
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/7.ec2.tf
================================================
data "aws_ami" "my_ami" {
  most_recent = true
  name_regex  = "^DevSecOps"
  owners      = ["211125710812"]
}


resource "aws_instance" "webservers" {
  #count                       = local.new_environment == "production" ? 3 : 1
  count                       = 3
  ami                         = data.aws_ami.my_ami.id
  instance_type               = lookup(var.instance_type, local.new_environment)
  key_name                    = var.key_name
  subnet_id                   = element(aws_subnet.public-subnets.*.id, count.index)
  vpc_security_group_ids      = [aws_security_group.allow_all.id]
  associate_public_ip_address = true
  tags = {
    Name              = "${var.vpc_name}-PublicServer-${count.index + 1}"
    Terraform-Managed = "Yes"
    Env               = local.new_environment
    ProjectID         = local.projid
    ManagedBy         = "Terraform"
  }
}



================================================
FILE: Day 21 AWS-Ansible-Part-1/8.sg.tf
================================================
resource "aws_security_group" "allow_all" {
  name        = "allow_all"
  description = "Allow all inbound traffic"
  vpc_id      = aws_vpc.default.id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.1.1.0/32"]
  }

  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = ["10.1.1.0/32"]
  }

  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.2.1.0/32"]
  }



  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  # lifecycle {
  #   ignore_changes = [
  #     ingress,
  #   ]
  # }
}


================================================
FILE: Day 21 AWS-Ansible-Part-1/9.vpc-peering.tf
================================================
data "aws_vpc" "ansible_vpc" {
  id = "vpc-036e5c5d11bdf83de"
}

data "aws_route_table" "ansible_vpc_rt" {
  subnet_id = "subnet-05597e96c163e70fd"
  #If subnet_id gives errors, look the route table up by its ID instead, e.g.:
  #route_table_id = "rtb-xxxxxxxxxxxxxxxxx"
}

resource "aws_vpc_peering_connection" "ansible-vpc-peering" {
  peer_vpc_id = data.aws_vpc.ansible_vpc.id
  vpc_id      = aws_vpc.default.id
  auto_accept = true
  accepter {
    allow_remote_vpc_dns_resolution = true
  }

  requester {
    allow_remote_vpc_dns_resolution = true
  }

  tags = {
    Name = "Ansible-${var.vpc_name}-Peering"
  }
}

resource "aws_route" "peering-to-ansible-vpc" {
  route_table_id            = aws_route_table.terraform-public.id
  destination_cidr_block    = "10.0.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.ansible-vpc-peering.id
  #depends_on                = [aws_route_table.terraform-public]
}

resource "aws_route" "peering-from-ansible-vpc" {
  route_table_id            = data.aws_route_table.ansible_vpc_rt.id
  destination_cidr_block    = "10.37.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.ansible-vpc-peering.id
  #depends_on                = [aws_route_table.terraform-public]
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/Playbooks
================================================
# CHECK HERE FOR PLAYBOOKS : https://github.com/saikiranpi/Ansible-Testing.git


================================================
FILE: Day 21 AWS-Ansible-Part-1/README.md
================================================
# Day 21 AWS-Ansible-Part-1

![image](https://github.com/user-attachments/assets/5cec40df-0b9a-4757-8399-d2fbe42fb064)

# Project Setup with Packer, Ansible, and Terraform

## Overview

In this project, we utilize several DevOps tools to set up, configure, and manage infrastructure and application deployment:
- **Packer**: Used for building machine images.
- **Ansible**: Configuration management tool, enabling automated configuration of our infrastructure post-deployment.
- **Terraform**: Infrastructure as Code (IaC) tool for provisioning resources.

We’ll walk through how to integrate **Ansible** with **Terraform** to manage configurations on an infrastructure that's already deployed, setting up an Ansible Controller and ensuring communication between it and the client servers.

## Architecture and Components

1. **Ansible Controller**: Runs all configuration commands on the clients/nodes.
2. **Ansible Clients**: Servers that Ansible manages remotely.

**Note**: Ansible doesn’t require any agent on the clients; it connects over SSH and only needs Python present on the managed nodes.

### Diagram
- [Add a diagram here depicting the VPC peering, Ansible Controller, and Ansible Clients.]

## Task Workflow

### Step 1: Provisioning with Terraform

1. **Modify the Terraform Configuration**:
   - Update `ec2.tf` with the correct AWS account number.
   - Set up **VPC Peering** to allow communication between the Ansible Controller VPC and the client VPC. Update the **Route Tables** accordingly.

2. **Deploy Resources**:
   - Use `terraform init`, `terraform fmt`, `terraform validate`, and finally `terraform apply -var-file=15.terraform.tfvars` to deploy the infrastructure.
   - Verify that the public and private IPs are assigned correctly.

### Step 2: Configure Ansible Inventory

1. **Inventory File (invfile)**:
   - This is a critical file listing all servers or hosts Ansible will manage.
   - It identifies the target machines, making it easy for Ansible to know where to apply configuration changes.
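
For reference, the generated `invfile` is a plain INI inventory rendered from `publicservers.tpl`; a trimmed example (the IPs shown here are placeholders):

```ini
[pub]
server01 ansible_port=22 ansible_host=203.0.113.11 ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem

[pvt]
testserver01 ansible_port=22 ansible_host=10.37.1.11 ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem
```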

### Step 3: Set Up Ansible Controller

1. **Prepare SSH Access**:
   - Place your SSH key at `/etc/ansible/ansiblekey.pem` on the Ansible controller and set permissions using `chmod 600`.
   
2. **Install Terraform on the Controller**:
   - Install Terraform, then clone the Git repository into the controller's root home directory.
   - Navigate to `ansiblecore` and initialize Terraform with `terraform init`.

3. **Validate Connectivity**:
   - Use Ansible to test connectivity with the client servers:
     ```bash
     ansible -i invfile pvt -m ping
     ```

### Step 4: Working with Ad-Hoc Commands in Ansible

1. **Run Ad-Hoc Commands**:
   - To check disk space across servers:
     ```bash
     ansible -i invfile pvt -m shell -a "df -h"
     ```
   - To filter for root volume only:
     ```bash
     ansible -i invfile pvt -m shell -a "df -h | grep '/dev/root'"
     ```
   - Increase verbosity by appending `-v`, `-vv`, or `-vvv` for debugging:
     ```bash
     ansible -i invfile pvt -m shell -a "df -h | grep '/dev/root'" -vv
     ```

2. **Target Specific Servers**:
   - For example, to exclude a specific server:
     ```bash
     ansible -i invfile 'all:!server01' -m shell -a "df -h | grep '/dev/root'" -v
     ```

### Step 5: Using Ansible Playbooks for Complex Tasks

1. **Create Playbooks Folder**:
   - Organize playbooks in the `playbooks` folder.

2. **Sample Nginx Playbook**:
   - The sample playbook installs nginx on the client servers.
   - Run syntax checks with:
     ```bash
     ansible-playbook -i invfile playbooks/1.nginx/o.sample-playbook.yml --syntax-check
     ```

3. **Run Playbooks**:
   - Deploy nginx using:
     ```bash
     ansible-playbook -i invfile playbooks/1.nginx/1.nginx-local.yml -vvv
     ```

4. **Ad-Hoc Cleanup with Privilege Escalation**:
   - To copy files to the clients, use the `copy` module; to remove unnecessary files, run the `shell` module with `--become` for root privileges:
     ```bash
     ansible -i invfile pvt -m shell -a "rm -rf /var/www/html/index.nginx-debian.html" --become
     ```
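
A minimal version of such an Nginx playbook looks like this — the module choices are standard, but the task names and file layout are illustrative (the actual playbooks live in the linked repo):

```yaml
---
- name: Install and start nginx
  hosts: pvt
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Ensure nginx is running and enabled on boot
      service:
        name: nginx
        state: started
        enabled: true
```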

### Step 6: User Management

- Run the user creation playbook:
  ```bash
  ansible-playbook -i invfile playbooks/1.nginx/5.user_creation.yml -vv
  ```

### Step 7: Redis Caching (Optional)

- Use Redis to cache Ansible facts for environments with a large number of servers:
  ```bash
  ansible -i invfile all -m setup
  ```

### Final Steps

1. **Push Code Changes**:
   - Regularly push updates from your local machine to Git.

2. **Destroying Resources**:
   - Use Terraform to destroy resources if needed:
     ```bash
     terraform destroy -var-file=15.terraform.tfvars
     ```


================================================
FILE: Day 21 AWS-Ansible-Part-1/publicservers.tpl
================================================
[pub]
server01 ansible_port=22 ansible_host=${testserver01}  ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem 
server02 ansible_port=22 ansible_host=${testserver02} ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem
server03 ansible_port=22 ansible_host=${testserver03} ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem

[pvt]
testserver01 ansible_port=22 ansible_host=${pvttestserver01}  ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem 
testserver02 ansible_port=22 ansible_host=${pvttestserver02} ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem
testserver03 ansible_port=22 ansible_host=${pvttestserver03} ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/ansiblekey.pem

[pip]
${testserver01}
${testserver02}
${testserver03}

================================================
FILE: Day 21 AWS-Ansible-Part-1/publicservers_yaml.tpl
================================================
all:
  hosts:
    ${testserver01}:
    ${testserver02}:
    ${testserver03}:
   
  children:
    pub:
     hosts:
       server01:
         ansible_port: 22
         ansible_host: ${testserver01}
         ansible_user: ubuntu
         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem
       server02:
         ansible_port: 22
         ansible_host: ${testserver02}
         ansible_user: ubuntu
         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem
       server03:
         ansible_port: 22
         ansible_host: ${testserver03}
         ansible_user: ubuntu
         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem
    pvt:
     hosts:
       testserver01:
         ansible_port: 22
         ansible_host: ${pvttestserver01}
         ansible_user: ubuntu
         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem
       testserver02:
         ansible_port: 22
         ansible_host: ${pvttestserver02}
         ansible_user: ubuntu
         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem
       testserver03:
         ansible_port: 22
         ansible_host: ${pvttestserver03}
         ansible_user: ubuntu
         ansible_ssh_private_key_file: /etc/ansible/ansiblekey.pem
    pip:
     hosts:
       ${testserver01}:
       ${testserver02}:
       ${testserver03}:

================================================
FILE: Day 22 AWS-Ansible-Part-2/README.md
================================================
![an-eye-catching-image-with-the-glossy-text-ansible-29br2wpbTleCfuUILMcoiA-q17GtADBT7CWaIP5UT9zHQ](https://github.com/user-attachments/assets/a9d0ef13-a6c4-4b52-8666-f177ff397e69)




# Ansible Redis & Vault Setup
Complete repo here: https://github.com/saikiranpi/Ansible-Testing.git

This repository contains Ansible playbooks that configure Redis as a fast external cache for Ansible facts and demonstrate how to use Ansible Vault to manage sensitive information securely. It is a step-by-step guide covering Redis fact caching, configuration management with handlers, and protecting sensitive data with Vault.

## Prerequisites

- Ansible installed on the controller node
- Python3 and Redis installed on the target servers
- Proper SSH access and configured inventory file (`invfile`)

---

### Step 1: Initial Setup and Verification

1. **Remove Old Playbooks:** Delete any previous playbook versions.
2. **Copy New Playbook:** Paste the latest playbook to the Ansible playbooks location.
3. **Test Connections:** Run a basic ping test to ensure connectivity:
   ```bash
   ansible -i invfile pvt -m ping
   ansible -i invfile pub -m ping
   ```

### Step 2: Collecting Facts with Redis Caching

**Collect Ansible Facts:** Use the following command to gather facts on `testserver01`:
   ```bash
   ansible -i invfile testserver01 -m setup
   ```
   
To reduce memory usage, configure Redis as an external caching server for storing these facts.

#### Redis Configuration

1. **Playbooks and Configuration Files:**
   - `redis.config`: Specifies the IP address to bind (use public IP if necessary).
   - `redis.service`: Ensures Redis service starts.
   - `redis.yml`: Runs the Redis setup.

2. **Run the Playbook from the Controller** (syntax-check first, then apply):
   ```bash
   ansible-playbook -i invfile playbooks/2/redis.yml --syntax-check
   ansible-playbook -i invfile playbooks/2/redis.yml -v
   ```

3. **Verify Installation on Test Server:**
   ```bash
   systemctl status redis
   ```

#### Handlers

Handlers ensure the Redis service restarts only when necessary (if there are changes in `redis.config` or `redis.service`).

### Step 3: Fetch and Store Files

Once Redis is configured:
1. **Backup File Creation:** Backups are created and saved under `/tmp` on the server.
2. **Download Backup File:** Use Ansible's `fetch` module to copy the backup files from the test server to your local machine.

Re-gather facts to confirm the Redis cache is being populated:
```bash
ansible -i invfile all -m setup
```
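
The download step can be expressed with Ansible's `fetch` module, which copies files from managed nodes back to the controller. A minimal sketch — the source path, destination, and play name are illustrative:

```yaml
---
- name: Fetch backup files from the test servers
  hosts: pvt
  tasks:
    - name: Copy the /tmp backup back to the controller
      fetch:
        src: /tmp/backup.tar.gz
        dest: /root/backups/   # files land per-host under this directory
        flat: no               # keep the hostname in the destination path
```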

### Step 4: Secure Sensitive Information with Ansible Vault

Ansible Vault is used to manage sensitive data like AWS credentials securely.

1. **Create Vault File:**
   ```bash
   ansible-vault create aws_creds
   ```
   Insert your AWS credentials (access key and secret key).

2. **Encrypt and Decrypt Files:**
   - Encrypt the file:
     ```bash
     ansible-vault encrypt aws_creds
     ```
   - Decrypt the file:
     ```bash
     ansible-vault decrypt aws_creds
     ```

3. **Run Playbook Using Vault:**
   ```bash
   ansible-playbook -i invfile playbooks/vault/vaulttesting.yml -v
   ```
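
Inside such a playbook, the encrypted file is typically pulled in with `vars_files`; Ansible decrypts it transparently when the vault password is supplied. A sketch — the play and variable names are illustrative:

```yaml
---
- name: Use vaulted AWS credentials
  hosts: localhost
  vars_files:
    - aws_creds              # file encrypted with ansible-vault
  tasks:
    - name: Confirm the credentials loaded without printing them
      debug:
        msg: "aws_access_key defined: {{ aws_access_key is defined }}"
```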

### Step 5: Handling Failures with Block and Rescue

Define custom error handling with `block` and `rescue` in your playbooks to ensure playbook execution doesn’t halt due to failures.
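
A sketch of the pattern — the tasks themselves are illustrative:

```yaml
---
- name: Demonstrate block/rescue/always
  hosts: pvt
  tasks:
    - block:
        - name: Try a command that may fail
          command: /bin/false
      rescue:
        - name: Runs only if a task in the block failed
          debug:
            msg: "Recovered from failure, continuing the play"
      always:
        - name: Runs regardless of the outcome
          debug:
            msg: "Cleanup step"
```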

### Step 6: Secure Configuration with Vault Password File

1. **Set up Vault Password File**:
   - Create a vault password file, `/root/vaultpass`, and set permissions:
     ```bash
     chmod 600 /root/vaultpass
     ```
   - Update `ansible.cfg` to include the vault password file:
     ```ini
     [defaults]
     vault_password_file=/root/vaultpass
     ```

### Next Steps

- Explore the difference between shell, command, and raw modules in Ansible.
- Automate playbook runs with a Cron Job to periodically update facts.

### License

This project is licensed under the MIT License.

---

## Pending Task from the Video

On the Ansible controller:

```bash
cd /etc/ansible
nano ansible.cfg
```

Paste the following under `[defaults]`:

```ini
gathering = smart
fact_caching_timeout = 86400
fact_caching = redis
fact_caching_prefix = ansible_DevSecOps_Saikiran
fact_caching_connection = <TESTSERVER01-PUBLIC-IP>:6379:0
```

![image](https://github.com/user-attachments/assets/5a3e46dd-2534-4b97-8fff-0a380c747433)

Save with `Ctrl+X`, then `Y`, then `Enter`, and install the Redis Python client:

```bash
apt update
apt install -y python3-pip
pip3 install redis
```

Verify the cache:
- On the controller: `ansible -i invfile pvt -m setup`
- On the client (testserver01): run `redis-cli`, then `KEYS *` — the cached facts should appear.




================================================
FILE: Day 23 AWS-Ansible-Part-3/README.md
================================================
# Day 23 AWS-Ansible-Part-3

![a-3d-render-of-a-glowing-ansible-logo-below-the-lo-4QgGoilXQ36n-8iyPqrNXQ-mBzRZGehQfeGRujEXVxpTQ](https://github.com/user-attachments/assets/240ba7fd-de4a-4f64-9e16-f36c61ca5720)

# Complete Code here : https://github.com/saikiranpi/Ansible-Testing

---

# Ansible Jinja2 Templating with MySQL and Nginx Playbooks

This project demonstrates the use of Jinja2 templates in Ansible to deploy and configure services on multiple servers. It includes examples of pre- and post-tasks, as well as how to manage MySQL and Nginx configurations using Ansible playbooks.

## Project Setup

1. **Initialize Ansible Configuration**
   - Navigate to the Ansible directory:
     ```bash
     cd /etc/ansible/
     ```
   - Generate the default Ansible configuration:
     ```bash
     ansible-config init --disabled > ansible.cfg
     ```
   - Modify `ansible.cfg` for common settings:
     ```bash
     nano ansible.cfg
     ```
   - Update the following values:
     ```ini
     host_key_checking = False
     remote_user = ansibleadmin
     private_key_file = /home/ansibleadmin/key.pem
     ```

2. **Initialize and Apply Terraform**
   - Ensure you are in the correct directory and apply the Terraform configuration to set up your infrastructure:
     ```bash
     terraform init
     terraform apply
     ```

## Jinja2 Templating with Nginx

The `nginx-jinja2.yml` playbook uses Jinja2 templates to configure Nginx.

1. Run the Nginx playbook:
   ```bash
   ansible-playbook -i invfile nginx-jinja2.yml -v
   ```
2. Once the playbook is complete, check the public IP of the server to verify that Nginx is running.
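
For context, a Jinja2 template such as `templates/nginx.j2` substitutes playbook variables into the rendered configuration. A simplified sketch — the variable names are illustrative, not the repo's exact template:

```jinja
server {
    listen {{ nginx_port }};
    server_name {{ server_name }};

    location / {
        root {{ web_root }};
        index index.html;
    }
}
```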

## MySQL Setup with Jinja2

This section explains how to install and configure MySQL using Ansible and Jinja2 templates. All variable values are defined within the configuration file.

1. Run the MySQL playbook:
   ```bash
   ansible-playbook -i invfile playbooks/mysql-jinja2.yml
   ```
2. Verify MySQL service status:
   ```bash
   ansible -i invfile pvt -m shell -a "service mysql status"
   ```
3. Once the MySQL service is running, log in to the server and confirm that you can access MySQL databases:
   ```sql
   mysql> SHOW DATABASES;
   ```
4. Add data to the `myflixdb` database:
   ```sql
   USE myflixdb;
   SHOW TABLES;
   SELECT * FROM movies;
   ```
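The `mysql.j2` template referenced in the file structure below is kept in the linked repo; as an assumed illustration (variable names here are hypothetical), such a template injects playbook variables into MySQL configuration settings:

```ini
# Illustrative mysql.j2 sketch -- variable names are assumptions
[mysqld]
bind-address    = {{ mysql_bind_address }}
port            = {{ mysql_port }}
max_connections = {{ mysql_max_connections }}
```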

## Pre-Tasks and Post-Tasks

Pre-tasks and post-tasks are used to prepare the system before the main tasks or clean up afterward.

### Example Task: Checking `/tmp` Folder

1. Run the playbook with pre-tasks and post-tasks:
   ```bash
   ansible-playbook -i invfile playbooks/pre_post_tasks.yml
   ```
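A minimal shape for such a playbook (illustrative; the real `pre_post_tasks.yml` is in the linked repo) is:

```yaml
# Illustrative sketch -- not the repo's actual playbook
- name: Demonstrate pre-tasks and post-tasks
  hosts: all
  pre_tasks:
    - name: Check free space in /tmp before the main tasks run
      command: df -h /tmp
      register: tmp_space
  tasks:
    - name: Main task placeholder
      debug:
        msg: "/tmp check returned: {{ tmp_space.stdout_lines }}"
  post_tasks:
    - name: Clean up scratch files afterwards
      file:
        path: /tmp/ansible_scratch
        state: absent
```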

## Running the Playbooks on Multiple Servers

If you need to run these playbooks across 100 or more servers, Ansible's inventory and parallel execution capabilities make this straightforward. Update your inventory file (`invfile`) with the list of servers, and then run the playbooks with the inventory specified.
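For example, a grouped `invfile` might look like the sketch below (hostnames and IPs are placeholders), and `--forks` controls how many servers Ansible configures in parallel:

```ini
# Illustrative inventory sketch -- host entries are placeholders
[pvt]
10.0.1.11
10.0.1.12

[web]
web[01:50].example.com   ; range syntax expands to 50 hosts
```

`ansible-playbook -i invfile nginx-jinja2.yml --forks 50` would then run against all matched hosts, 50 at a time (the default is 5).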

## Git Commands for Version Control

1. To push any changes to your playbook repository:
   ```bash
   git push
   ```
2. To pull the latest updates:
   ```bash
   git pull
   ```

## File Structure

```
/etc/ansible/
├── ansible.cfg               # Ansible configuration file
├── invfile                   # Inventory file listing server IPs or hostnames
├── playbooks/
│   ├── nginx-jinja2.yml      # Nginx playbook using Jinja2 template
│   ├── mysql-jinja2.yml      # MySQL playbook using Jinja2 template
│   └── pre_post_tasks.yml    # Playbook with pre-tasks and post-tasks
└── templates/
    ├── nginx.j2              # Nginx configuration template
    └── mysql.j2              # MySQL configuration template
```

## Requirements

- Ansible 2.9+
- Terraform (if using for infrastructure setup)
- SSH access to the target servers

## Usage Notes

This project is suitable for dynamic and scalable server setups. With Jinja2 templating, you can easily customize configurations for different environments or requirements, making it highly adaptable for both development and production needs.

---


================================================
FILE: Day 24 Ansible-Part-4 DynamicInventory_AWX/README.md
================================================
# Ansible Dynamic Inventory and Ansible Tower

"Anible Dynamic Inventory" title with Attractive Font for youtube Thumbnail 

This guide explains how to use **Ansible Dynamic Inventory**  for managing dynamic environments, such as those involving auto-scaling groups. Unlike static inventory, dynamic inventory adapts to infrastructure changes, such as scaling up or down during load variations.

---

## Overview

### Static vs Dynamic Use Case

- **Static Use Case**: Targets predefined servers without HA (High Availability) or auto-scaling. Servers remain fixed, without scaling up or down.
- **Dynamic Use Case**: Ideal for environments with auto-scaling groups. Servers scale automatically based on load, requiring a dynamic inventory for effective management.

---

## Prerequisites

1. **Install Required Tools**:
   ```bash
   sudo apt-get update
   sudo apt-get install python3-pip jq -y
   sudo pip3 install boto3
   sudo apt install -y awscli
   aws --version
   ```

2. **Configure Ansible**:
   - Navigate to the Ansible configuration directory:
     ```bash
     cd /etc/ansible
     ```
   - Back up the `ansible.cfg` file:
     ```bash
     cp ansible.cfg ansible.cfg.bak
     ```
   - Edit the `ansible.cfg` file and enable the **inventory plugins**:
     ```bash
     nano ansible.cfg
     ```
     Locate `[inventory]` and update as needed.

3. **Create EC2 Plugin File**:
   - Create a new file for the EC2 plugin:
     ```bash
     nano aws_ec2.yaml
     ```
   - Paste the following configuration:
     ```yaml
     plugin: aws_ec2
     regions:
       - us-east-1
     keyed_groups:
       - key: tags
         prefix: tag
       - prefix: instance_type
         key: instance_type
       - key: placement.region
         prefix: aws_region
     ```

---

## Steps to Use Dynamic Inventory

### Deploy Infrastructure First
1. Validate the dynamic inventory:
   ```bash
   ansible-inventory -i /etc/ansible/aws_ec2.yaml --list
   ansible-inventory -i /etc/ansible/aws_ec2.yaml --list | jq
   ```
2. Test connectivity using tags:
   ```bash
   ansible -i /etc/ansible/aws_ec2.yaml tag_terraform_managed_yes -m ping
   ```
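The group name comes from the `keyed_groups` settings in `aws_ec2.yaml`: tag keys and values are joined under the configured prefix, with non-alphanumeric characters replaced by underscores. Assuming an instance tagged `terraform_managed = yes` with instance type `t2.small` in `us-east-1`, the generated groups would look roughly like:

```text
tag_terraform_managed_yes
instance_type_t2_small
aws_region_us_east_1
```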

### Target Specific Resources
- Set the dynamic inventory path:
  ```bash
  export dynamic='/etc/ansible/aws_ec2.yaml'
  ```
- Example command to run on specific instance types:
  ```bash
  ansible -i $dynamic instance_type_t2_small -m shell -a "df -h"
  ```

### Run Playbooks
1. Create a playbook targeting specific tags:
   - Edit or create the playbook under the `dynamic_inventory` folder:
     ```bash
     nano dynamic_nginx-jinja2.yaml
     ```
   - Update the `hosts` to:
     ```yaml
     hosts: tag_managedby_terraform
     ```
2. Run the playbook:
   ```bash
   ansible-playbook -i $dynamic playbook/dynamic_inventory/dynamic_nginx.yaml
   ```

3. Replace `nginx` with `mysql` or other playbooks as needed.

---

## Git Workflow for Dynamic Inventory
1. Create and switch to a new branch:
   ```bash
   git checkout -b dynamic_inventory
   ```
2. Push changes to remote:
   ```bash
   git push origin dynamic_inventory
   ```
3. Pull updates to the local repo:
   ```bash
   git pull
   ```

---

## Auto-Scaling Integration
When an auto-scaling group provisions instances, the dynamic inventory automatically updates to target the new resources. Verify using:
```bash
ansible-inventory -i aws_ec2.yaml --list
ansible-inventory -i aws_ec2.yaml --graph
```

---

## Example Playbook Execution
1. Modify the number of instances in your Terraform configuration:
   ```bash
   terraform apply -var-file="vars.tfvars" -auto-approve
   ```
2. Run the playbook:
   ```bash
   ansible-playbook -i /etc/ansible/aws_ec2.yaml playbook/dynamic_inventory/dynamic_nginx.yaml
   ```
3. Validate with the updated inventory.

---

## Notes
- Ensure that the `ansible.cfg` file is correctly configured for plugins.
- Use `jq` to format and verify inventory JSON outputs.
- Replace line endings with `LF` if issues arise during playbook execution.

End of Dynamic Inventory.


================================================
FILE: Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/0-steps.sh
================================================
# 1
sudo certbot certonly --manual --preferred-challenges=dns --key-type rsa \
    --email pinapathruni.saikiran@gmail.com --server https://acme-v02.api.letsencrypt.org/directory \
    --agree-tos -d '*.cloudvishwakarma.in'

# Certificate is saved at: /etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem
# Key is saved at:         /etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem

# +++++ IF ISSUE +++++

free -m
top
#DRY-RUN
certbot certonly --dry-run --manual --preferred-challenges=dns --key-type rsa \
    --email pinapathruni.saikiran@gmail.com --server https://acme-v02.api.letsencrypt.org/directory \
    --agree-tos -d '*.cloudvishwakarma.in'

# +++++ IF ISSUE +++++

# 2
apt update && apt install -y unzip net-tools

# 3
wget https://releases.hashicorp.com/vault/1.13.2/vault_1.13.2_linux_amd64.zip
unzip vault_1.13.2_linux_amd64.zip
cp vault /usr/bin/vault
mkdir -p /etc/vault
mkdir -p /var/lib/vault/data
vault version

# 4
nano config.hcl
cp config.hcl /etc/vault/config.hcl

# 5
nano /etc/systemd/system/vault.service

# 6
sudo systemctl daemon-reload
sudo systemctl stop vault
sudo systemctl start vault
sudo systemctl enable vault
sudo systemctl status vault --no-pager

# 7 - Check the Vault process from the CLI
ps -ef | grep -i vault | grep -v grep

# 8
export VAULT_ADDR=https://kmsvault.cloudvishwakarma.in:8200
echo "export VAULT_ADDR=https://kmsvault.cloudvishwakarma.in:8200" >>~/.bashrc

vault status

# 9
vault operator init | tee -a /etc/vault/init.file

# 10 - Standard next step: unseal with 3 of the 5 unseal keys, then log in with
# the root token (both are recorded in /etc/vault/init.file)
vault operator unseal
vault login


================================================
FILE: Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/1-config.hcl
================================================
disable_cache = true
disable_mlock = true
ui            = true
listener "tcp" {
  address                  = "0.0.0.0:8200"
  tls_disable              = 0
  tls_cert_file            = "/etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem"
  tls_key_file             = "/etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem"
  tls_disable_client_certs = "true"

}
storage "file" {
  path = "/var/lib/vault/data"
}
api_addr                = "https://kmsvault.cloudvishwakarma.in:8200"
max_lease_ttl           = "10h"
default_lease_ttl       = "10h"
cluster_name            = "vault"
raw_storage_endpoint    = true
disable_sealwrap        = true
disable_printable_check = true

================================================
FILE: Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/2-config-kms.hcl
================================================
disable_cache = true
disable_mlock = true
ui            = true
listener "tcp" {
  address                  = "0.0.0.0:8200"
  tls_disable              = 0
  tls_cert_file            = "/etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem"
  tls_key_file             = "/etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem"
  tls_disable_client_certs = "true"

}
storage "s3" {
  bucket = "workspacesbucket01"
}

seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "KMSID here"
  endpoint   = "kms.us-east-1.amazonaws.com"
}

api_addr                = "https://kmsvault.cloudvishwakarma.in:8200"
max_lease_ttl           = "10h"
default_lease_ttl       = "10h"
cluster_name            = "vault"
raw_storage_endpoint    = true
disable_sealwrap        = true
disable_printable_check = true




================================================
FILE: Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/2-vault.service
================================================
[Unit]
Description=HashiCorp Vault - A tool for managing secrets
Documentation=https://www.vaultproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/vault/config.hcl

[Service]
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/bin/vault server -config=/etc/vault/config.hcl
ExecReload=/bin/kill --signal HUP
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
StartLimitBurst=3
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


================================================
FILE: Day 25 HashicorpVault AWSIntegration/README.md
================================================
# Day 25 HashicorpVault AWSIntegration
Below is a structured repository content outline and README for the integration of HashiCorp Vault with Ansible:

---

### Repository Structure

```plaintext
HashiCorp-Vault-Ansible-Integration/
├── README.md
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
├── vault/
│   ├── config.hcl
│   ├── config-kms.hcl
│   ├── init.file
├── ansible/
│   ├── playbook.yml
│   ├── vault_secret_retrieve.yml
├── docs/
│   ├── installation_steps.md
│   ├── troubleshooting.md
└── scripts/
    ├── setup_docker.sh
    ├── setup_ssl.sh
```

---

### **README.md**

```markdown
# HashiCorp Vault Integration with Ansible

This repository demonstrates the integration of **HashiCorp Vault** with **Ansible** for managing secrets in real-world scenarios, specifically focusing on environments where servers need to retrieve sensitive information after unexpected reboots. The solution leverages Terraform for provisioning, AWS KMS for auto-unsealing, and Docker to host Vault.

---

## **Use Case**

A Java application is running on a server. When the server reboots due to a disaster or maintenance:
- The application must securely retrieve sensitive information (e.g., credentials) from a centralized Key Management System (KMS).
- HashiCorp Vault is used for this purpose, ensuring compatibility with both on-premises and cloud environments.

### Why not Ansible Vault?
- **Ansible Vault** is ideal for encrypting sensitive data like API keys or database credentials within playbooks. However, it cannot autonomously retrieve secrets from another server when triggered by events like server reboots.
- **HashiCorp Vault**, combined with AWS KMS, provides auto-unsealing capabilities and centralized secret management.

---

## **Solution Overview**

1. **HashiCorp Vault Setup**:
   - Install Vault on a t2.medium instance.
   - Configure Vault with auto-unsealing using AWS KMS.
   - Store Vault initialization keys securely in S3.

2. **Terraform Configuration**:
   - Provisions Vault server.
   - Sets up IAM roles and S3 buckets for storing Vault keys.
   - Configures KMS for encryption and auto-unsealing.

3. **Ansible Integration**:
   - Demonstrates how to retrieve secrets stored in Vault using Ansible playbooks.

---

## **Setup Instructions**

### 1. Prerequisites
- AWS Account with administrative access.
- A t2.medium EC2 instance with Docker installed.
- Terraform installed locally.
- Ansible installed locally.

### 2. Vault Installation
Follow the steps in `docs/installation_steps.md` to:
1. Start an EC2 instance.
2. Install Docker and SSL.
3. Configure Vault.

### 3. Configuring AWS KMS
- Navigate to AWS Management Console > KMS.
- Create a symmetric key with "Encrypt and Decrypt" permissions.
- Add the IAM role of the EC2 instance to allow access.

### 4. Configuring Vault with KMS
1. Replace the Vault config file:
   ```bash
   sudo nano /etc/vault/config.hcl
   ```
   Copy and paste the contents from `vault/config-kms.hcl`.
2. Ensure S3 bucket details are correctly updated.
3. Initialize Vault:
   ```bash
   vault operator init | tee -a /etc/vault/init.file
   ```

### 5. Terraform Setup
- Navigate to the `terraform/` directory.
- Update variables in `variables.tf` for your environment.
- Apply the configuration:
  ```bash
  terraform apply
  ```

### 6. Reboot Handling
- After rebooting the server:
  ```bash
  terraform apply
  ```
- Verify that Vault is accessible and unsealed automatically.
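With the `awskms` seal configured, `vault status` after a reboot should report the server unsealed without any manual `vault operator unseal` step; the relevant lines of the output (abridged; exact formatting may differ by Vault version) look like:

```text
Seal Type       awskms
Sealed          false
```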

---

## **Ansible Playbook Example**

Retrieve secrets from Vault after server reboot:
```yaml
---
- name: Retrieve secrets from HashiCorp Vault
  hosts: localhost
  tasks:
    - name: Fetch secret from Vault
      uri:
        url: "http://<vault-server-ip>:8200/v1/secret/data/my-secret"
        method: GET
        headers:
          X-Vault-Token: "{{ vault_token }}"
      register: secret_response

    - name: Debug retrieved secret
      debug:
        msg: "{{ secret_response.json }}"
```

---

## **Troubleshooting**
- Refer to `docs/troubleshooting.md` for common issues, such as:
  - Vault not unsealing after reboot.
  - KMS misconfiguration.
  - Terraform or Ansible errors.

---

## **License**
This repository is licensed under the MIT License. See `LICENSE` for details.
```

---

### Additional Notes
1. **Scripts**:
   - `setup_docker.sh`: Automates Docker installation.
   - `setup_ssl.sh`: Configures SSL for Vault.

2. **Documentation**:
   - `docs/installation_steps.md`: Step-by-step guide for setting up Vault and related components.
   - `docs/troubleshooting.md`: Solutions for potential issues during setup and execution.


================================================
FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/1-provider.tf
================================================
provider "aws" {
}

provider "vault" {
  address         = var.vault_addr
  token           = var.vault_token
  skip_tls_verify = true
}

================================================
FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/2-random-passwords.tf
================================================
#Generating random password for Linux Machines
resource "random_password" "linux-machine-passwords" {
  count            = var.vm_count
  length           = 16
  special          = true
  override_special = "!@#$%^"
  min_upper        = 4
  min_lower        = 4
  min_special      = 4
  min_numeric      = 4
}

================================================
FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/3-hashi-vault-passwords.tf
================================================
resource "vault_mount" "java-app-dev" {
  path        = "java-app-dev"
  type        = "kv"
  options     = { version = "1" }
  description = "KV Version 1 secret engine mount"
}

resource "vault_kv_secret" "linux-machine-1" {
  path = "${vault_mount.java-app-dev.path}/linux-machine-1"
  data_json = jsonencode(
    {
      linux-machine-1 = random_password.linux-machine-passwords.0.result
    }
  )
}

resource "vault_kv_secret" "linux-machine-2" {
  path = "${vault_mount.java-app-dev.path}/linux-machine-2"
  data_json = jsonencode(
    {
      linux-machine-2 = random_password.linux-machine-passwords.1.result
    }
  )
}

resource "vault_kv_secret" "linux-machine-3" {
  path = "${vault_mount.java-app-dev.path}/linux-machine-3"
  data_json = jsonencode(
    {
      linux-machine-3 = random_password.linux-machine-passwords.2.result
    }
  )
}

================================================
FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/policy.yaml
================================================
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "kms:*",
    "Resource": "*"
  }
}


================================================
FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/user.tf
================================================
resource "random_password" "vm-passwords" {
  count            = 3
  length           = 16
  special          = true
  override_special = "!#$%&*()-_=+[]{}<>:?"
}

resource "vault_mount" "avinash" {
  path        = "avinash"
  type        = "kv-v2"
  description = "This Container avinash Family Secrets"
}

resource "vault_mount" "saikiran" {
  path        = "saikiran"
  type        = "kv-v2"
  description = "This Container saikiran Family Secrets"
}


resource "vault_kv_secret_v2" "Prod-secrets" {
  count               = 3
  mount               = vault_mount.avinash.path
  name                = "linux-machine-${count.index + 1}"
  cas                 = 1
  delete_all_versions = true
  data_json = jsonencode(
    {
      username = "adminsai",
      password = element(random_password.vm-passwords.*.result, count.index)
    }
  )
  custom_metadata {
    max_versions = 5
    data = {
      foo = "vault@avinash.com"
    }
  }
}


#Creating saikiran Secrets
resource "vault_kv_secret_v2" "super-secrets" {
  count               = 3
  mount               = vault_mount.saikiran.path
  name                = "super-linux-machine-${count.index + 1}"
  cas                 = 1
  delete_all_versions = true
  data_json = jsonencode(
    {
      username = "adminsai",
      password = element(random_password.vm-passwords.*.result, count.index)
    }
  )
  custom_metadata {
    max_versions = 5
    data = {
      foo = "vault@saikiran.com"
    }
  }
}

================================================
FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/variables.tf
================================================
variable "vault_addr" {
  default = "https://kmsvault.cloudvishwakarma.in:8200"
}
variable "vault_token" {
  default = "TOKEN-HERE"
}

variable "vm_count" {
  default = 3
}





================================================
FILE: Day 26 Docker-Full-Course/README.md
================================================
# Day 26 Docker-Full-Course

![00](https://github.com/user-attachments/assets/77c9bf84-ffca-478a-b288-058f5e28b9ab)

https://youtu.be/5GhbkrMukmk?si=SqzutdvGZy-A8Hex



================================================
FILE: Day 27 Maven-JFrog-Sonarqube/README.md
================================================

![Untitled design](https://github.com/user-attachments/assets/dfaf3392-9cfd-43b2-86c1-e1bdd956b3ee)


# Maven-Jfrog Integration

This repository showcases the integration of **Maven**, **JFrog**, and **SonarQube** to build, manage, and analyze a Java-based Spring Boot application. Below are the detailed steps to set up and deploy a sample application.

---

## Table of Contents

1. [Introduction](#introduction)
2. [Prerequisites](#prerequisites)
3. [Setup and Installation](#setup-and-installation)
4. [Maven Lifecycle](#maven-lifecycle)
5. [Integrating with JFrog](#integrating-with-jfrog)
6. [Pushing Artifacts to JFrog](#pushing-artifacts-to-jfrog)
7. [Version Management](#version-management)
8. [License](#license)

---

## Introduction

This project demonstrates:
- Building a Spring Boot application using Maven.
- Managing dependencies with `pom.xml`.
- Storing and managing build artifacts using JFrog Artifactory.
- Incremental versioning of artifacts.
- Deployment to a private repository for reuse in other projects.

**Note:** While this project highlights all major steps, application-specific code and configurations will typically be managed by your development team.

---

## Prerequisites

1. **AWS EC2 Instance**:
   - Instance type: `T2.large`
   - Storage: `20 GB`
   - OS: Ubuntu 20.04+
2. **Tools**:
   - **Maven**: Installed and configured.
   - **OpenJDK**: Version 17 or higher.
   - **JFrog Artifactory**: Installed and licensed.
   - **Git**: Configured with SSH authentication.
3. **Networking**:
   - Configure DNS using Route 53 (if applicable).

---

## Setup and Installation

### 1. Create EC2 Instance
Launch an EC2 instance and install required tools:

```bash
sudo apt update
sudo apt install -y openjdk-17-jdk maven git jq net-tools
```

### 2. Clone and Build the Application

```bash
git clone https://github.com/spring-projects/spring-petclinic.git
cd spring-petclinic
mvn clean package
```

### 3. Push Code to Azure DevOps
1. Initialize a new Git repository if needed:
   ```bash
   rm -rf .git
   git init
   ```
2. Set up SSH authentication:
   - Generate an SSH key: `ssh-keygen`
   - Add the public key to Azure DevOps under **User Settings > SSH Public Keys**.
   - Clone the repository using the SSH link.

3. Push code:
   ```bash
   git add .
   git commit -m "Initial commit"
   git remote add origin <ssh-link>
   git push -u origin master
   ```

---

## Maven Lifecycle

### Maven Commands Overview

1. **Validate**:
   ```bash
   mvn validate
   ```
   Ensures the `pom.xml` is valid.

2. **Compile**:
   ```bash
   mvn compile
   ```
   Compiles Java files into `.class` files.

3. **Package**:
   ```bash
   mvn package
   ```
   Packages the compiled code into `.jar` or `.war` artifacts.

4. **Run Application**:
   ```bash
   java -jar target/*.jar
   ```

5. **Clean**:
   ```bash
   mvn clean
   ```
   Deletes previous build artifacts.

---

## Integrating with JFrog

1. **Install JFrog**:
   ```bash
   wget -O jfrog-deb-installer.tar.gz "https://releases.jfrog.io/artifactory/jfrog-prox/org/artifactory/pro/deb/jfrog-platform-trial-prox/[RELEASE]/jfrog-platform-trial-prox-[RELEASE]-deb.tar.gz"
   tar -xvzf jfrog-deb-installer.tar.gz
   cd jfrog-platform-trial-pro*
   sudo ./install.sh
   sudo systemctl start artifactory.service
   ```

2. **Configure JFrog**:
   - Access JFrog via `http://<instance-ip>:8082`.
   - Apply the trial license.
   - Create a Maven repository (`libs-release-local`).

3. **Update Maven Configuration**:
   Add the following in your `settings.xml`:

   ```xml
   <servers>
      <server>
         <id>central</id>
         <username>YOUR_USERNAME</username>
         <password>YOUR_PASSWORD</password>
      </server>
   </servers>
   ```

---

## Pushing Artifacts to JFrog

1. **Add Distribution Management to `pom.xml`**:

   ```xml
   <distributionManagement>
      <repository>
         <id>central</id>
         <name>libs-release</name>
         <url>http://<jfrog-instance>:8081/artifactory/libs-release-local</url>
      </repository>
      <snapshotRepository>
         <id>snapshots</id>
         <name>libs-snapshot</name>
         <url>http://<jfrog-instance>:8081/artifactory/libs-snapshot-local</url>
      </snapshotRepository>
   </distributionManagement>
   ```

2. **Deploy Artifact**:
   ```bash
   mvn clean install deploy
   ```

3. Verify the artifact in JFrog's repository.

---

## Version Management

Update versions dynamically using Maven's version plugin:

```bash
mvn versions:set -DnewVersion=1.0.0
mvn clean install deploy
```

Repeat for subsequent versions:
```bash
mvn versions:set -DnewVersion=1.0.1
```
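Under the hood, `versions:set` rewrites the `<version>` element of `pom.xml` (and of any child modules); for example:

```xml
<!-- pom.xml before: -->
<version>1.0.0</version>
<!-- after "mvn versions:set -DnewVersion=1.0.1": -->
<version>1.0.1</version>
```

The plugin keeps a `pom.xml.versionsBackup` by default; pass `-DgenerateBackupPoms=false` to skip creating it.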

---

## License

This project is licensed under the [MIT License](LICENSE).


================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/0-maven.sh
================================================
# Create a t2.xlarge instance.
# Create a simple DNS record for JFrog with the public IP.
sudo apt update && sudo apt install -y openjdk-17-jdk maven

# Clone the same repo locally from PowerShell and push it to the Azure DevOps repo.

# Clone the petclinic app https://github.com/saikiranpi/springboot-petclinic.git on Linux
# and make sure your SSH keys are configured.

mvn clean install deploy

# -----
# Now let's deploy JFrog for storing our artifacts.

cd /usr/local/bin
wget -O jfrog-deb-installer.tar.gz "https://releases.jfrog.io/artifactory/jfrog-prox/org/artifactory/pro/deb/jfrog-platform-trial-prox/[RELEASE]/jfrog-platform-trial-prox-[RELEASE]-deb.tar.gz"
tar -xvzf jfrog-deb-installer.tar.gz
sudo apt install jq -y && sudo apt install net-tools -y
cd jfrog-platform-trial-pro*
# sudo chown -R postgres:postgres /var/opt/jfrog/postgres/data
# sudo chmod -R 700 /var/opt/jfrog/postgres/data
sudo ./install.sh
sudo systemctl start artifactory.service
sudo systemctl start xray.service

# You need a trial license:
# copy the Artifactory license and paste it as the key, then Next > Next > Next.
# We need a Maven repo here -> JFrog -> http://jfrog.cloudvishwakarma.in
# Finish.


http://localhost:8082/


# Generate the settings file (settings.xml) in JFrog and change the
# username and password, set snapshot to true,
# and change the JFrog URL accordingly.
# Paste the generated settings.xml under /root/.m2/settings.xml,
# then stay in the petclinic app dir and run "mvn clean install deploy".

################################################################################
java -jar target/*.jar

mvn versions:set -DnewVersion=1.0.0
mvn clean install deploy


================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/0-sonarqube.sh
================================================
# 1. Set Up PostgreSQL Instance for SonarQube

sudo mkdir -p /var/lib/postgresql/sonarqube
sudo chown postgres:postgres /var/lib/postgresql/sonarqube
sudo su - postgres
/usr/lib/postgresql/15/bin/initdb -D /var/lib/postgresql/sonarqube

# 2. Configure PostgreSQL
Edit postgresql.conf:

sudo nano /var/lib/postgresql/sonarqube/postgresql.conf

Add:

listen_addresses = 'localhost'
port = 5433
unix_socket_directories = '/var/run/postgresql'

Edit pg_hba.conf:
sudo nano /var/lib/postgresql/sonarqube/pg_hba.conf

Add:

local all postgres trust
local all all md5
host all all 127.0.0.1/32 md5
host all all ::1/128 md5

# 3. Create and Start PostgreSQL Service

sudo nano /etc/systemd/system/postgresql-sonarqube.service

Add service content:

[Unit]
Description=PostgreSQL for SonarQube
After=network.target

[Service]
Type=forking
User=postgres
Group=postgres
ExecStart=/usr/lib/postgresql/15/bin/pg_ctl -D /var/lib/postgresql/sonarqube -l /var/log/postgresql/postgresql-sonarqube.log start
ExecStop=/usr/lib/postgresql/15/bin/pg_ctl -D /var/lib/postgresql/sonarqube stop
TimeoutSec=300

[Install]
WantedBy=multi-user.target

Start service:

sudo systemctl daemon-reload
sudo systemctl start postgresql-sonarqube
sudo systemctl enable postgresql-sonarqube

# 4. Create Database and User

psql -p 5433 -U postgres

CREATE USER sonar WITH ENCRYPTED PASSWORD 'my_strong_password';
CREATE DATABASE sonarqube OWNER sonar;
GRANT ALL PRIVILEGES ON DATABASE sonarqube TO sonar;
\q

# 5. Install SonarQube

sudo apt-get install zip -y
cd /opt
sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.7.1.62043.zip
sudo unzip sonarqube-9.7.1.62043.zip
sudo mv sonarqube-9.7.1.62043 sonarqube
rm -rf sonarqube-9.7.1.62043.zip

# 6. Configure SonarQube User and Permissions

sudo groupadd sonar
sudo useradd -d /opt/sonarqube -g sonar sonar
sudo chown sonar:sonar /opt/sonarqube -R

Edit sonar.properties:

sudo nano /opt/sonarqube/conf/sonar.properties

Add:
sonar.jdbc.username=sonar
sonar.jdbc.password=my_strong_password
sonar.jdbc.url=jdbc:postgresql://localhost:5433/sonarqube

# 7. System Configuration
Create SonarQube service:

sudo nano /etc/systemd/system/sonar.service

Add:

[Unit]
Description=SonarQube service
After=syslog.target network.target

[Service]
Type=forking
ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
User=sonar
Group=sonar
Restart=always
LimitNOFILE=65536
LimitNPROC=4096

[Install]
WantedBy=multi-user.target

Configure system limits:

sudo nano /etc/sysctl.conf

Add:

vm.max_map_count=262144
fs.file-max=65536

Configure user limits:

sudo nano /etc/security/limits.conf

Add:

sonar soft nofile 65536
sonar hard nofile 65536
sonar soft nproc 4096
sonar hard nproc 4096

# 8. Start SonarQube

sudo sysctl -p
sudo systemctl daemon-reload
sudo systemctl start sonar
sudo systemctl enable sonar

# 9. Access SonarQube
- Wait 5 minutes
- Access: http://your-server:9000
- Login: admin/admin





================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/1-ado-tools.sh
================================================
sudo apt update && apt install -y unzip jq net-tools
apt install openjdk-17-jdk -y
apt install maven -y && curl https://get.docker.com | bash
usermod -a -G docker adminsai

# aws cli install
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# azurecli ubuntu install
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# terraform.io and packer.io copy the link and install in /usr/local/bin

cd /usr/local/bin
wget https://releases.hashicorp.com/terraform/1.10.3/terraform_1.10.3_linux_amd64.zip
unzip terraform_1.10.3_linux_amd64.zip

# packer.io
wget https://releases.hashicorp.com/packer/1.11.2/packer_1.11.2_linux_amd64.zip
unzip packer_1.11.2_linux_amd64.zip

# docs.ansible.com - select Ubuntu and follow the install steps accordingly
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible

cd /etc/ansible
cp ansible.cfg ansible.cfg_backup
ansible-config init --disabled >ansible.cfg
nano ansible.cfg

# In nano: Ctrl+W to search, then set: host_key_checking = False

# Install trivy https://github.com/aquasecurity/trivy/releases/download/v0.41.0/trivy_0.41.0_Linux-64bit.deb

cd /usr/local/bin
wget https://github.com/aquasecurity/trivy/releases/download/v0.41.0/trivy_0.41.0_Linux-64bit.deb
dpkg -i trivy_0.41.0_Linux-64bit.deb
trivy --version

# Reboot the system so the group membership and configuration changes take effect.




================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/1-pipeline.yml
================================================
trigger:
  - development
  - uat
  - production

pool:
  name: LinuxAgentPool
  demands:
    - Java -equals Yes
    - Terraform -equals Yes
    - Agent.Name -equals ProdADO
variables:
  global_version: "1.0.0"
  global_email: "pinapathruni.saikiran@gmail.com"
  # azure_dev_sub: "1e9d13b0-73fc-43eb-b04e-4b4f5a5ea96f"
  isDev: $[eq(variables['Build.SourceBranch'], 'refs/heads/development')]
  isProd: $[eq(variables['Build.SourceBranch'], 'refs/heads/production')]

steps:
  - script: docker version && packer version && terraform version && aws --version && java -version && mvn --version
    displayName: "Testing A Newly Created Agent and Tools"


================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/2-pipeline.yml
================================================
trigger:
  branches:
    include:
      - development
      - uat
      - production
    exclude: ["master", "feature*", "README.md"]


================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/README.md
================================================
![escape](https://github.com/user-attachments/assets/63a9188f-edea-4fc4-9f94-c28b46c5bb37)

# Day28 AzureDevOps_Part_1

## CI-CD-CD (Continuous Integration, Continuous Delivery, Continuous Deployment)

This project focuses on setting up a CI/CD pipeline in Azure DevOps to automate the processes of code integration, delivery, and deployment. The pipeline ensures secure, efficient, and seamless transitions from development to production.

---

### **Continuous Integration**
1. **Code Readiness:**
   - Code is committed and merged into the repository.
   - Static Application Security Testing (SAST) is performed to identify vulnerabilities.

2. **Build:**
   - Uses Maven to generate a JAR file.
   - Docker is employed to create an image using a `Dockerfile`.

3. **Artifacts Publishing:**
   - Built artifacts are stored for further stages in the CI/CD pipeline.

**Example Release Strategy:**
- A versioning system is employed:
  - Stable Version: `23.0.0` (production-ready).
  - Release Candidates: `23.0.0-RC1`, `23.0.0-RC2`, etc., for testing.
  - Hotfix Versions: `23.0.0.1` for bug fixes post-release.
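
The versioning scheme above maps naturally onto annotated git tags; a minimal sketch (the tag messages are illustrative):

```bash
git tag -a 23.0.0-RC1 -m "Release candidate 1 for 23.0.0"  # cut for testing
git tag -a 23.0.0     -m "Stable production release"       # promote after sign-off
git tag -a 23.0.0.1   -m "Hotfix on top of 23.0.0"         # post-release bug fix
git push origin --tags                                     # publish the tags
```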

---

### **Continuous Delivery**
- Automates deployment to development and staging environments after successful integration testing.
- Focuses on delivering artifacts to lower environments for further testing.

---

### **Continuous Deployment**
- Automates deployment to production after passing all previous stages.
- Many organizations stop short of automating production deployment, keeping a manual approval gate as the final check.

---

### **Branching Strategy**
1. **Main/Master Branch:**
   - Represents production-ready code.

2. **Development Branch:**
   - Feature branches are created for changes and merged back into development after review.

3. **Staging/Functional Testing:**
   - Tracks and documents manual/automated test results in an organized manner.
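
The branching strategy above can be sketched with plain git commands (the feature-branch name is illustrative):

```bash
git checkout development                # start from the development branch
git pull origin development             # make sure it is up to date
git checkout -b feature/login-page      # create the feature branch
# ...edit, git add, git commit...
git push -u origin feature/login-page   # publish it, then open a PR into development
```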

---

### **CI/CD Tools**
Common tools for CI/CD pipelines include:
- Azure DevOps (primary focus)
- Jenkins
- GitLab
- GitHub Actions
- GoCD
- TravisCI
- CircleCI

---

## **Setting up Azure DevOps Pipeline**

### Task Overview:
1. **Create a Pipeline Agent:**
   - Use a self-hosted agent by creating a virtual machine (VM) with the necessary tools installed.
   - VM Configuration:
     - OS: Ubuntu 20.04
     - Specs: 2 CPUs, 8GB RAM
     - Disk: Standard SSD

2. **Configure the Agent:**
   - Install required tools (e.g., Terraform, Packer).
   - Generate a Personal Access Token (PAT) for authentication.
   - Create an agent pool and register the agent in Azure DevOps.

3. **Pipeline Creation:**
   - Create a pipeline for the repository in Azure DevOps.
   - Use a `trigger` to specify branches for automatic execution.

4. **Clone Repository Locally:**
   - Use Git commands to clone the repository and manage changes.

---

### Step-by-Step Instructions:

1. **Create VM:**
   - Set up a virtual machine in Azure with the specified configuration.
   - Configure networking to allow necessary inbound and outbound rules.

2. **Install Required Tools:**
   - Access the VM via SSH (e.g., PuTTY).
   - Install dependencies and configure tools as the admin user.

3. **Generate PAT:**
   - Create a Personal Access Token in Azure DevOps with full access and save it securely.

4. **Setup Agent Pool:**
   - Create and configure an agent pool in Azure DevOps.
   - Register the VM as an agent using provided setup scripts.
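   The registration can be sketched as follows; the agent version, organization URL, and PAT are placeholders, and the `config.sh`/`svc.sh` flags assume the standard Azure Pipelines Linux agent package:

   ```bash
   AGENT_VERSION="3.230.0"                             # illustrative; check the agent pool page
   ORG_URL="https://dev.azure.com/your-organization"   # placeholder
   PAT="xxxx"                                          # your Personal Access Token (keep secret)

   mkdir -p ~/myagent && cd ~/myagent
   wget "https://vstsagentpackage.azureedge.net/agent/${AGENT_VERSION}/vsts-agent-linux-x64-${AGENT_VERSION}.tar.gz"
   tar zxf "vsts-agent-linux-x64-${AGENT_VERSION}.tar.gz"

   ./config.sh --unattended \
     --url "$ORG_URL" \
     --auth pat --token "$PAT" \
     --pool LinuxAgentPool \
     --agent ProdADO \
     --acceptTeeEula

   sudo ./svc.sh install && sudo ./svc.sh start        # run the agent as a service
   ```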

5. **Pipeline Creation:**
   - Use Azure Pipelines to create a YAML-based pipeline.
   - Example configuration:
     ```yaml
     trigger:
       branches:
         include:
           - master
     pool:
       name: LinuxAgentPool
     steps:
       - script: echo "Hello, Azure DevOps!"
     ```

6. **Test and Modify Pipeline:**
   - Push changes to trigger pipeline execution.
   - Use Visual Studio Code for editing pipeline configurations.

7. **Add Variables:**
   - Add SonarQube credentials or other required variables in the Azure DevOps pipeline UI.

8. **Service Connections:**
   - Connect the Azure DevOps pipeline to external tools like SonarQube for analysis.

---

## **Advanced Features**
- Conditional expressions for environment-specific pipelines.
- Integration with AWS instances or other external environments.
- Dynamic agent capabilities for task-specific pipelines.
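
For example, the `isDev`/`isProd` variables defined in `1-pipeline.yml` can drive environment-specific steps through `condition` expressions; a sketch (the step contents are illustrative):

```yaml
steps:
  - script: echo "Deploying to the dev environment"
    displayName: "Dev-Only Step"
    condition: and(succeeded(), eq(variables.isDev, 'true'))

  - script: echo "Deploying to production"
    displayName: "Prod-Only Step"
    condition: and(succeeded(), eq(variables.isProd, 'true'))
```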

---

### **Additional Resources**
- [Azure DevOps Documentation](https://learn.microsoft.com/en-us/azure/devops/)
- [SonarQube Documentation](https://docs.sonarqube.org/)
- [GitHub for Version Control](https://github.com/)

---

**Contributors:**
- Admin Kiran (Contact: `adminkiran`)

**License:**
- This project is licensed under the MIT License. See the LICENSE file for details.


================================================
FILE: Day 29 AzureDevOps-Part-2/README.md
================================================
# PLEASE COPY THE POM.XML AND THE PIPELINE SCRIPT FIRST, THEN DO THE PRACTICALS. EVERYTHING ELSE STAYS THE SAME.


# Prod-SpringBoot-Pet-App

This repository contains the production-ready Spring Boot application for the `Prod-ADO` instance. Follow the steps below to set up and run the CI/CD pipeline using Azure DevOps (ADO).

## Prerequisites
- AWS and Azure instances must be up and running.
- Proper IP addresses should be updated in Route 53.

## Steps to Set Up the Pipeline

### Stage 1: Initial Setup
1. **Start the agents** on AWS and Azure.
2. **Update the IPs** in Route 53.
3. **Clone the repository** and check the available branches.
4. **Add the SonarQube stage** and build the pipeline accordingly.
5. **Modify the `pom.xml` file** at lines 13 & 16:
   ```xml
   <artifactId>ado-spring-boot-app-dev</artifactId>
   ```

### Stage 2: Connecting the Pipeline to the EC2 Instance
1. **Connect the pipeline to the EC2 instance** where SonarQube and Maven are installed using service connections:
   - Navigate to **Project Settings > Service Connections**.
   - Create a new service connection for the EC2 instance.
2. **Add the token** in the pipeline and push the code to the development branch.
3. **Run the pipeline** and push it to the development environment.
4. If the Maven build fails, skip tests by adding the following line:
   ```yaml
   options: '-DskipTests'
   ```
   Add it above the `displayName` in your YAML file.
5. Push the changes again.
6. If you encounter issues with `sonar.branch.name`, set the development branch as the default branch.
7. Once the job completes, check the results on SonarQube.
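
In context, the `options` field sits on the Maven task; a sketch assuming the standard `Maven@3` task is in use:

```yaml
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'package'
    options: '-DskipTests'   # skip unit tests when they block the build
  displayName: 'Maven Build (Tests Skipped)'
```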

### Stage 3: Building with Java and Copying Artifacts to JFrog
1. **Build the application** using Maven and copy the artifact to JFrog.
2. Ensure `settings.yaml` is securely managed:
   - Go to **Libraries > Add Secure Files**.
   - Browse and add the secure file.
3. **Create the necessary directories** on the Azure agent:
   ```bash
   sudo mkdir /artifacts
   sudo chown adminsai:adminsai /artifacts
   ```
   This folder will store the copied artifact.
4. Save and push the changes.
5. If errors occur during the Maven build, log in to the server and debug using:
   ```bash
   grep -i "failure" *.txt
   ```
   Example failure:
   ```
   org.springframework.samples.petclinic.system.CrashControllerIntegrationTests.txt
   ```
   Review and fix the `CrashControllerIntegrationTests` file accordingly.

### Stage 4: Copying Artifacts to Azure Blob Storage
1. **Create a storage account** in Azure:
   - Name: `artifacts`
   - Redundancy: Locally Redundant Storage (LRS)
2. **Create a container** named `artifacts`.
3. **Set up a service principal**:
   - Navigate to **Microsoft Entra ID > App Registration**.
   - Create a new service principal.
   - In **Project Settings > Service Connections**, create a new Azure Resource Manager connection manually.
   - Provide the following details:
     - Tenant ID
     - Client ID (Service Principal ID)
     - Subscription ID
     - Client Secret (Create a new secret under Certificates & Secrets).
4. **Create a new pipeline variable**:
   - Name: `STORAGE_ACCOUNT_KEY`
   - Secret: Yes
   - Value: Copy the access key from the storage account.
5. Push the changes and run the pipeline.
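
For debugging, the blob upload the pipeline performs can be reproduced manually with the Azure CLI; a sketch using the storage account and container names from the steps above (the blob and file names are illustrative, and `STORAGE_ACCOUNT_KEY` holds the access key):

```bash
az storage blob upload \
  --account-name artifacts \
  --container-name artifacts \
  --name "app-$(date +%Y%m%d).jar" \
  --file /artifacts/app.jar \
  --account-key "$STORAGE_ACCOUNT_KEY"

# List the container to confirm the upload landed.
az storage blob list \
  --account-name artifacts \
  --container-name artifacts \
  --account-key "$STORAGE_ACCOUNT_KEY" \
  --output table
```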

### Stage 5: Adding an S3 Bucket
1. **Create an S3 bucket** with the name specified in the YAML file.
2. **Grant S3 access**:
   - Navigate to **IAM > Users** and grant S3 full access.
3. **Create a new AWS service connection** in ADO:
   - Use the access key and secret key.
   - Connection name: `saikiransecops-s3`
4. Push the changes and verify the artifacts in the S3 bucket.
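
The S3 copy can likewise be checked by hand with the AWS CLI, assuming the access keys from the service connection are configured locally (the bucket and key names here are illustrative; use the bucket named in the YAML file):

```bash
aws s3 cp /artifacts/app.jar s3://my-artifacts-bucket/builds/app.jar
aws s3 ls s3://my-artifacts-bucket/builds/   # confirm the artifact is present
```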

### Stage 6: Building a Docker Image and Scanning with Trivy
1. **Create a template folder** in VSCode:
   ```bash
   mkdir template
   cd template
   touch junit.tpl
   ```
   Paste the required content into `junit.tpl`.
2. **Create a Dockerfile** in VSCode and paste the necessary code.
3. Push the changes.
4. Test the pipeline step-by-step to ensure correctness.
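
Locally, the image build and Trivy scan the pipeline performs can be sketched like this (the image tag is illustrative; `@template/junit.tpl` points at the template created above):

```bash
docker build -t petclinic:dev .
trivy image \
  --severity HIGH,CRITICAL \
  --format template --template "@template/junit.tpl" \
  -o trivy-report.xml \
  petclinic:dev
```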

## Final Notes
- The pipeline may take several iterations to get right. Test and validate each step before proceeding to the next.
- Use secure methods to manage sensitive information such as credentials and keys.

## Troubleshooting
- For Maven build failures, use the following command:
  ```bash
  grep -i "failure" *.txt
  ```
- If issues are found in `CrashControllerIntegrationTests`, review the file and make the necessary changes without altering unrelated parts.
- **SonarQube Upgrade Steps**:
  1. **Stop SonarQube**:
     ```bash
     sudo systemctl stop sonar
     ```
  2. **Download and install a newer version (10.3)**:
     ```bash
     cd /opt
     sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-10.3.0.82913.zip
     sudo unzip sonarqube-10.3.0.82913.zip
     sudo rm -rf sonarqube
     sudo mv sonarqube-10.3.0.82913 sonarqube
     ```
  3. **Fix permissions**:
     ```bash
     sudo chown -R sonar:sonar /opt/sonarqube
     sudo chmod -R 755 /opt/sonarqube
     ```
  4. **Update `sonar.properties` to configure JDK 17 module path**:
     ```bash
     sudo nano /opt/sonarqube/conf/sonar.properties
     ```
     Add the following line:
     ```properties
     sonar.web.javaAdditionalOpts=--add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED
     ```
  5. **Restart SonarQube**:
     ```bash
     sudo systemctl restart sonar
     ```
  This newer version has better compatibility with Java 17. Let the DevOps team know if further errors occur.
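
After the restart, SonarQube's web API offers a quick health check; a sketch assuming the default port 9000:

```bash
# Should report "status":"UP" once startup completes.
curl -s http://localhost:9000/api/system/status
# Confirm the upgrade took effect:
curl -s http://localhost:9000/api/server/version
```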

## Acknowledgments
Special thanks to the team for their support in setting up and validating the pipeline.

---
For further assistance, please contact the DevOps team.



---




================================================
FILE: Day 29 AzureDevOps-Part-2/azure-pipelines.yml
================================================
trigger:
  - development
  - uat
  - production

pool:
  name: LinuxAgentPool
  demands:
    - JDK -equals 17
    - Terraform -equals Yes
    - Agent.Name -equals ProdADO

variables:
  global_version: "1.0.0"
  global_email: "saikiran@gmail.com"
  # azure_dev_sub: "9ce91e05-4b9e-4a42-95c1-4385c54920c6"
  # azure_prod_sub: "298f2c19-014b-4195-b821-e3
  },
  {
    "path": "Day 28 SAST-AzureDevOps-Part-1/2-pipeline.yml",
    "chars": 141,
    "preview": "trigger:\r\n  branches:\r\n    include:\r\n      - development\r\n      - uat\r\n      - production\r\n    exclude: [\"master\", \"feat"
  },
  {
    "path": "Day 28 SAST-AzureDevOps-Part-1/README.md",
    "chars": 4670,
    "preview": "![escape](https://github.com/user-attachments/assets/63a9188f-edea-4fc4-9f94-c28b46c5bb37)\n\n# Day28 AzureDevOps_Part_1\n\n"
  },
  {
    "path": "Day 29 AzureDevOps-Part-2/README.md",
    "chars": 5795,
    "preview": "# PLEASE COPY THE POM.XML AND PIPELINE SCRIPT FIRST AND DO THE PRACTICALS. REST ALL SAME. \n\n\n# Prod-SpringBoot-Pet-App\n\n"
  },
  {
    "path": "Day 29 AzureDevOps-Part-2/azure-pipelines.yml",
    "chars": 7970,
    "preview": "trigger:\n  - development\n  - uat\n  - production\n\npool:\n  name: LinuxAgentPool\n  demands:\n    - JDK -equals 17\n    - Terr"
  },
  {
    "path": "Day 29 AzureDevOps-Part-2/pom.xml",
    "chars": 16672,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n  xmlns:xsi=\"http://www.w3.org"
  },
  {
    "path": "Day 30 AzureDevOps-Part-3/README.md",
    "chars": 4576,
    "preview": "# DevSecOps Pipeline Tutorial\n\n![Day 02 (1)](https://github.com/user-attachments/assets/ae4bd8bb-3988-45c9-887d-cb14531c"
  },
  {
    "path": "Day 30 AzureDevOps-Part-3/azure-pipelines.yml",
    "chars": 15337,
    "preview": "trigger:\n  - development\n  - uat\n  - production\n\npool:\n  name: ProdAgentPool\n  demands:\n    - JDK -equals 17\n    - Terra"
  },
  {
    "path": "Day 30 AzureDevOps-Part-3/pom.xml",
    "chars": 17151,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\r\n  xmlns:xsi=\"http://www.w3.o"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/.gitignore",
    "chars": 63,
    "preview": "access.auto.tfvars\nbackend.json\npacker-vars.json\nLaptopKey.pem\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/1-main.tf",
    "chars": 2074,
    "preview": "provider \"aws\" {\n    access_key = \"${var.aws_access_key}\"\n    secret_key = \"${var.aws_secret_key}\"\n    region = \"${var.a"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/2-ec2.tf",
    "chars": 676,
    "preview": "data \"aws_ami\" \"my_ami\" {\n     most_recent      = true\n     name_regex       = \"^Saikiran\"\n     owners           = [\"211"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/3-alb.tf",
    "chars": 1118,
    "preview": "resource \"aws_lb\" \"alb\" {\n  name               = \"app-nlb\"\n  internal           = false\n  load_balancer_type = \"applicat"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/4-alb-listener.tf",
    "chars": 1144,
    "preview": "resource \"aws_lb_listener\" \"alb-https\" {\n  load_balancer_arn = aws_lb.alb.arn\n  port              = \"443\"\n  protocol    "
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/5-route53.tf",
    "chars": 384,
    "preview": "data \"aws_route53_zone\" \"selected\" {\n  name = \"cloudvishwakarma.in\"\n}\n\nresource \"aws_route53_record\" \"nlb\" {\n  zone_id ="
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/README.md",
    "chars": 3491,
    "preview": "# Managing Infrastructure Pipelines - Session Notes\n\n![Azure Devops](https://github.com/user-attachments/assets/a9b32e1f"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/azure-pipelines.yml",
    "chars": 7243,
    "preview": "trigger:\n  branches:\n    include:\n      - master\n    exclude:\n      - releases/old*\n      - feature/*-working\n# resource"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/details.tpl",
    "chars": 601,
    "preview": "[docker_servers]\n${master01} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem\n${master02} an"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/docker-swarm.yml",
    "chars": 4067,
    "preview": "---\n- name: Install Docker and Configure Docker Swarm\n  hosts: docker_servers\n  become: yes\n  become_user: root\n  tasks:"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/docker.service",
    "chars": 1860,
    "preview": "[Unit]\nDescription=Docker Application Container Engine\nDocumentation=https://docs.docker.com\nBindsTo=containerd.service\n"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/localfile.tf",
    "chars": 507,
    "preview": "resource \"local_file\" \"foo\" {\n  content = templatefile(\"details.tpl\",\n    {\n\n      master01 = aws_instance.web-1.0.publi"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/packer.json",
    "chars": 2052,
    "preview": "{\n  \"_comment\": \"Create a AWS AMI ith AMZ Linux 2018 with Java and Tomcat\",\n  \"variables\": {\n    \"aws_access_key\": \"\",\n "
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/prod.auto.tfvars",
    "chars": 475,
    "preview": "aws_region = \"us-east-1\"\nvpc_cidr = \"10.1.0.0/16\"\npublic_subnet1_cidr = \"10.1.1.0/24\"\npublic_subnet2_cidr = \"10.1.2.0/24"
  },
  {
    "path": "Day 31 AzureDevOps-Part-4/variables.tf",
    "chars": 630,
    "preview": "variable \"aws_access_key\" {}\nvariable \"aws_secret_key\" {}\nvariable \"aws_region\" {}\nvariable \"vpc_cidr\" {}\nvariable \"vpc_"
  },
  {
    "path": "Day 32 AzureDevOps-Part-5/README.md",
    "chars": 2567,
    "preview": "# Azure DevOps Project Management Repository\n\nThis repository demonstrates project management practices and workflows in"
  },
  {
    "path": "Day 33 Jenkins-Part-1/Jenkinsfile",
    "chars": 14762,
    "preview": "// Declarative Pipeline\r\ndef VERSION = '1.0.0'\r\n\r\npipeline {\r\n    agent none\r\n    // tools {\r\n    //     maven 'apache-m"
  },
  {
    "path": "Day 33 Jenkins-Part-1/README.md",
    "chars": 5314,
    "preview": "# Day 36 Jenkins-Part-1\n<img width=\"1536\" alt=\"jenkins\" src=\"https://github.com/user-attachments/assets/4519f27d-537b-45"
  },
  {
    "path": "Day 34 Jenkins-Part-2/0-jenkins_install.sh",
    "chars": 4919,
    "preview": "sudo apt update && apt install -y unzip jq net-tools\napt install openjdk-17-jdk -y\napt install maven -y && curl https://"
  },
  {
    "path": "Day 34 Jenkins-Part-2/README.md",
    "chars": 3738,
    "preview": "# Day 37 Jenkins-Part-2\n\n![diagram-export-1-29-2025-8_53_17-PM](https://github.com/user-attachments/assets/123cd71f-a1ff"
  },
  {
    "path": "Day 35 Jenkins-Part-3/Jenkinsfile",
    "chars": 7022,
    "preview": "pipeline {\r\n    agent none\r\n    environment {\r\n        PROJECT = \"WELCOME TO Jenkins-Terraform Modules Pipeline\"\r\n      "
  },
  {
    "path": "Day 35 Jenkins-Part-3/README.md",
    "chars": 1808,
    "preview": "<img width=\"1536\" alt=\"Jenkins Pipeline\" src=\"https://github.com/user-attachments/assets/78c38280-4096-4b14-9891-f924ef8"
  },
  {
    "path": "Day 36 Jenkins-Part-4/README.md",
    "chars": 24,
    "preview": "# Day 39 Jenkins-Part-4\n"
  },
  {
    "path": "README.md",
    "chars": 23,
    "preview": "## MASTERING DEVSECOPS\n"
  }
]
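The listing above is a plain JSON array of objects with `path`, `chars`, and `preview` fields. A minimal Python sketch of how such a manifest can be loaded and summarized (the two-entry excerpt embedded below is taken from the listing; in practice you would parse the downloaded file instead):

```python
import json

# A two-entry excerpt of the manifest above, embedded for a self-contained demo;
# real usage would read the downloaded .txt/JSON instead.
manifest = """[
  {"path": "Day 36 Jenkins-Part-4/README.md", "chars": 24, "preview": "# Day 39 Jenkins-Part-4\\n"},
  {"path": "README.md", "chars": 23, "preview": "## MASTERING DEVSECOPS\\n"}
]"""

entries = json.loads(manifest)

# Summarize: file count, total size in characters, and the largest file.
total_chars = sum(e["chars"] for e in entries)
largest = max(entries, key=lambda e: e["chars"])
print(f"{len(entries)} files, {total_chars} chars, largest: {largest['path']}")
# → 2 files, 47 chars, largest: Day 36 Jenkins-Part-4/README.md
```

The `chars` field counts characters rather than bytes, so sums computed this way will approximate, not exactly match, the 307.5 KB figure reported for the full extraction.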

About this extraction

This page contains the full source code of the saikiranpi/Mastering-DevSecOps GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 114 files (307.5 KB), approximately 82.9k tokens. The output can be copied to the clipboard or downloaded as a .txt file for use with any AI tool that accepts text input, such as Claude, ChatGPT, Cursor, or Windsurf.

Extracted by GitExtract, a free GitHub repo-to-text converter for AI, built by Nikandr Surkov.