Repository: saikiranpi/Mastering-DevSecOps Branch: Master Commit: 985e76aae614 Files: 114 Total size: 307.5 KB Directory structure: gitextract_dld9c9qw/ ├── Day 01 Introduction-BaseLabCreation - Variables-Script-grep-awk-cut/ │ └── README.md ├── Day 02 Arguments-PassingSpecialparams/ │ └── README.md ├── Day 03 OutputRedirection-For-While/ │ └── README.md ├── Day 04 UserAutomation/ │ ├── README.md │ └── script.sh ├── Day 05 RegEx-Break-Continue-CustomExitCodes/ │ ├── README.md │ ├── break.sh │ ├── continue.sh │ └── exit-code.sh ├── Day 06 Functions/ │ ├── README.md │ ├── docker.sh │ ├── ebs.sh │ ├── log-rotation.sh │ └── multi-function.sh ├── Day 07 Git-1/ │ └── README.md ├── Day 08 Git-2/ │ └── README.md ├── Day 09 Git-3/ │ └── README.md ├── Day 10 AWS-Terraform-Part-1/ │ └── README.md ├── Day 11 AWS-Terraform-Part-2/ │ └── README.md ├── Day 12 AWS-Terraform-Part-3/ │ └── README.md ├── Day 13 AWS-Terraform-Part-4/ │ └── README.md ├── Day 14 AWS-Terraform-Functions-1/ │ ├── README.md │ ├── RTA.tf │ ├── locals.tf │ ├── main.tf │ ├── sg.tf │ ├── subnet.tf │ ├── terraform.tfvars │ └── variables.tf ├── Day 15 AWS-Terraform-Functions-2/ │ ├── README.md │ ├── private-ec2.tf │ ├── public-ec2.tf │ ├── terraform.tfvars │ ├── txt.tf │ ├── user-data.sh │ └── variable.sh ├── Day 16 AWS-Terraform-Part-6 Modules-Part-1/ │ └── README.md ├── Day 17 AWS-Terraform-Full-Course/ │ └── README.md ├── Day 18 AWS-Terraform-Part-8 TerraformCloud/ │ └── README.md ├── Day 19 AWS-Terraform-Part-9 GitLab-Pipeline/ │ └── README.md ├── Day 20 AWS-Packer/ │ └── README.md ├── Day 21 AWS-Ansible-Part-1/ │ ├── .gitignore │ ├── 1.provider.tf │ ├── 10.locals.tf │ ├── 11.localfile_ansible_inventory.tf │ ├── 12.localfile_ansible_inventory_yaml.tf │ ├── 13.null-local-exec.tf │ ├── 14.outputs.tf │ ├── 15.terraform.tfvars │ ├── 16.variables.tf │ ├── 2.vpc.tf │ ├── 3.public-subnets.tf │ ├── 4.private-subnets.tf │ ├── 5.public-routing.tf │ ├── 6.private-routing.tf │ ├── 7.ec2.tf │ ├── 8.sg.tf │ ├── 9.vpc-peering.tf │ ├── Playbooks │ ├── README.md │ ├── publicservers.tpl │ └── publicservers_yaml.tpl ├── Day 22 AWS-Ansible-Part-2/ │ └── README.md ├── Day 23 AWS-Ansible-Part-3/ │ └── README.md ├── Day 24 Ansible-Part-4 DynamicInventory_AWX/ │ └── README.md ├── Day 25 HashicorpVault AWSIntegration/ │ ├── HashiCorp_Vault/ │ │ ├── 0-steps.sh │ │ ├── 1-config.hcl │ │ ├── 2-config-kms.hcl │ │ └── 2-vault.service │ ├── README.md │ └── terraform-vault/ │ ├── 1-provider.tf │ ├── 2-random-passwords.tf │ ├── 3-hashi-vault-passwords.tf │ ├── policy.yaml │ ├── user.tf │ └── variables.tf ├── Day 26 Docker-Full-Course/ │ └── README.md ├── Day 27 Maven-JFrog-Sonarqube/ │ └── README.md ├── Day 28 SAST-AzureDevOps-Part-1/ │ ├── 0-maven.sh │ ├── 0-sonarqube.sh │ ├── 1-ado-tools.sh │ ├── 1-pipeline.yml │ ├── 2-pipeline.yml │ └── README.md ├── Day 29 AzureDevOps-Part-2/ │ ├── README.md │ ├── azure-pipelines.yml │ └── pom.xml ├── Day 30 AzureDevOps-Part-3/ │ ├── README.md │ ├── azure-pipelines.yml │ └── pom.xml ├── Day 31 AzureDevOps-Part-4/ │ ├── .gitignore │ ├── 1-main.tf │ ├── 2-ec2.tf │ ├── 3-alb.tf │ ├── 4-alb-listener.tf │ ├── 5-route53.tf │ ├── README.md │ ├── azure-pipelines.yml │ ├── details.tpl │ ├── docker-swarm.yml │ ├── docker.service │ ├── localfile.tf │ ├── packer.json │ ├── prod.auto.tfvars │ └── variables.tf ├── Day 32 AzureDevOps-Part-5/ │ └── README.md ├── Day 33 Jenkins-Part-1/ │ ├── Jenkinsfile │ └── README.md ├── Day 34 Jenkins-Part-2/ │ ├── 0-jenkins_install.sh │ └── README.md ├── Day 35 Jenkins-Part-3/ │ ├── Jenkinsfile │ └── 
README.md
├── Day 36 Jenkins-Part-4/
│   └── README.md
└── README.md

================================================
FILE CONTENTS
================================================

================================================
FILE: Day 01 Introduction-BaseLabCreation - Variables-Script-grep-awk-cut/README.md
================================================

# Introduction-BaseLabCreation - Variables-Script-grep-awk-cut

![1](https://github.com/user-attachments/assets/bb18e257-ad41-4d32-acfe-4963bb23cb8f)

# DevSecOps Scripting Course - Day 01 & 02

## Course Overview
This course is designed to help you get started with DevSecOps by covering shell scripting, cloud infrastructure, and essential security tools. You'll work through real-world tasks, using various tools and services to build a secure and functional DevSecOps environment.

---

## Prerequisites

### Cloud Platforms:
- **AWS**, **Azure**, or **GCP** – choose any one.

### DevSecOps Tools:
- **SonarQube** – for code quality and security analysis.
- **HashiCorp Vault** – for managing secrets and passwords.
- **Trivy** – for container image scanning.
- **Ansible Vault** – for secure secret management.
- **CISO guidance** – for cybersecurity insights.

### Tools Required for Scripting:
- **jq** – for parsing JSON data.
- **net-tools** – network utilities such as `ifconfig` and `netstat`.
- **unzip** – to extract `.zip` files.

---

## Task: Create a Base Lab Environment

### Objective:
Set up a VPC, create a new key pair, deploy an instance, and access it using PuTTY.

### Steps:
1. **Create VPC and Instance**:
   - Create a new VPC with a single EC2 instance.
   - Generate a new key pair (PEM format).
2. **Generate PPK File for PuTTY**:
   - Open PuTTYgen and load the PEM file.
   - Generate and save a new private key (PPK format).
3. **Login via PuTTY**:
   - Open PuTTY and connect to `ubuntu@`.
   - Customize window settings (bold text, window size, colors).
   - Under `Connection > SSH > Auth`, browse and load your PPK file.
   - Save the session as "SecOps Session" for future use.

> **Note:** In production, avoid running `sudo su -`, as you may not have root access. Running root commands could give access to sensitive operations, such as deleting logs.

4. **Install Required Tools**:
```bash
sudo apt install -y jq net-tools unzip
```

---

## Shell Scripting Tasks

### Task 1: Using Tmux
To manage multiple servers or sessions, split the screen into panes:
- Use `Ctrl + b`, then `"` to split into top and bottom panes.
- Use `Ctrl + b`, then `%` (`Shift + 5`) to split into side-by-side panes.
- Useful for monitoring multiple servers.

### Task 2: Print Time Repeatedly
Print the date and time every second for 10 seconds:
```bash
for i in {1..10}
do
  echo $(date)
  sleep 1
done
```

> **Note:** To print only the day, date, and time, modify the script above using `awk`:
```bash
for i in {1..10}
do
  echo $(date) | awk -F " " '{print $1, $2, $3, $4}'
  sleep 1
done
```

### Task 3: Understanding Variables in Shell Scripting
Declaring a variable is useful for values that are used repeatedly.

1. Declaring a variable and using it:
```bash
RG='Saikiran-SecOps'
echo $RG
echo "${RG}"
```
2.
Using variables with single and double quotes:
```bash
X=10
RG='Saikiran-SecOps-$X'   # Single quotes won't expand the variable
echo $RG                  # Outputs: Saikiran-SecOps-$X
RG="Saikiran-SecOps-$X"   # Double quotes will expand the variable
echo $RG                  # Outputs: Saikiran-SecOps-10
```

---

## Task 4: AWS CLI and Data Manipulation

### Install AWS CLI:
Run the following commands:
```bash
sudo apt install awscli -y
aws configure   # Configure AWS access and secret keys.
```

### S3 Bucket Example:
1. List your S3 buckets:
```bash
aws s3 ls
```
2. Use `cut` to extract specific fields:
```bash
aws s3 ls | cut -d ' ' -f1,2,3
```
3. Use `awk` for more complex field manipulation:
```bash
aws s3 ls | awk -F " " '{print $3,$2,$1}'
```
4. Use `grep` to find specific patterns (extract the bucket-name field first, since each line starts with a date):
```bash
aws s3 ls | cut -d ' ' -f 3 | grep -E '^www-'
```

---

## Shell Script Example: `get_bucket.sh`
```bash
#!/bin/bash
aws s3 ls | cut -d ' ' -f 3 | grep -E '^www-'
echo "Hello Saikiran, welcome to DevSecOps!"
```

### Execution:
```bash
chmod +x get_bucket.sh
./get_bucket.sh
```

> **Note:** Do **not** use `chmod 777`, as it grants full permissions to everyone, which is a security risk. Use `chmod 700` instead to restrict access to the owner.

---

## Debugging Scripts
To enable debugging in a script:
```bash
#!/bin/bash
set -x   # Enable debugging
```
This prints each command before executing it, which helps you debug.

---

## Conclusion
This README covers Day 01 of DevSecOps, focusing on basic shell scripting, AWS tools, and security best practices. You should now be familiar with setting up a basic lab, working with shell scripts, and using the AWS CLI for DevSecOps tasks.

================================================
FILE: Day 02 Arguments-PassingSpecialparams/README.md
================================================

# Day 02 Arguments-PassingSpecialparams

![02](https://github.com/user-attachments/assets/13165920-47f8-4843-b6d4-00af9ca7ac5f)

Welcome to the **Arguments-PassingSpecialparams** repository! This project demonstrates parameter passing, special shell parameters, and output redirection in Bash scripting, specifically in the context of AWS VPC management.

## Table of Contents
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Getting Started](#getting-started)
- [Scripts Overview](#scripts-overview)
  - [get_vpc.sh](#get_vpcsh)
  - [script.sh](#scriptsh)
- [Usage](#usage)
  - [Running `get_vpc.sh`](#running-get_vpcsh)
  - [Running `script.sh`](#running-scriptsh)
- [Understanding Special Parameters](#understanding-special-parameters)
  - [`$?`](#-exit-code)
  - [`$@` and `$*`](#-and-)
  - [`$#`](#-number-of-arguments)
- [Error Handling and Output Redirection](#error-handling-and-output-redirection)
- [License](#license)

## Introduction
This repository contains Bash scripts designed to interact with AWS EC2 to retrieve VPC (Virtual Private Cloud) details across different regions. The scripts demonstrate:
- **Passing Parameters**: How to pass and utilize arguments in Bash scripts.
- **Special Parameters**: Utilizing `$?`, `$@`, `$*`, and `$#` to handle script behavior based on inputs and command execution status.
- **Output Redirection**: Managing script output and errors effectively.

## Prerequisites
Before using the scripts, ensure you have the following installed and configured:
- **AWS CLI**: [Installation Guide](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
- **jq**: A lightweight and flexible command-line JSON processor.
  [Installation Guide](https://stedolan.github.io/jq/download/)
- **Bash Shell**: Most Unix-based systems come with Bash pre-installed.
- **AWS Credentials**: Ensure your AWS credentials are configured with the necessary permissions to describe VPCs. [Configuration Guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html)

## Getting Started
1. **Clone the Repository**
```bash
git clone https://github.com/yourusername/Arguments-PassingSpecialparams.git
cd Arguments-PassingSpecialparams
```
2. **Make Scripts Executable**
```bash
chmod +x get_vpc.sh script.sh
```

## Scripts Overview

### `get_vpc.sh`
This script retrieves VPC IDs from a specified AWS region.

**Script Content:**
```bash
#!/bin/bash
# Check if at least one argument is provided
if [ $# -gt 0 ]; then
  REGIONS=$@
  echo "Fetching VPC IDs for regions: $REGIONS"
  for REGION in $REGIONS; do
    aws ec2 describe-vpcs --region ${REGION} | jq ".Vpcs[].VpcId" -r
  done
else
  echo "You have provided $# arguments. Please provide at least one region."
  exit 1
fi
```

### `script.sh`
This script demonstrates the use of special parameters and error handling by checking the AWS CLI version before proceeding to retrieve VPC details.

**Script Content:**
```bash
#!/bin/bash
# Suppress AWS CLI version output
aws --version > /dev/null 2>&1

# Check if the previous command was successful
if [ $? -eq 0 ]; then
  REGIONS=$@
  echo "Fetching VPC IDs for regions: $REGIONS"
  for REGION in $REGIONS; do
    aws ec2 describe-vpcs --region ${REGION} | jq ".Vpcs[].VpcId" -r
  done
else
  echo "Incorrect AWS command. Please check your AWS CLI installation."
  exit 1
fi
```

## Usage

### Running `get_vpc.sh`
Retrieve VPC IDs from one or multiple AWS regions.

**Example:**
```bash
./get_vpc.sh us-east-1 ap-south-1 us-east-2
```
**Output:**
```
vpc-0abcd1234efgh5678
vpc-1bcde2345fghij678
...
```

### Running `script.sh`
Ensure the AWS CLI is correctly installed, then retrieve VPC IDs.

**Example:**
```bash
./script.sh us-east-1 us-east-2 ap-southeast-1
```
**Output:**
```
Fetching VPC IDs for regions: us-east-1 us-east-2 ap-southeast-1
vpc-0abcd1234efgh5678
vpc-1bcde2345fghij678
...
```

**Handling Errors:**
- If the AWS CLI is not installed or is misconfigured, the script outputs:
```
Incorrect AWS command. Please check your AWS CLI installation.
```
- If no regions are provided as arguments, the script outputs:
```
You have provided 0 arguments. Please provide at least one region.
```

## Understanding Special Parameters

### `$?` – Exit Code
- Represents the exit status of the last executed command.
- `0` indicates success, while any non-zero value indicates an error.

**Example:**
```bash
ls -al
echo $?   # Outputs 0 if successful
ls nonexistentfile
echo $?   # Outputs a non-zero value indicating an error
```

### `$@` and `$*` – All Positional Parameters
- Both represent all the arguments passed to the script.
- The difference shows up when they are quoted: `"$@"` expands each argument as a separate word, while `"$*"` joins all arguments into a single word.

**Usage in Scripts:**
```bash
REGIONS=$@
# or
REGIONS=$*
```

### `$#` – Number of Arguments
- Represents the number of arguments passed to the script.

**Example:**
```bash
echo "Number of arguments: $#"
```

## Error Handling and Output Redirection

**Output Redirection:**
- **Standard Output (`stdout`)**: Default output stream.
- **Standard Error (`stderr`)**: Output stream for errors.
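
As a quick illustration (a minimal sketch, not part of the repository's scripts), the two streams can be captured independently by redirecting their file descriptors:

```bash
# List one path that exists and one that doesn't:
# the listing goes to stdout (fd 1), the error message to stderr (fd 2).
ls /etc /does-not-exist 1> /tmp/out.log 2> /tmp/err.log

cat /tmp/out.log   # contents of /etc
cat /tmp/err.log   # "ls: cannot access '/does-not-exist': No such file or directory"
```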
**Redirecting Outputs:** - Suppress standard output: ```bash aws --version > /dev/null ``` - Suppress both standard output and standard error: ```bash aws --version > /dev/null 2>&1 ``` **Using Conditional Statements:** Utilize exit codes to control script flow. ```bash aws --version > /dev/null 2>&1 if [ $? -eq 0 ]; then # Proceed with script else echo "AWS CLI not found. Exiting." exit 1 fi ``` ## License This project is licensed under the [MIT License](LICENSE). --- *Happy Scripting!* ================================================ FILE: Day 03 OutputRedirection-For-While/README.md ================================================ ![03](https://github.com/user-attachments/assets/6be236b3-3be1-4c2d-ade5-3341265b409d) # Day 03 OutputRedirection-For-While This project demonstrates **output redirection** and the use of **for** and **while** loops in Bash scripting, along with examples using **standard input**, **output**, and **error** redirections. ## Key Concepts ### Standard Streams: - **stdin**: Standard Input (File descriptor 0) - **stdout**: Standard Output (File descriptor 1) - **stderr**: Standard Error (File descriptor 2) ### Output Redirection: - `>` : Redirects the output and **overwrites** the content in the file. - `>>` : Redirects the output and **appends** it to the file. - **Tee Command**: Redirects the output to a file and **displays it on the screen** simultaneously. --- ## Script Example: `std-script.sh` This Bash script demonstrates both valid and invalid commands. We'll focus on how to redirect output. ### Script: ```bash #!/bin/bash ls -al # Valid command, prints directory listing Saikiran # Invalid command, will trigger an error df -h # Valid command, prints disk space usage Avinash # Invalid command, will trigger an error free # Valid command, prints memory usage sai # Invalid command, will trigger an error cat /etc/hostname # Valid command, prints hostname avi # Invalid command, will trigger an error ``` ### How to Execute: 1. Save the script as `std-script.sh`. 2. Run it using `bash std-script.sh`. --- ### Requirements: 1. **Print only successful commands**: ```bash bash std-script.sh 2> /dev/null ``` - Redirects any errors (stderr) to `/dev/null`, so only the output of successful commands is shown. 2. **Print only failed commands**: ```bash bash std-script.sh 1> /dev/null ``` - Redirects standard output (stdout) to `/dev/null`, so only error messages (stderr) are displayed. --- ### Overwriting and Appending Output: - To redirect both **stdout** and **stderr** to a file: ```bash bash std-script.sh > /tmp/error 2>&1 ``` - This will **overwrite** the file with both standard output and errors. - To **append** instead of overwriting: ```bash bash std-script.sh >> /tmp/error 2>&1 ``` --- ### Display and Save Output: To display output on the screen **and** save it to a file: ```bash bash std-script.sh | tee /tmp/tee1 ``` - If you want to **append** to the file instead of overwriting: ```bash bash std-script.sh 2>&1 | tee -a /tmp/tee1 ``` --- ## For Loops vs While Loops ### For Loops: Used when the number of iterations is known. For example, printing numbers from 1 to 100. #### Script: `loops.sh` ```bash #!/bin/bash for i in {1..100} do echo $i done ``` ### While Loops: Used when the number of iterations is not known and continues as long as the condition is true. 
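
A minimal counter-based sketch (illustrative only, not from the lesson files) shows the general shape — the loop body must eventually make the condition false:

```bash
#!/bin/bash
attempt=1
while [ $attempt -le 5 ]
do
  echo "Attempt $attempt"
  attempt=$((attempt + 1))   # without this, the loop would never end
done
```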
#### Example: Check if a website is working using a **while loop**:
```bash
while true
do
  curl -s https://www.google.com | grep -i google
  sleep 1
done
```
(`-s` keeps curl's progress meter out of the output.)

---

## Working with Python and Bash

### Python Example:
```python
x = 5 * 4
print(x)
```

### Bash Equivalent:
```bash
x=$(expr 5 \* 4)
echo $x
```
- In Bash, `expr` is one way to perform arithmetic; the shell's built-in arithmetic expansion, e.g. `x=$((5 * 4))`, is the modern alternative.

---

## Printing Even and Odd Numbers

### Even Numbers:
```bash
#!/bin/bash
for i in {1..100}; do
  if [ $((i % 2)) -eq 0 ]; then
    echo "$i is an even number"
  fi
done
```

### Even and Odd Numbers:
```bash
#!/bin/bash
for i in {1..100}
do
  if [ $(( i % 2 )) -ne 0 ]; then
    echo "$i is an odd number"
  else
    echo "$i is an even number"
  fi
done
```

---

## Conclusion
This project covers the basic concepts of output redirection in Linux and the usage of for and while loops, and demonstrates both valid and invalid command execution. Whether you are handling script output or automating tasks, understanding how to redirect output and loop through commands is essential for DevOps and system automation.

Feel free to explore the scripts, modify them, and experiment with different redirection methods and loop structures!

---

Happy scripting! 😊

---

This `README.md` provides an overview of the key concepts, code snippets, and practical use cases from the notes.

================================================
FILE: Day 04 UserAutomation/README.md
================================================

# Day 04 UserAutomation

![a-3d-render-of-a-dark-themed-cybersecurity-confere-TU2eVZcIRda9RcDkaObkyg-yt2DCIPgQIaI9w7_DYZnYw](https://github.com/user-attachments/assets/75314cc4-86a5-41bb-b47b-acb0d3765555)

This script automates the process of creating new users on a Linux system. It checks whether a user already exists, creates the user if not, generates a random password with a special character, and forces the user to reset the password on first login.

## Features:
1. Checks if the provided username already exists on the system.
2. If the user doesn't exist, creates the user with a randomly generated password.
3. The password includes a special character and a random number.
4. The user is forced to reset their password during their first login.
5. Supports creating multiple users in one execution.
6. Includes automated SSH configuration changes to enable password authentication.

## Prerequisites:
- You must have root or sudo privileges to run this script.
- Ensure that `passwd` and `sed` are installed on your system.

## How It Works:
1. **Check for Existing Users:** The script checks the `/etc/passwd` file to see if the provided username already exists.
2. **Create New User:** If the user does not exist, the script creates a new user with the `useradd` command and assigns a randomly generated password.
3. **Generate Random Password:** The password is created from a combination of random numbers and a randomly selected special character from a predefined set.
4. **SSH Configuration:** The script uses `sed` to modify the `/etc/ssh/sshd_config` file to enable password authentication. It also creates a backup of this file before making changes.
5. **Multiple Users Creation:** The script allows you to create multiple users by passing multiple arguments.
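
Before the full script, here is a minimal sketch of the existence check in isolation. Using `id` is a common alternative to grepping `/etc/passwd` (a suggestion for comparison; the script below uses the `/etc/passwd` approach):

```bash
#!/bin/bash
USER_NAME="$1"
# id exits with status 0 when the user exists, non-zero otherwise
if id "$USER_NAME" >/dev/null 2>&1; then
  echo "$USER_NAME already exists"
else
  echo "$USER_NAME does not exist yet"
fi
```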
## Script Example:
```bash
#!/bin/bash
if [ $# -gt 0 ]; then
  USER=$1
  echo $USER
else
  echo "Please enter a valid parameter"
fi

## ADDING USER ##
#!/bin/bash
if [ $# -gt 0 ]; then
  USER=$1
  echo $USER
  EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d ":" -f1)
  if [ "${USER}" = "${EXISTING_USER}" ]; then
    echo "The user $USER already exists on this machine. Please enter another username."
  else
    echo "Let's create a new username"
    sudo useradd -m $USER --shell /bin/bash
  fi
else
  echo "Please enter a valid parameter"
fi

## PASSWORD ##
#!/bin/bash
if [ $# -gt 0 ]; then
  USER=$1
  echo $USER
  EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d ":" -f1)
  if [ "${USER}" = "${EXISTING_USER}" ]; then
    echo "The user $USER already exists on this machine. Please enter another username."
  else
    echo "Let's create a new username"
    sudo useradd -m $USER --shell /bin/bash
    SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
    PASSWORD="IndianArmy@${RANDOM}${SPEC}"
    echo "$USER:$PASSWORD" | sudo chpasswd
    echo "The temporary password for $USER is ${PASSWORD}"
    sudo passwd -e $USER
  fi
else
  echo "Please enter a valid parameter"
fi

# sed -i "58 s/.*PasswordAuthentication.*/PasswordAuthentication yes/g" /etc/ssh/sshd_config

## MULTI-USER PASSING ##
#!/bin/bash
if [ $# -gt 0 ]; then
  for USER in $@; do
    echo $USER
    EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d ":" -f1)
    if [ "${USER}" = "${EXISTING_USER}" ]; then
      echo "The user $USER already exists on this machine. Please enter another username."
    else
      echo "Let's create a new username"
      sudo useradd -m $USER --shell /bin/bash
      SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
      PASSWORD="IndianArmy@${RANDOM}${SPEC}"
      echo "$USER:$PASSWORD" | sudo chpasswd
      echo "The temporary password for $USER is ${PASSWORD}"
      sudo passwd -e $USER
    fi
  done
else
  echo "Please enter a valid parameter"
fi

## REGEX (Regular Expressions) ##
#!/bin/bash
if [ $# -gt 0 ]; then
  for USER in $@; do
    echo $USER
    if [[ $USER =~ ^[a-zA-Z]+$ ]]; then
      EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d ':' -f1)
      if [ "${USER}" = "${EXISTING_USER}" ]; then
        echo "$USER already exists. Please create a new user."
      else
        echo "Let's create the new user $USER"
        sudo useradd -m $USER --shell /bin/bash
        SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
        PASSWORD="IndianArmy@${RANDOM}${SPEC}"
        echo "$USER:$PASSWORD" | sudo chpasswd
        echo "The temporary password for the user is ${PASSWORD}"
        sudo passwd -e $USER
      fi
    else
      echo "The username must contain only alphabets"
    fi
  done
else
  echo "Please pass an argument"
fi
```

## SSH Configuration (Optional):
To enable password authentication for newly created users, the script modifies the SSH configuration using `sed`. This is important for AWS instances, where password authentication is disabled by default.

```bash
# Backup the sshd_config file
cp /etc/ssh/sshd_config /etc/ssh/sshd_config_backup

# Modify the sshd_config file to enable password authentication
sed -i "s/.*PasswordAuthentication.*/PasswordAuthentication yes/g" /etc/ssh/sshd_config

# Restart the SSH service
sudo service sshd restart
```

## How to Run the Script:
1. Save the script as `user-automation.sh`.
2. Run the script with one or more usernames as arguments:
```bash
bash user-automation.sh username1 username2
```
Example:
```bash
bash user-automation.sh alice bob
```

## Notes:
- Ensure that password authentication is enabled on your system if you want to use password-based login for the newly created users.
- This script automatically forces the new user to reset their password on first login.

---

This README provides an overview of the script in simple terms, helping users understand what it does and how to use it.

================================================
FILE: Day 04 UserAutomation/script.sh
================================================
#!/bin/bash
if [ $# -gt 0 ]; then
  USER=$1
  echo $USER
else
  echo "Please enter a valid parameter"
fi

## ADDING USER ##
#!/bin/bash
if [ $# -gt 0 ]; then
  USER=$1
  echo $USER
  EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d ":" -f1)
  if [ "${USER}" = "${EXISTING_USER}" ]; then
    echo "The user $USER already exists on this machine. Please enter another username."
  else
    echo "Let's create a new username"
    sudo useradd -m $USER --shell /bin/bash
  fi
else
  echo "Please enter a valid parameter"
fi

## PASSWORD ##
#!/bin/bash
if [ $# -gt 0 ]; then
  USER=$1
  echo $USER
  EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d ":" -f1)
  if [ "${USER}" = "${EXISTING_USER}" ]; then
    echo "The user $USER already exists on this machine. Please enter another username."
  else
    echo "Let's create a new username"
    sudo useradd -m $USER --shell /bin/bash
    SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
    PASSWORD="IndianArmy@${RANDOM}${SPEC}"
    echo "$USER:$PASSWORD" | sudo chpasswd
    echo "The temporary password for $USER is ${PASSWORD}"
    sudo passwd -e $USER
  fi
else
  echo "Please enter a valid parameter"
fi

# sed -i "58 s/.*PasswordAuthentication.*/PasswordAuthentication yes/g" /etc/ssh/sshd_config

## MULTI-USER PASSING ##
#!/bin/bash
if [ $# -gt 0 ]; then
  for USER in $@; do
    echo $USER
    EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d ":" -f1)
    if [ "${USER}" = "${EXISTING_USER}" ]; then
      echo "The user $USER already exists on this machine. Please enter another username."
    else
      echo "Let's create a new username"
      sudo useradd -m $USER --shell /bin/bash
      SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
      PASSWORD="IndianArmy@${RANDOM}${SPEC}"
      echo "$USER:$PASSWORD" | sudo chpasswd
      echo "The temporary password for $USER is ${PASSWORD}"
      sudo passwd -e $USER
    fi
  done
else
  echo "Please enter a valid parameter"
fi

## REGEX (Regular Expressions) ##
#!/bin/bash
if [ $# -gt 0 ]; then
  for USER in $@; do
    echo $USER
    if [[ $USER =~ ^[a-zA-Z]+$ ]]; then
      EXISTING_USER=$(cat /etc/passwd | grep -i -w $USER | cut -d ':' -f1)
      if [ "${USER}" = "${EXISTING_USER}" ]; then
        echo "$USER already exists. Please create a new user."
      else
        echo "Let's create the new user $USER"
        sudo useradd -m $USER --shell /bin/bash
        SPEC=$(echo '!@#$%^&*()_' | fold -w1 | shuf | head -1)
        PASSWORD="IndianArmy@${RANDOM}${SPEC}"
        echo "$USER:$PASSWORD" | sudo chpasswd
        echo "The temporary password for the user is ${PASSWORD}"
        sudo passwd -e $USER
      fi
    else
      echo "The username must contain only alphabets"
    fi
  done
else
  echo "Please pass an argument"
fi

================================================
FILE: Day 05 RegEx-Break-Continue-CustomExitCodes/README.md
================================================

# Day 05 RegEx-Break-Continue-CustomExitCodes

![05](https://github.com/user-attachments/assets/27fd624d-bb91-46d5-b710-3b04db991e75)

## Features:
1. **Regular Expressions in Shell Scripts**
2. **Break and Continue for Iteration Control**
3. **Custom Exit Codes**
4. **Arrays in Shell Scripts**

---

## 1. **User Automation with Regex**
Regular expressions are a powerful tool in shell scripts for tasks like input validation.
In this repository, we demonstrate how to use regular expressions to enforce patterns in username creation — for example, requiring usernames of 3 lowercase letters followed by 3 digits.

**Example:**
```bash
if [[ $USER =~ ^[a-z]{3}[0-9]{3}$ ]] ; then
  echo "Username is valid"
else
  echo "Username is invalid"
fi
```

## 2. **Common Regex Patterns:**
- `\d` - Matches any digit.
- `\D` - Matches any non-digit character.
- `\s` - Matches any whitespace.
- `\W` - Matches any non-word character (like punctuation).

**Example:** To find a phone number pattern like `123-456-7890`, you can use:
```regex
\d{3}-\d{3}-\d{4}
```

---

## 3. **Iteration Control Using Break and Continue**
In shell scripting, `break` and `continue` are essential for controlling loops.
- **Break**: Exits a loop when a condition is met.
- **Continue**: Skips the current iteration of the loop and moves on to the next iteration.

**Example:**
```bash
for i in {1..10}; do
  if [[ $i -eq 5 ]]; then
    break   # Stops the loop when i equals 5
  fi
  echo $i
done
```

## 4. **Custom Exit Codes**
In shell scripts, you can use custom exit codes to signal the success or failure of commands. For instance, if an AWS command runs but hits a regional endpoint issue, you can check the exit status to determine what happened.

**Example:**
```bash
aws ec2 describe-vpcs --region us-east-1
if [[ $? -ne 0 ]]; then
  echo "Incorrect region, exiting"
  exit 1
else
  echo "Correct region"
fi
```

## 5. **Arrays in Shell Scripts**
Arrays are a useful way to handle multiple values in a shell script. You can manipulate strings or data using array operations.

**Example:**
```bash
NAME='SaikiranPinapathruni'
echo ${#NAME}                       # Outputs the length of the string
for (( i=0; i<${#NAME}; i++ )); do  # Brace ranges like {0..${#NAME}} don't expand variables
  echo ${NAME:$i:1}                 # Prints one character at a time
done
```

---

## 6. **Practical Scenarios:**
1. **Regex for Phone Numbers**:
   - Extract phone numbers starting with a specific pattern like `1-234`. Example regex: `\d-[234]\d\d-\d\d\d-\d\d\d\d`
2. **Shell Script for User Creation**:
   - Create two users: one with lowercase letters and one with uppercase letters.
3. **Exit Code Handling**:
   - Check whether a command executed successfully and handle errors gracefully based on the exit code.

---

## Conclusion
This repository provides a detailed guide on how to use regular expressions, break/continue, arrays, and exit codes in shell scripts. These concepts are essential for automating tasks and creating efficient shell scripts that handle various scenarios gracefully.

---

================================================
FILE: Day 05 RegEx-Break-Continue-CustomExitCodes/break.sh
================================================
#!/bin/bash
# Note: hyd-india-1 is an intentionally invalid region, used to trigger the else branch.
aws_regions=(us-east-1 us-east-2 hyd-india-1 eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2)
echo "Running the function to list VPCs using the regions list"
for region in "${aws_regions[@]}"; do
  echo "Getting VPCs in $region .. "
  vpc_list=$(aws ec2 describe-vpcs --region "$region" | jq -r .Vpcs[].VpcId)
  vpc_arr=(${vpc_list[@]})
  if [ ${#vpc_arr[@]} -gt 0 ]; then
    for vpc in "${vpc_arr[@]}"; do
      echo "The VPC-ID is: $vpc"
    done
    echo "##########"
  else
    echo "Invalid Region..!!"
echo "#######" echo "# Breaking at $region #" echo "################" break fi done ================================================ FILE: Day 05 RegEx-Break-Continue-CustomExitCodes/continue.sh ================================================ # CONTINUE #!/bin/bash aws_regions=(us-east-1 us-east-2 hyd-india-1 eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2) echo "Running the function to list VPCs using the regions list" for region in "${aws_regions[@]}"; do echo "Getting VPCs in $region .. " vpc_list=$(aws ec2 describe-vpcs --region "$region" | jq -r .Vpcs[].VpcId) vpc_arr=(${vpc_list[@]}) if [ ${#vpc_arr[@]} -gt 0 ]; then for vpc in "${vpc_list[@]}"; do echo "The VPC-ID is: $vpc" done echo "##########" else echo "Invalid Region..!!" echo "#######" echo "# Breaking at $region #" echo "################" #break #exit 99 continue fi done ================================================ FILE: Day 05 RegEx-Break-Continue-CustomExitCodes/exit-code.sh ================================================ ######EXIT CODE############ #!/bin/bash aws_regions=(us-east-1 us-east-2 hyd-india-1 eu-north-1 ap-south-1 eu-west-3 eu-west-2 eu-west-1 ap-northeast-2) echo "Running the function to list VPCs using the regions list" for region in "${aws_regions[@]}"; do echo "Getting VPCs in $region .. " vpc_list=$(aws ec2 describe-vpcs --region "$region" | jq -r .Vpcs[].VpcId) vpc_arr=(${vpc_list[@]}) if [ ${#vpc_arr[@]} -gt 0 ]; then for vpc in "${vpc_list[@]}"; do echo "The VPC-ID is: $vpc" done echo "##########" else echo "Invalid Region..!!" echo "#######" echo "# Breaking at $region #" echo "################" #break exit 99 fi done ================================================ FILE: Day 06 Functions/README.md ================================================ # Day 06: Functions and Scripts ## Overview In this session, we explore the concept of functions in shell scripting and how they can be beneficial in managing code effectively. While functions might not be heavily utilized in shell scripting, they become crucial when you transition to languages like Python. ## What is a Function? A **function** is a block of code that can be called whenever needed. It allows for code reuse and better organization. ### Example in Python ```python def addition(a, b): # Passing two parameters: a and b return a + b # Returns the sum of a and b # Calling the function result_a = addition(2, 3) result_b = addition(4, 5) result_c = addition(10, 20) print(result_a + result_b + result_c) # Outputs the sum of all results ``` ### Importance of Functions Functions will only execute when they are called. For instance, in Terraform, you might use functions like: ```hcl count = 3 element length ``` ### Installing Docker To install Docker, you would typically call a function from a script like this: [Docker Installation](https://get.docker.com). ## Defining Functions in Shell Scripting In shell scripting, you can define functions in two ways: 1. **Using the `function` keyword:** ```bash function hello { # code } ``` 2. **Using parentheses:** ```bash hello() { # code } ``` ## Checking Installed Commands You can check if a command is installed using: ```bash command -v jq echo $? # Returns the exit status of the last command command -v aq echo $? ``` If the `command_exist` function wasn’t used, you would need to enter these commands multiple times in your script, making functions very useful for reducing redundancy. ## Running the Delete Volume Scripts 1. **Create three 1 GB EBS volumes.** 2. 
To automate this task daily, we'll use **Cron Jobs**.

### Understanding Cron Jobs
To set up a cron job:
```bash
crontab -e   # Edit the crontab file
# Add the following line (runs every minute; adjust the schedule as needed):
* * * * * sudo bash /root/deleteebs.sh us-east-1
# To run once a day at midnight instead:
# 0 0 * * * sudo bash /root/deleteebs.sh us-east-1
```
Ensure that your script is saved at `/root/deleteebs.sh`.

## Scheduling Adjustments
If you want the task to run every 10 minutes, use:
```
*/10 * * * * sudo bash /root/deleteebs.sh us-east-1
```

## Nginx Server Installation and Test
1. Install the Nginx server on your instance.
2. Access it and replace the default page with your own content (for example, a simple HTML game):
```bash
nano /var/www/html/index.html   # Make your changes here
```
3. Set up uptime monitoring with StatusCake:
   - Log in with Google.
   - Create a new uptime test with the URL and desired parameters.

## Calling Multiple Functions
In your script, you can call multiple functions. At the end of your script, you might have:
```bash
vpcs $@   # Allows passing multiple regions
```

## Interview Question Example
**Question:** On one system, how can I find files larger than 10 MB?

**Answer:** You can list files and check their sizes with `du`, but using the `find` command is more efficient:
```bash
find / -size +10M 2>/dev/null              # Files larger than 10 MB
find / -size +50M -size -60M 2>/dev/null   # Files between 50 MB and 60 MB
```

### Explanation:
- `/`: The starting directory for the search (root).
- `-size +50M`: Finds files larger than 50 MB.
- `-size -60M`: Finds files smaller than 60 MB.
- `2>/dev/null`: Redirects error messages (e.g., permission denied) to `/dev/null`.

## Log Rotation Script
Log rotation helps manage log files by preventing them from growing indefinitely. When log files reach a certain size, the rotation script will execute to keep things organized.

---

================================================
FILE: Day 06 Functions/docker.sh
================================================
#!/bin/sh set -e # Docker Engine for Linux installation script. # # This script is intended as a convenient way to configure docker's package # repositories and to install Docker Engine, This script is not recommended # for production environments. Before running this script, make yourself familiar # with potential risks and limitations, and refer to the installation manual # at https://docs.docker.com/engine/install/ for alternative installation methods. # # The script: # # - Requires `root` or `sudo` privileges to run. # - Attempts to detect your Linux distribution and version and configure your # package management system for you. # - Doesn't allow you to customize most installation parameters. # - Installs dependencies and recommendations without asking for confirmation. # - Installs the latest stable release (by default) of Docker CLI, Docker Engine, # Docker Buildx, Docker Compose, containerd, and runc. When using this script # to provision a machine, this may result in unexpected major version upgrades # of these packages. Always test upgrades in a test environment before # deploying to your production systems. # - Isn't designed to upgrade an existing Docker installation. When using the # script to update an existing installation, dependencies may not be updated # to the expected version, resulting in outdated versions. # # Source code is available at https://github.com/docker/docker-install/ # # Usage # ============================================================================== # # To install the latest stable versions of Docker CLI, Docker Engine, and their # dependencies: # # 1. download the script # # $ curl -fsSL https://get.docker.com -o install-docker.sh # # 2. 
verify the script's content # # $ cat install-docker.sh # # 3. run the script with --dry-run to verify the steps it executes # # $ sh install-docker.sh --dry-run # # 4. run the script either as root, or using sudo to perform the installation. # # $ sudo sh install-docker.sh # # Command-line options # ============================================================================== # # --version # Use the --version option to install a specific version, for example: # # $ sudo sh install-docker.sh --version 23.0 # # --channel # # Use the --channel option to install from an alternative installation channel. # The following example installs the latest versions from the "test" channel, # which includes pre-releases (alpha, beta, rc): # # $ sudo sh install-docker.sh --channel test # # Alternatively, use the script at https://test.docker.com, which uses the test # channel as default. # # --mirror # # Use the --mirror option to install from a mirror supported by this script. # Available mirrors are "Aliyun" (https://mirrors.aliyun.com/docker-ce), and # "AzureChinaCloud" (https://mirror.azure.cn/docker-ce), for example: # # $ sudo sh install-docker.sh --mirror AzureChinaCloud # # ============================================================================== # Git commit from https://github.com/docker/docker-install when # the script was uploaded (Should only be modified by upload job): SCRIPT_COMMIT_SHA="39040d838e8bcc48c23a0cc4117475dd15189976" # strip "v" prefix if present VERSION="${VERSION#v}" # The channel to install from: # * stable # * test DEFAULT_CHANNEL_VALUE="stable" if [ -z "$CHANNEL" ]; then CHANNEL=$DEFAULT_CHANNEL_VALUE fi DEFAULT_DOWNLOAD_URL="https://download.docker.com" if [ -z "$DOWNLOAD_URL" ]; then DOWNLOAD_URL=$DEFAULT_DOWNLOAD_URL fi DEFAULT_REPO_FILE="docker-ce.repo" if [ -z "$REPO_FILE" ]; then REPO_FILE="$DEFAULT_REPO_FILE" fi mirror='' DRY_RUN=${DRY_RUN:-} while [ $# -gt 0 ]; do case "$1" in --channel) CHANNEL="$2" shift ;; --dry-run) DRY_RUN=1 ;; --mirror) mirror="$2" shift ;; --version) VERSION="${2#v}" shift ;; --*) echo "Illegal option $1" ;; esac shift $(($# > 0 ? 1 : 0)) done case "$mirror" in Aliyun) DOWNLOAD_URL="https://mirrors.aliyun.com/docker-ce" ;; AzureChinaCloud) DOWNLOAD_URL="https://mirror.azure.cn/docker-ce" ;; "") ;; *) echo >&2 "unknown mirror '$mirror': use either 'Aliyun', or 'AzureChinaCloud'." exit 1 ;; esac case "$CHANNEL" in stable | test) ;; *) echo >&2 "unknown CHANNEL '$CHANNEL': use either stable or test." exit 1 ;; esac command_exists() { command -v "$@" >/dev/null 2>&1 } # version_gte checks if the version specified in $VERSION is at least the given # SemVer (Maj.Minor[.Patch]), or CalVer (YY.MM) version.It returns 0 (success) # if $VERSION is either unset (=latest) or newer or equal than the specified # version, or returns 1 (fail) otherwise. # # examples: # # VERSION=23.0 # version_gte 23.0 // 0 (success) # version_gte 20.10 // 0 (success) # version_gte 19.03 // 0 (success) # version_gte 26.1 // 1 (fail) version_gte() { if [ -z "$VERSION" ]; then return 0 fi version_compare "$VERSION" "$1" } # version_compare compares two version strings (either SemVer (Major.Minor.Path), # or CalVer (YY.MM) version strings. It returns 0 (success) if version A is newer # or equal than version B, or 1 (fail) otherwise. 
Patch releases and pre-release # (-alpha/-beta) are not taken into account # # examples: # # version_compare 23.0.0 20.10 // 0 (success) # version_compare 23.0 20.10 // 0 (success) # version_compare 20.10 19.03 // 0 (success) # version_compare 20.10 20.10 // 0 (success) # version_compare 19.03 20.10 // 1 (fail) version_compare() ( set +x yy_a="$(echo "$1" | cut -d'.' -f1)" yy_b="$(echo "$2" | cut -d'.' -f1)" if [ "$yy_a" -lt "$yy_b" ]; then return 1 fi if [ "$yy_a" -gt "$yy_b" ]; then return 0 fi mm_a="$(echo "$1" | cut -d'.' -f2)" mm_b="$(echo "$2" | cut -d'.' -f2)" # trim leading zeros to accommodate CalVer mm_a="${mm_a#0}" mm_b="${mm_b#0}" if [ "${mm_a:-0}" -lt "${mm_b:-0}" ]; then return 1 fi return 0 ) is_dry_run() { if [ -z "$DRY_RUN" ]; then return 1 else return 0 fi } is_wsl() { case "$(uname -r)" in *microsoft*) true ;; # WSL 2 *Microsoft*) true ;; # WSL 1 *) false ;; esac } is_darwin() { case "$(uname -s)" in *darwin*) true ;; *Darwin*) true ;; *) false ;; esac } deprecation_notice() { distro=$1 distro_version=$2 echo printf "\033[91;1mDEPRECATION WARNING\033[0m\n" printf " This Linux distribution (\033[1m%s %s\033[0m) reached end-of-life and is no longer supported by this script.\n" "$distro" "$distro_version" echo " No updates or security fixes will be released for this distribution, and users are recommended" echo " to upgrade to a currently maintained version of $distro." echo printf "Press \033[1mCtrl+C\033[0m now to abort this script, or wait for the installation to continue." echo sleep 10 } get_distribution() { lsb_dist="" # Every system that we officially support has /etc/os-release if [ -r /etc/os-release ]; then lsb_dist="$(. /etc/os-release && echo "$ID")" fi # Returning an empty string here should be alright since the # case statements don't act unless you provide an actual value echo "$lsb_dist" } echo_docker_as_nonroot() { if is_dry_run; then return fi if command_exists docker && [ -e /var/run/docker.sock ]; then ( set -x $sh_c 'docker version' ) || true fi # intentionally mixed spaces and tabs here -- tabs are stripped by "<<-EOF", spaces are kept in the output echo echo "================================================================================" echo if version_gte "20.10"; then echo "To run Docker as a non-privileged user, consider setting up the" echo "Docker daemon in rootless mode for your user:" echo echo " dockerd-rootless-setuptool.sh install" echo echo "Visit https://docs.docker.com/go/rootless/ to learn about rootless mode." echo fi echo echo "To run the Docker daemon as a fully privileged service, but granting non-root" echo "users access, refer to https://docs.docker.com/go/daemon-access/" echo echo "WARNING: Access to the remote API on a privileged Docker daemon is equivalent" echo " to root access on the host. Refer to the 'Docker daemon attack surface'" echo " documentation for details: https://docs.docker.com/go/attack-surface/" echo echo "================================================================================" echo } # Check if this is a forked Linux distro check_forked() { # Check for lsb_release command existence, it usually exists in forked distros if command_exists lsb_release; then # Check if the `-u` option is supported set +e lsb_release -a -u >/dev/null 2>&1 lsb_release_exit_code=$? set -e # Check if the command has exited successfully, it means we're in a forked distro if [ "$lsb_release_exit_code" = "0" ]; then # Print info about current distro cat <<-EOF You're using '$lsb_dist' version '$dist_version'. 
EOF # Get the upstream release info lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[:space:]') dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[:space:]') # Print info about upstream distro cat <<-EOF Upstream release is '$lsb_dist' version '$dist_version'. EOF else if [ -r /etc/debian_version ] && [ "$lsb_dist" != "ubuntu" ] && [ "$lsb_dist" != "raspbian" ]; then if [ "$lsb_dist" = "osmc" ]; then # OSMC runs Raspbian lsb_dist=raspbian else # We're Debian and don't even know it! lsb_dist=debian fi dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')" case "$dist_version" in 12) dist_version="bookworm" ;; 11) dist_version="bullseye" ;; 10) dist_version="buster" ;; 9) dist_version="stretch" ;; 8) dist_version="jessie" ;; esac fi fi fi } do_install() { echo "# Executing docker install script, commit: $SCRIPT_COMMIT_SHA" if command_exists docker; then cat >&2 <<-'EOF' Warning: the "docker" command appears to already exist on this system. If you already have Docker installed, this script can cause trouble, which is why we're displaying this warning and provide the opportunity to cancel the installation. If you installed the current Docker package using this script and are using it again to update Docker, you can safely ignore this message. You may press Ctrl+C now to abort this script. EOF ( set -x sleep 20 ) fi user="$(id -un 2>/dev/null || true)" sh_c='sh -c' if [ "$user" != 'root' ]; then if command_exists sudo; then sh_c='sudo -E sh -c' elif command_exists su; then sh_c='su -c' else cat >&2 <<-'EOF' Error: this installer needs the ability to run commands as root. We are unable to find either "sudo" or "su" available to make this happen. EOF exit 1 fi fi if is_dry_run; then sh_c="echo" fi # perform some very rudimentary platform detection lsb_dist=$(get_distribution) lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')" if is_wsl; then echo echo "WSL DETECTED: We recommend using Docker Desktop for Windows." echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop/" echo cat >&2 <<-'EOF' You may press Ctrl+C now to abort this script. EOF ( set -x sleep 20 ) fi case "$lsb_dist" in ubuntu) if command_exists lsb_release; then dist_version="$(lsb_release --codename | cut -f2)" fi if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")" fi ;; debian | raspbian) dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')" case "$dist_version" in 12) dist_version="bookworm" ;; 11) dist_version="bullseye" ;; 10) dist_version="buster" ;; 9) dist_version="stretch" ;; 8) dist_version="jessie" ;; esac ;; centos | rhel) if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then dist_version="$(. /etc/os-release && echo "$VERSION_ID")" fi ;; *) if command_exists lsb_release; then dist_version="$(lsb_release --release | cut -f2)" fi if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then dist_version="$(. /etc/os-release && echo "$VERSION_ID")" fi ;; esac # Check if this is a forked Linux distro check_forked # Print deprecation warnings for distro versions that recently reached EOL, # but may still be commonly used (especially LTS versions). 
case "$lsb_dist.$dist_version" in centos.8 | centos.7 | rhel.7) deprecation_notice "$lsb_dist" "$dist_version" ;; debian.buster | debian.stretch | debian.jessie) deprecation_notice "$lsb_dist" "$dist_version" ;; raspbian.buster | raspbian.stretch | raspbian.jessie) deprecation_notice "$lsb_dist" "$dist_version" ;; ubuntu.bionic | ubuntu.xenial | ubuntu.trusty) deprecation_notice "$lsb_dist" "$dist_version" ;; ubuntu.mantic | ubuntu.lunar | ubuntu.kinetic | ubuntu.impish | ubuntu.hirsute | ubuntu.groovy | ubuntu.eoan | ubuntu.disco | ubuntu.cosmic) deprecation_notice "$lsb_dist" "$dist_version" ;; fedora.*) if [ "$dist_version" -lt 39 ]; then deprecation_notice "$lsb_dist" "$dist_version" fi ;; esac # Run setup for each distro accordingly case "$lsb_dist" in ubuntu | debian | raspbian) pre_reqs="ca-certificates curl" apt_repo="deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] $DOWNLOAD_URL/linux/$lsb_dist $dist_version $CHANNEL" ( if ! is_dry_run; then set -x fi $sh_c 'apt-get -qq update >/dev/null' $sh_c "DEBIAN_FRONTEND=noninteractive apt-get -y -qq install $pre_reqs >/dev/null" $sh_c 'install -m 0755 -d /etc/apt/keyrings' $sh_c "curl -fsSL \"$DOWNLOAD_URL/linux/$lsb_dist/gpg\" -o /etc/apt/keyrings/docker.asc" $sh_c "chmod a+r /etc/apt/keyrings/docker.asc" $sh_c "echo \"$apt_repo\" > /etc/apt/sources.list.d/docker.list" $sh_c 'apt-get -qq update >/dev/null' ) pkg_version="" if [ -n "$VERSION" ]; then if is_dry_run; then echo "# WARNING: VERSION pinning is not supported in DRY_RUN" else # Will work for incomplete versions IE (17.12), but may not actually grab the "latest" if in the test channel pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/~ce~.*/g' | sed 's/-/.*/g')" search_command="apt-cache madison docker-ce | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3" pkg_version="$($sh_c "$search_command")" echo "INFO: Searching repository for VERSION '$VERSION'" echo "INFO: $search_command" if [ -z "$pkg_version" ]; then echo echo "ERROR: '$VERSION' not found amongst apt-cache madison results" echo exit 1 fi if version_gte "18.09"; then search_command="apt-cache madison docker-ce-cli | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3" echo "INFO: $search_command" cli_pkg_version="=$($sh_c "$search_command")" fi pkg_version="=$pkg_version" fi fi ( pkgs="docker-ce${pkg_version%=}" if version_gte "18.09"; then # older versions didn't ship the cli and containerd as separate packages pkgs="$pkgs docker-ce-cli${cli_pkg_version%=} containerd.io" fi if version_gte "20.10"; then pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version" fi if version_gte "23.0"; then pkgs="$pkgs docker-buildx-plugin" fi if ! is_dry_run; then set -x fi $sh_c "DEBIAN_FRONTEND=noninteractive apt-get -y -qq install $pkgs >/dev/null" ) echo_docker_as_nonroot exit 0 ;; centos | fedora | rhel) repo_file_url="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE" ( if ! 
is_dry_run; then set -x fi if command_exists dnf5; then # $sh_c "dnf -y -q --setopt=install_weak_deps=False install dnf-plugins-core" # $sh_c "dnf5 config-manager addrepo --save-filename=docker-ce.repo --from-repofile='$repo_file_url'" $sh_c "dnf -y -q --setopt=install_weak_deps=False install curl dnf-plugins-core" # FIXME(thaJeztah); strip empty lines as workaround for https://github.com/rpm-software-management/dnf5/issues/1603 TMP_REPO_FILE="$(mktemp --dry-run)" $sh_c "curl -fsSL '$repo_file_url' | tr -s '\n' > '${TMP_REPO_FILE}'" $sh_c "dnf5 config-manager addrepo --save-filename=docker-ce.repo --overwrite --from-repofile='${TMP_REPO_FILE}'" $sh_c "rm -f '${TMP_REPO_FILE}'" if [ "$CHANNEL" != "stable" ]; then $sh_c "dnf5 config-manager setopt \"docker-ce-*.enabled=0\"" $sh_c "dnf5 config-manager setopt \"docker-ce-$CHANNEL.enabled=1\"" fi $sh_c "dnf makecache" elif command_exists dnf; then $sh_c "dnf -y -q --setopt=install_weak_deps=False install dnf-plugins-core" $sh_c "dnf config-manager --add-repo $repo_file_url" if [ "$CHANNEL" != "stable" ]; then $sh_c "dnf config-manager --set-disabled \"docker-ce-*\"" $sh_c "dnf config-manager --set-enabled \"docker-ce-$CHANNEL\"" fi $sh_c "dnf makecache" else $sh_c "yum -y -q install yum-utils" $sh_c "yum config-manager --add-repo $repo_file_url" if [ "$CHANNEL" != "stable" ]; then $sh_c "yum config-manager --disable \"docker-ce-*\"" $sh_c "yum config-manager --enable \"docker-ce-$CHANNEL\"" fi $sh_c "yum makecache" fi ) pkg_version="" if command_exists dnf; then pkg_manager="dnf" pkg_manager_flags="-y -q --best" else pkg_manager="yum" pkg_manager_flags="-y -q" fi if [ -n "$VERSION" ]; then if is_dry_run; then echo "# WARNING: VERSION pinning is not supported in DRY_RUN" else if [ "$lsb_dist" = "fedora" ]; then pkg_suffix="fc$dist_version" else pkg_suffix="el" fi pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/\\\\.ce.*/g' | sed 's/-/.*/g').*$pkg_suffix" search_command="$pkg_manager list --showduplicates docker-ce | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'" pkg_version="$($sh_c "$search_command")" echo "INFO: Searching repository for VERSION '$VERSION'" echo "INFO: $search_command" if [ -z "$pkg_version" ]; then echo echo "ERROR: '$VERSION' not found amongst $pkg_manager list results" echo exit 1 fi if version_gte "18.09"; then # older versions don't support a cli package search_command="$pkg_manager list --showduplicates docker-ce-cli | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'" cli_pkg_version="$($sh_c "$search_command" | cut -d':' -f 2)" fi # Cut out the epoch and prefix with a '-' pkg_version="-$(echo "$pkg_version" | cut -d':' -f 2)" fi fi ( pkgs="docker-ce$pkg_version" if version_gte "18.09"; then # older versions didn't ship the cli and containerd as separate packages if [ -n "$cli_pkg_version" ]; then pkgs="$pkgs docker-ce-cli-$cli_pkg_version containerd.io" else pkgs="$pkgs docker-ce-cli containerd.io" fi fi if version_gte "20.10"; then pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version" fi if version_gte "23.0"; then pkgs="$pkgs docker-buildx-plugin" fi if ! is_dry_run; then set -x fi $sh_c "$pkg_manager $pkg_manager_flags install $pkgs" ) echo_docker_as_nonroot exit 0 ;; sles) if [ "$(uname -m)" != "s390x" ]; then echo "Packages for SLES are currently only available for s390x" exit 1 fi repo_file_url="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE" pre_reqs="ca-certificates curl libseccomp2 awk" ( if ! is_dry_run; then set -x fi $sh_c "zypper install -y $pre_reqs" $sh_c "zypper addrepo $repo_file_url" if ! 
is_dry_run; then cat >&2 <<-'EOF' WARNING!! openSUSE repository (https://download.opensuse.org/repositories/security:/SELinux) will be enabled now. Do you wish to continue? You may press Ctrl+C now to abort this script. EOF ( set -x sleep 30 ) fi opensuse_repo="https://download.opensuse.org/repositories/security:/SELinux/openSUSE_Factory/security:SELinux.repo" $sh_c "zypper addrepo $opensuse_repo" $sh_c "zypper --gpg-auto-import-keys refresh" $sh_c "zypper lr -d" ) pkg_version="" if [ -n "$VERSION" ]; then if is_dry_run; then echo "# WARNING: VERSION pinning is not supported in DRY_RUN" else pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/\\\\.ce.*/g' | sed 's/-/.*/g')" search_command="zypper search -s --match-exact 'docker-ce' | grep '$pkg_pattern' | tail -1 | awk '{print \$6}'" pkg_version="$($sh_c "$search_command")" echo "INFO: Searching repository for VERSION '$VERSION'" echo "INFO: $search_command" if [ -z "$pkg_version" ]; then echo echo "ERROR: '$VERSION' not found amongst zypper list results" echo exit 1 fi search_command="zypper search -s --match-exact 'docker-ce-cli' | grep '$pkg_pattern' | tail -1 | awk '{print \$6}'" # It's okay for cli_pkg_version to be blank, since older versions don't support a cli package cli_pkg_version="$($sh_c "$search_command")" pkg_version="-$pkg_version" fi fi ( pkgs="docker-ce$pkg_version" if version_gte "18.09"; then if [ -n "$cli_pkg_version" ]; then # older versions didn't ship the cli and containerd as separate packages pkgs="$pkgs docker-ce-cli-$cli_pkg_version containerd.io" else pkgs="$pkgs docker-ce-cli containerd.io" fi fi if version_gte "20.10"; then pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version" fi if version_gte "23.0"; then pkgs="$pkgs docker-buildx-plugin" fi if ! is_dry_run; then set -x fi $sh_c "zypper -q install -y $pkgs" ) echo_docker_as_nonroot exit 0 ;; *) if [ -z "$lsb_dist" ]; then if is_darwin; then echo echo "ERROR: Unsupported operating system 'macOS'" echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop" echo exit 1 fi fi echo echo "ERROR: Unsupported distribution '$lsb_dist'" echo exit 1 ;; esac exit 1 } # wrapped up in a function so that we have some protection against only getting # half the file during "curl | sh" do_install ================================================ FILE: Day 06 Functions/ebs.sh ================================================ #!/bin/bash delete_vols() { # Fetch all volumes vols=$(aws ec2 describe-volumes | jq ".Volumes[].VolumeId" -r) for vol in $vols; do # Fetch volume details volume_info=$(aws ec2 describe-volumes --volume-ids $vol) size=$(echo "$volume_info" | jq ".Volumes[].Size") state=$(echo "$volume_info" | jq ".Volumes[].State" -r) # Check volume size and state if [ "$state" == "in-use" ]; then echo "$vol is attached to an instance. Skipping deletion." elif [ "$size" -gt 5 ]; then echo "$vol is larger than 5GB. Skipping deletion." 
else echo "Deleting Volume $vol" aws ec2 delete-volume --volume-id $vol fi done } # Call the function delete_vols ================================================ FILE: Day 06 Functions/log-rotation.sh ================================================ #!/bin/bash # Configuration LOG_FILE="/var/log/syslog" # Path to your log file MAX_SIZE=100000000 # Maximum size in bytes (100 MB) BACKUP_DIR="/var/log/myapp/backups" # Directory to store rotated logs TIMESTAMP=$(date +"%Y%m%d_%H%M%S") # Timestamp for backup filename # Create backup directory if it doesn't exist mkdir -p "$BACKUP_DIR" # Function to rotate log files rotate_logs() { if [ -f "$LOG_FILE" ]; then echo "Rotating log file: $LOG_FILE" mv "$LOG_FILE" "$BACKUP_DIR/myapp_$TIMESTAMP.log" # Rename the log file with a timestamp touch "$LOG_FILE" # Create a new empty log file echo "Log file rotated and stored as $BACKUP_DIR/myapp_$TIMESTAMP.log" else echo "Log file $LOG_FILE does not exist." fi } # Check if the log file size exceeds the maximum size if [ -f "$LOG_FILE" ]; then FILE_SIZE=$(stat -c%s "$LOG_FILE") # Get the size of the log file in bytes if [ "$FILE_SIZE" -gt "$MAX_SIZE" ]; then rotate_logs else echo "Log file size is under control: ${FILE_SIZE} bytes" fi else echo "Log file does not exist. No action taken." fi ================================================ FILE: Day 06 Functions/multi-function.sh ================================================ #!/bin/bash function subnets { echo "************************************************************" echo "**Getting SUBNETS Info VPC $VPC in region $REGION**" echo "************************************************************" aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPC" --region $REGION | jq ".Subnets[].SubnetId" echo "---------------------------------------------" } function sg { echo "********************************************************************" echo "**Getting Security Group Info VPC $VPC in region $REGION**" echo "********************************************************************" aws ec2 describe-security-groups --filters "Name=vpc-id,Values=$VPC" --region $REGION | jq ".SecurityGroups[].GroupName" echo "---------------------------------------------" } vpcs() { for REGION in $@; do echo "Getting VPC List For Regions $REGION..." vpcs=$(aws ec2 describe-vpcs --region "${REGION}" | jq ".Vpcs[].VpcId" | tr -d '"') echo $vpcs echo "--------------------------------------------------" for VPC in $vpcs; do subnets $VPC # sg $VPC done # for VPC in $vpcs; do # sg $VPC # done done } vpcs $@ ================================================ FILE: Day 07 Git-1/README.md ================================================ # Day 07 GIT Azure Terraform JIRA ![a-3d-scene-with-a-terraform-logo-on-one-side-and-a-UJgTFv-TSs-3jQkKJsSVGQ-TP18QzX3TRGEJlCl2aGlmA](https://github.com/user-attachments/assets/df80ecf8-a04e-45b1-9540-0759a6ea8fa2) ## Overview This project demonstrates using Git for version control while developing infrastructure with Terraform on Azure. We'll cover setting up Git, Terraform, and pushing infrastructure code to a remote GitHub repository. ## Table of Contents 1. [Git and Remote Repositories](#git-and-remote-repositories) 2. [Setting Up Environment](#setting-up-environment) 3. [Azure Service Principal](#azure-service-principal) 4. [Terraform Project](#terraform-project) 5. [Managing State and GitHub](#managing-state-and-github) 6. 
[Branching Strategy](#branching-strategy) ## Git and Remote Repositories Git is a tool that helps track changes in code and push it to a remote repository such as GitHub, GitLab, Bitbucket, or Azure DevOps. In a collaborative environment, all team members work on the same repository to manage changes effectively. For this project, we are using Terraform to create infrastructure on Azure, and Git to version control the Terraform code. ## Setting Up Environment ### Step 1: Install Git and Terraform - **Git Installation**: - Download Git and check the installation via PowerShell: ```sh git --version ``` - **Terraform Installation**: - Create a folder named `software` in C drive. - Download Terraform binary, save it in the folder, extract it, and add its path to the system environment variables: ```sh sysdm.cpl > Advanced > Environment Variables > Path > Edit > New (paste path) ``` ### Step 2: Create Project Folder - Create a folder named `Azure-Tera-Git`. - Inside, create a file called `Credentials` to store credentials. ## Azure Service Principal To authenticate between Azure and Terraform: 1. **Azure EntraID** > **App Registration** > **New Registration**. - Register an app named `DevSecOps-Saikiran` (Service Principal). - Collect `ClientID` and `TenantID`. 2. Go to **Certificates & Secrets** and create a new client secret. 3. Navigate to **Subscriptions**: - Create a subscription and copy the `SubscriptionID`. - Assign roles: - **IAM** > **Role Assignment** > **Privilege Admin Roles** > **Contributor** > **Select Members**. ## Terraform Project ### Resources to Create - Resource Groups (RG) - Virtual Network & Subnets - Network Security Group (NSG) and Rules - Random Passwords - Save Passwords in Key Vault - Deploy Virtual Machine using passwords from the Key Vault ### Code Structure - **provider.tf**: Configure Azure provider for Terraform. ```hcl provider "azurerm" { features {} } ``` ### Commands - **Initialize Terraform**: ```sh terraform init ``` (This downloads the Azure provider.) - **Deploy Resources**: - Create Resource Groups, Virtual Networks, etc., using the keyword `resource`. - The `resource` block is used for all resources, including security groups, VPCs, etc. - **Manage State File**: - Keep track of the infrastructure state. - Store the state file in an Azure Storage account to maintain consistency: - **Storage Accounts** > **Containers** > Create container (`tfstate`). - **Apply Configuration**: ```sh terraform init; terraform fmt; terraform validate; terraform plan; terraform apply ``` ## Managing State and GitHub ### Initialize Git Repository - Create a GitHub repository as **private**. - Set up SSH keys for authentication: ```sh ssh-keygen ``` Copy the `.pub` key and store it in GitHub. ### Version Control Steps 1. **Initialize Git**: ```sh git init ``` 2. **Create `.gitignore`** to exclude unnecessary files. 3. **Commit Changes**: ```sh git add . && git commit -m "terraform Azure Base Code" ``` 4. **Push to Remote Repository**: ```sh git branch -m master development git push origin development ``` ### Virtual Network Deployment - Add code for virtual networks, apply changes, and push the updated code to GitHub. 
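For reference, a minimal sketch of what the virtual network code might look like (the names and address ranges are illustrative, and the `azurerm_resource_group.rg` resource is assumed to already exist in the project):

```hcl
resource "azurerm_virtual_network" "vnet" {
  name                = "azure-tera-git-vnet"   # illustrative name
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_subnet" "subnet1" {
  name                 = "subnet1"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}
```

After applying, commit and push the updated code as in the steps above.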
## Branching Strategy

### Create Branches
- **Production Branch**:
```sh
git checkout -b production
git push origin production
```
- **Feature Branch for Updates**:
  - Create new features in separate branches:
```sh
git checkout -b feature/subnet
```
  - Develop, test, and then create a Pull Request (PR) for merging changes into the **development** or **production** branch.

### Merging with Pull Request
- Create a PR in GitHub to merge changes from development to production.
- Add comments and request approval from reviewers.
- Once approved, merge the code.

### Create JIRA Branch
- Create a branch based on a JIRA ticket for tracking:
```sh
git checkout -b JIRA-123
```
- Implement Azure Storage account code, commit, and push to the JIRA branch.
- Create a PR to merge the feature, add relevant comments, and ensure code review.

================================================
FILE: Day 08 Git-2/README.md
================================================
# Day 08 Git-2

================================================
FILE: Day 09 Git-3/README.md
================================================
![an-eye-catching-illustration-of-a-git-merge-and-gi-mich74xdR-iNzhh-DPdCaw-dDLWCUYQQtKBuum9wR-h7w](https://github.com/user-attachments/assets/affbf339-6c43-4fa4-a9e5-a3edf2961a33)

# Git Basics: Rebase, Reset, Stash, and Git Secrets

This repository provides practical examples and explanations on fundamental Git operations such as `rebase`, `reset`, `stash`, and securing sensitive information with `git-secrets`.

## Table of Contents
- [Rebase](#rebase)
- [Reset](#reset)
- [Stash](#stash)
- [Git Secrets](#git-secrets)

---

## Rebase

### What is Git Rebase?
Rebasing in Git is used to take the changes from one branch (usually a development branch) and apply them on top of another branch (typically the master branch). This results in a linear commit history, providing a cleaner log. However, it rewrites commit history, which can cause issues in a collaborative environment.

### Example:
1. Create the master branch and commit changes:
```bash
mkdir rebase-example && cd rebase-example
git init
I=1
while [ $I -lt 6 ]
do
echo "Master $I time" > MasterFile$I
git add . && git commit -m "Master Commit $I"
I=$((I+1))
done
```
2. Create the development branch and add commits:
```bash
git checkout -b development
I=1
while [ $I -lt 6 ]
do
echo "Development $I time" > DevFile$I
git add . && git commit -m "Development Commit $I"
I=$((I+1))
done
```
3. Now, rebase the `development` branch onto `master`:
```bash
git checkout development
git rebase master
git log --oneline
```

### Golden Rule of Rebase:
According to Google’s and Bitbucket's guidelines, **never rebase commits that you’ve already pushed to a shared repository**. This can cause confusion for your collaborators as it rewrites the commit history.

---

## Reset

### Types of Git Reset:
1. **Soft Reset**: Only resets the commit history; files remain intact.
2. **Hard Reset**: Removes both commit history and files, reverting to a previous state.

### Example:
1. Create 20 commits in a repository:
```bash
mkdir reset-example && cd reset-example
git init
I=1
while [ $I -lt 21 ]
do
echo "Commit $I content" > File$I
git add . && git commit -m "Commit $I"
I=$((I+1))
done
```
2. Perform a hard reset to an earlier commit:
```bash
git reset --hard <commit-id>
git log --oneline
ls -al
```
3. Perform a soft reset:
```bash
git reset --soft <commit-id>
ls -al   # Files will remain intact
```
4.
If changes were pushed to the remote repository, use the following command to force-push after a reset: ```bash git push origin master --force ``` --- ## Stash ### What is Git Stash? Git stash is used to temporarily save your uncommitted changes so that you can work on something else. Later, you can retrieve those changes using `git stash pop`. ### Example: 1. Modify `app.py`: ```bash nano app.py # Add some code, like: print("Hello Saikiran") ``` 2. If you need to switch to another task quickly without committing: ```bash git stash ``` 3. To retrieve the stashed changes: ```bash git stash pop ``` In interviews, mention that `stash` is primarily used for temporarily saving work without committing. --- ## Git Secrets ### Protect Sensitive Information Developers or DevOps engineers sometimes mistakenly commit sensitive information (API keys, PEM files, etc.) into repositories. To prevent this, we can use `git-secrets`. ### Example: 1. Install `git-secrets`: ```bash git clone https://github.com/awslabs/git-secrets.git cd git-secrets sudo apt install make -y make install git secrets --install ``` 2. Register AWS patterns: ```bash git secrets --register-aws ``` 3. Create a file containing sensitive information and attempt to commit it: ```bash nano keys # Add some AWS access keys git add . && git commit -m "AWS keys" ``` 4. `git-secrets` will block this commit if sensitive information is detected. --- ## Conclusion This repository covers essential Git operations: - **Rebase** for cleaner history but with caution. - **Reset** for undoing commits. - **Stash** for temporarily saving work. - **Git Secrets** for protecting sensitive information. These concepts are critical for anyone working with version control and especially useful in DevOps and development workflows. `` ================================================ FILE: Day 10 AWS-Terraform-Part-1/README.md ================================================ ![a-3d-render-of-a-youtube-thumbnail-with-the-text-d-6vFmUIlxRQ2-ERpv-XkPmg-98wY6FuxTTeyHEHWaD8X5w](https://github.com/user-attachments/assets/5ff94fd5-09ee-4fc9-87df-e16f87bab83c) # Terraform Day 01 Provider Block - Resource Block - S3 backend - Data Source - Remote Data Source Backend # Code used in video https://github.com/saikiranpi/Terraformsingleinstance.git # Infrastructure as Code (IaC) with Terraform and Cloud Native Tools (CNT) ## Overview In this repository, we explore Infrastructure as Code (IaC) using both Cloud Native Tools (CNT) and Terraform. We'll compare AWS CloudFormation (CFT), Azure Resource Manager (ARM), and GCP Deployment Manager with Terraform. Additionally, we'll cover practical Terraform code examples for AWS, including how to manage infrastructure with modules, data sources, and remote state management. ### Tools Overview: 1. **AWS**: CloudFormation (CFT) 2. **Azure**: Azure Resource Manager (ARM) 3. 
**GCP**: Deployment Manager ### Key Differences between CNT (CFT, ARM) & Terraform: | Feature | CFT & ARM | Terraform | |-----------------------------------|--------------------------------------|---------------------------------| | Language | JSON or YAML (All configs in one file) | HashiCorp Configuration Language (HCL) | | Complexity | Learning JSON/YAML is difficult | HCL is simpler and modular | | Cloud Compatibility | AWS (CFT), Azure (ARM) only | Multi-cloud (AWS, Azure, GCP) | | Module Support | No | Yes, with reusable modules | | Workspace Support | No | Yes, supports multiple workspaces | | Dry-Run Capability | Limited | `terraform plan` for effective dry-run | | Importing Resources | Complex in AWS, not available in ARM | Simple with `terraform import` | --- ## Terraform and Other HashiCorp Tools: Terraform is a HashiCorp tool that is cloud-agnostic, which means you can use the same logic to deploy resources across multiple clouds, including AWS, Azure, and GCP. Alongside Terraform, HashiCorp also provides: - **Packer**: For image automation - **Consul**: For service discovery and cluster management - **Vault**: For secure secrets management - **Nomad**: For workload orchestration (alternative to Kubernetes) --- ## Getting Started with Terraform ### 1. Main Configuration (`main.tf`): This is the main file where we define which cloud provider we will be deploying resources to, in this case, AWS. ```hcl provider "aws" { region = "us-west-2" } # Other resource definitions will follow... ``` You don't need to hard-code your AWS credentials in the code; instead, you can configure them using the `aws configure` command after installing the AWS CLI. --- ### 2. Create Your First VPC (`vpc.tf`): In Terraform, any service created is referred to as a **resource**. ```hcl resource "aws_vpc" "my_vpc" { cidr_block = "10.0.0.0/16" tags = { Name = "My-VPC" } } resource "aws_internet_gateway" "igw" { vpc_id = aws_vpc.my_vpc.id tags = { Name = "My-Internet-Gateway" } } ``` ### 3. Using Data Sources: Data sources are used to fetch information from existing resources in your cloud environment. For example, we can fetch an existing VPC using its tag name: ```hcl data "aws_vpc" "Test-Vpc" { filter { name = "tag:Name" values = ["Test-Vpc"] } } resource "aws_internet_gateway" "igw" { vpc_id = data.aws_vpc.Test-Vpc.id } ``` ### 4. Remote State Management: After deploying your resources, Terraform generates a state file. This state file can be reused to deploy the same infrastructure in another project. We can manage this using Terraform's remote state: ```hcl terraform { backend "s3" { bucket = "my-terraform-state-bucket" key = "project1/terraform.tfstate" region = "us-west-2" } } ``` After setting this up, initialize the backend: ```bash terraform init ``` --- ### Sample Workflow: 1. **Write Terraform Config**: Create resource files (`vpc.tf`, `ec2.tf`). 2. **Initialize**: Run `terraform init` to set up the environment. 3. **Plan**: Run `terraform plan` to perform a dry-run and check for any potential issues. 4. **Apply**: Run `terraform apply` to provision the resources. 5. **State Management**: Use remote state for managing large infrastructures and multiple environments. ### Additional Resources: - **AWS Resources**: VPC, Internet Gateway, Subnets, Security Groups, EC2 instances. - **Data Sources**: Reuse and reference existing resources. - **Remote State**: Manage infrastructure state across projects. 
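To make "reuse the state file in another project" concrete, here is a hedged sketch of the `terraform_remote_state` data source reading the S3 backend configured above; the `vpc_id` output is assumed to be exported by the first project:

```hcl
data "terraform_remote_state" "project1" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state-bucket"
    key    = "project1/terraform.tfstate"
    region = "us-west-2"
  }
}

# Consume an output exported by the first project (assumes it
# defines an `output "vpc_id"` block).
resource "aws_subnet" "app" {
  vpc_id     = data.terraform_remote_state.project1.outputs.vpc_id
  cidr_block = "10.0.1.0/24"
}
```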
--- ## Conclusion Terraform offers greater flexibility and multi-cloud support compared to cloud-native tools like CloudFormation (CFT) and Azure Resource Manager (ARM). It simplifies resource management through modules, reusable code, and a powerful state management system. This repository contains code examples and best practices for managing your cloud infrastructure using Terraform. ================================================ FILE: Day 11 AWS-Terraform-Part-2/README.md ================================================ ![Untitled design](https://github.com/user-attachments/assets/d7d9ad96-e14e-40d8-ac6f-93004fb69da0) # Terraform Day 02 - Dependencies, Variables, TFVars and Create Before Destroy Today, we'll dive into **dependencies in Terraform** and cover two main topics: 1. **Implicit and Explicit Dependencies** 2. **Variables and TFVars** ## Dependencies in Terraform Terraform automatically handles resource dependencies in two ways: ### 1. Implicit Dependencies An **implicit dependency** occurs when one resource refers to the attribute of another resource. For example, when creating a VPC and then an Internet Gateway, the Internet Gateway doesn't inherently know that it must wait for the VPC to be created. However, when you reference the VPC ID in the Internet Gateway resource, Terraform understands that the VPC must be created first. - **Example:** When you declare a VPC, its ID is generated only after it is created. Any resource, like a subnet or Internet Gateway, that references this VPC ID creates an implicit dependency. ### 2. Explicit Dependencies Sometimes, implicit dependencies aren’t enough. For example, if we want the **S3 bucket** to be created only after the VPC is created, we need to use explicit dependencies. This is done using the `depends_on` argument in Terraform. - **Example:** A **NAT Gateway** should only be created after a **Route Table** has been established. If the NAT Gateway is created before the route table, it won’t function as expected. This is where **explicit dependencies** come into play using `depends_on`. ### Task Example: VPC, Internet Gateway, and S3 Bucket - First, we’ll create a **VPC** and an **S3 bucket**. Since there's no direct dependency between the VPC and the S3 bucket, Terraform may create the S3 bucket first. - To enforce order, we’ll explore how to use `depends_on` to make sure that resources like the **NAT Gateway** and **S3 bucket** are created in the correct sequence. ### Create S3 Buckets 1. Create an `s3.tf` file. 2. In it, define three S3 buckets. 3. Observe that the S3 buckets and VPC will deploy in parallel because there is no dependency between them. To ensure that the S3 bucket is created **after** the VPC, we’ll add explicit dependencies using the `depends_on` argument. --- ## Variables and TFVars ### Variables Variables allow us to easily change values without editing the code directly. This makes managing infrastructure more flexible and reusable. ### TFVars Terraform variable values can be stored in separate `.tfvars` files, helping to: - Keep the code clean. - Manage sensitive data or multiple environments efficiently. ### Removing Lock Files Remember to clean up all `terraform.tfstate.lock` files before redeploying to avoid locking issues. --- ## Create Before Destroy When replacing resources, Terraform often follows the **create before destroy** pattern. This ensures minimal downtime by creating a replacement resource before destroying the original. 
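A minimal sketch of the `lifecycle` block that enables this pattern, using the key pair case discussed next (the key name and file path are illustrative); `prevent_destroy`, covered at the end of this session, lives in the same block:

```hcl
resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"              # illustrative
  public_key = file("~/.ssh/id_rsa.pub")   # illustrative path

  lifecycle {
    create_before_destroy = true   # build the replacement key before removing the old one
    # prevent_destroy = true       # uncomment to block accidental `terraform destroy`
  }
}
```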
- **Example:** When updating a resource like a **Key Pair** or upgrading a component, Terraform will first create the new key, then destroy the old one after the new one is functional. ### Task: Example Deployment 1. Deploy the resource. 2. Run `terraform plan` and observe the changes. (Copy the output to a Notepad for reference.) 3. Deploy the resource. 4. Add an additional name to the S3 bucket and reapply the changes to see how Terraform manages updates. --- ## Prevent Destroy Use `prevent_destroy` to safeguard critical resources. This is especially useful for resources like databases or sensitive buckets where destruction could cause significant issues. --- By the end of this session, you’ll have a deeper understanding of how Terraform handles dependencies, the flexibility of variables, and the best practices for managing infrastructure deployment and updates. --- ================================================ FILE: Day 12 AWS-Terraform-Part-3/README.md ================================================ ![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-cTa2tZAgR1ShW2UwqRQdcQ-fbR1bkc9RlC23TynNHoRhA](https://github.com/user-attachments/assets/13e11914-f6c0-409a-9c9c-a9ce08f926be) # Terraform Workspaces for Multi-Environment Infrastructure This repository demonstrates how to set up and manage multiple identical environments (Dev, UAT, and Prod) using Terraform Workspaces. Each environment will have 3 servers with unique naming conventions. The state management for each environment is handled separately using Terraform's state backend in S3 with DynamoDB for state locking. ## Prerequisites - Terraform installed on your local machine. - AWS CLI configured with proper permissions. - S3 bucket for state backend. - DynamoDB table for state file locking. ## Infrastructure Overview You will be deploying three environments: - **Dev**: 3 Servers - **UAT**: 3 Servers - **Prod**: 3 Servers Each environment will have its own Terraform `.tfvars` file to manage configuration differences like naming conventions. ## Step-by-Step Guide ### 1. Clone the Base Infrastructure Clone the base Terraform infrastructure and make the necessary changes to create multiple environments. ### 2. Setup State Backend Create an S3 bucket to store Terraform state files and configure it as a backend in your `main.tf`. Ensure that the bucket is set up before proceeding. ### 3. Create Environment-Specific `.tfvars` Files - Rename the existing `terraform.tfvars` to `dev.tfvars`. - Create `uat.tfvars` and `prod.tfvars` with environment-specific changes (like naming conventions for servers). ### 4. Initialize and Validate Terraform ```bash terraform init terraform validate terraform fmt ``` ### 5. Apply Terraform Configuration Deploy the infrastructure for each environment using the appropriate `.tfvars` file. #### For Dev Environment: ```bash terraform apply -var-file=dev.tfvars ``` #### For UAT Environment: ```bash terraform workspace new uat terraform apply -var-file=uat.tfvars ``` #### For Prod Environment: ```bash terraform workspace new prod terraform apply -var-file=prod.tfvars ``` ### 6. Managing State Files for Different Environments Each environment requires a separate state file. If you use the same state backend without separating the state files, Terraform will attempt to apply changes across environments. 
To manage state files for different environments, use Terraform workspaces: ```bash terraform workspace new dev terraform workspace new uat terraform workspace new prod ``` Each workspace will create a separate folder in the S3 bucket to store the respective environment’s state file. ### 7. Adding EC2 Instances Modify the `ec2.tf` file to add the EC2 instance configurations: - Use different AMI IDs for each environment. - Example of setting the server name: ```hcl server_name = "${var.env}-Server-1" ``` ### 8. User Data Configuration Add user data to the EC2 instances to update the web server’s index page: ```bash #!/bin/bash echo "Hello from ${var.env}" > /var/www/html/index.nginx-debian.html ``` ### 9. Switch Between Workspaces To switch between environments, use the `terraform workspace` commands: ```bash terraform workspace select dev terraform plan -var-file=dev.tfvars terraform apply -var-file=dev.tfvars ``` Repeat the process for UAT and Prod environments by selecting their respective workspaces. ### 10. Check Public IPs of All Servers After deployment, verify the public IP addresses of the servers in each environment. ### 11. Clean Up (Destroy Infrastructure) To destroy resources from each environment: ```bash terraform workspace select prod terraform destroy -var-file=prod.tfvars terraform workspace select dev terraform destroy -var-file=dev.tfvars terraform workspace select uat terraform destroy -var-file=uat.tfvars ``` ### 12. Delete Workspaces Once the environments are destroyed, delete the workspaces: ```bash terraform workspace delete dev terraform workspace delete uat terraform workspace delete prod ``` ### 13. DynamoDB for State Locking To avoid state file conflicts, implement state locking using DynamoDB. 1. Create a `dynamodb.tf` file: ```hcl resource "aws_dynamodb_table" "terraform_locks" { name = "terraform-state-lock" billing_mode = "PAY_PER_REQUEST" hash_key = "LockID" attribute { name = "LockID" type = "S" } } ``` 2. Apply the DynamoDB configuration: ```bash terraform apply ``` 3. Add the DynamoDB state locking configuration to your backend in `main.tf`: ```hcl backend "s3" { bucket = "your-s3-bucket" key = "path/to/terraform.tfstate" region = "us-west-2" dynamodb_table = "terraform-state-lock" } ``` ### 14. Excluding DynamoDB from Terraform State If you wish to manage DynamoDB outside of Terraform to prevent it from being destroyed, remove it from the state file: ```bash terraform state rm aws_dynamodb_table.terraform_locks ``` ### 15. Push Code to GitHub Once all the files are ready, push them to your GitHub repository: ```bash git init git add . git commit -m "Initial commit for Terraform multi-environment setup" git remote add origin https://github.com/your-username/terraform-multi-env.git git push -u origin main ``` ### 16. Deploying the Infrastructure from GitHub 1. Clone the repository onto your local machine or remote instance: ```bash git clone https://github.com/your-username/terraform-multi-env.git ``` 2. Run the Terraform commands to deploy the infrastructure: ```bash terraform init terraform plan -var-file=dev.tfvars terraform apply -var-file=dev.tfvars ``` --- ## Conclusion This project demonstrates how to manage multiple identical environments (Dev, UAT, Prod) using Terraform Workspaces, S3 for state management, and DynamoDB for state locking. Be sure to separate your environments' state files to avoid conflicts and manage infrastructure more effectively. Feel free to explore, modify, and extend this setup for your own infrastructure needs. 
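One related trick: inside the configuration, `terraform.workspace` returns the name of the currently selected workspace, so resource naming can follow the workspace automatically instead of being passed in through a `.tfvars` file. A hedged sketch (the `ami_id` variable is an assumption):

```hcl
resource "aws_instance" "web" {
  count         = 3
  ami           = var.ami_id   # assumed variable
  instance_type = "t2.micro"

  tags = {
    # Resolves to Dev-Server-1, uat-Server-2, etc. per workspace
    Name = "${terraform.workspace}-Server-${count.index + 1}"
  }
}
```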
--- ================================================ FILE: Day 13 AWS-Terraform-Part-4/README.md ================================================ ![Untitled design](https://github.com/user-attachments/assets/58f96a76-cbc0-4ba5-ae0c-41e6f85c9b2b) # Terraform Day 5: Enabling TF_LOG and Working with Sensitive Information ## Overview In this session, we explore how to enable logging in Terraform using environment variables, how to handle sensitive information such as passwords, and how to integrate AWS Secrets Manager for securely storing sensitive data. We also demonstrate deploying an RDS MySQL instance with Terraform. ## Topics Covered 1. **Enabling TF_LOG for Debugging** 2. **Working with Sensitive Information** 3. **Using AWS Secrets Manager with Terraform** 4. **Deploying RDS MySQL Instance** ## Enabling TF_LOG Terraform provides the `TF_LOG` environment variable for controlling log verbosity. You can choose from different levels like `TRACE`, `DEBUG`, `INFO`, `WARN`, and `ERROR`. ### Steps to Enable TF_LOG 1. **Set TF_LOG for detailed trace logs:** ```powershell $env:TF_LOG = "TRACE" terraform destroy ``` 2. **Set TF_LOG for error-level logging:** ```powershell $env:TF_LOG = "ERROR" terraform destroy ``` 3. **Write logs to a file:** ```powershell $env:TF_LOG = "TRACE" $env:TF_LOG_PATH = "./logs/terraform.log" terraform destroy ``` ## Handling Sensitive Information When working with sensitive data like usernames and passwords, it is important to avoid hardcoding them in the Terraform scripts. Instead, use variables marked as `sensitive`. ### Example In your `variables.tf`: ```hcl variable "username" { type = string sensitive = true } variable "password" { type = string sensitive = true } ``` ### Storing Passwords Securely with AWS Secrets Manager To securely store and retrieve sensitive information like passwords, you can use AWS Secrets Manager. 1. **Generate a random password:** ```hcl resource "random_password" "master" { length = 16 special = true override_special = "_!%^" } ``` 2. **Store the password in AWS Secrets Manager:** ```hcl resource "aws_secretsmanager_secret" "password" { name = "test-db-password" } resource "aws_secretsmanager_secret_version" "password" { secret_id = aws_secretsmanager_secret.password.id secret_string = random_password.master.result } ``` 3. **Retrieve the password when deploying RDS:** ```hcl data "aws_secretsmanager_secret_version" "password" { secret_id = aws_secretsmanager_secret.password.id } resource "aws_db_instance" "default" { identifier = "testdb" allocated_storage = 10 storage_type = "gp2" engine = "mysql" engine_version = "5.7" instance_class = "db.t2.medium" username = "dbadmin" password = data.aws_secretsmanager_secret_version.password.secret_string publicly_accessible = true db_subnet_group_name = aws_db_subnet_group.default.id } ``` ## Deploying RDS MySQL Instance ### Steps: 1. **Create a subnet group:** ```hcl resource "aws_db_subnet_group" "default" { name = "main" subnet_ids = [ aws_subnet.subnet1-public.id, aws_subnet.subnet2-public.id, ] tags = { Name = "My DB subnet group" } } ``` 2. **Deploy the RDS instance:** ```hcl resource "aws_db_instance" "default" { identifier = "testdb" allocated_storage = 10 engine = "mysql" engine_version = "5.7" instance_class = "db.t2.medium" name = "mydb" username = "dbadmin" password = data.aws_secretsmanager_secret_version.password.secret_string publicly_accessible = true db_subnet_group_name = aws_db_subnet_group.default.id } ``` ### Connecting to RDS via MySQL Workbench: 1. 
In AWS Console, go to **RDS > Databases > testdb** and copy the **endpoint**. 2. In **MySQL Workbench**, use: - Hostname: `` - Username: `dbadmin` - Password: Fetch from **AWS Secrets Manager**. ### Destroy the Infrastructure After testing, remember to clean up: ```bash terraform destroy ``` ## Interview Tip: Handling Sensitive Information When asked how to handle sensitive information in Terraform, you can explain that Terraform can integrate with AWS Secrets Manager to securely store and retrieve sensitive data. Sensitive variables should be defined in Terraform to avoid exposing sensitive information directly in the code. --- This README provides an overview of how to enable logging, securely manage sensitive information, and deploy an RDS MySQL instance using Terraform. ================================================ FILE: Day 14 AWS-Terraform-Functions-1/README.md ================================================ # Terraform Functions Part: 1 ![Thumb](https://github.com/user-attachments/assets/69bc2680-9ffe-4852-a7f0-f2b9ed8496c5) This repository demonstrates the efficient use of Terraform functions to manage infrastructure as code without duplicating resources. The focus is on creating modular, scalable, and maintainable Terraform configurations. ## Overview In this project, we will utilize Terraform functions and techniques to create a cloud infrastructure with multiple instances and subnets efficiently. We aim to minimize duplication in our code by using various Terraform functionalities such as `count`, `for_each`, `locals`, and dynamic blocks. ### Key Objectives - Clone the repository. - Streamline Terraform configuration files by removing unnecessary variables and resources. - Implement best practices for variable management and resource creation. ## Repository Structure - **main.tf**: Main configuration file containing resource definitions. - **variables.tf**: File for variable definitions. - **terraform.tfvars**: File for variable values. - **locals.tf**: File for local variables. - **subnet.tf**: File dedicated to managing subnet resources. - **routing_table.tf**: File for route table configurations. - **sg.tf**: File for security group configurations. ## Step-by-Step Tasks ### 1. Clone Repository Start by cloning the repository to your local environment. ### 2. Clean Up Terraform Files #### variables.tf - **Remove**: - Access Key and Secret Key - AMI - Internet Gateway (IGW) - All CIDR and Subnet entries - **Keep**: - Availability Zones (AZs) - Environment (ENV) - **Define Variables**: - Create a variable for `Public_cidr_block` to manage the creation of 6 subnets (3 private and 3 public). - Define `Private_cidr_block`. #### terraform.tfvars - Copy all relevant variables from `variables.tf` and paste them into `terraform.tfvars`. - **Remove** routing table configurations to let them inherit the VPC name. ### 3. Modify main.tf - **Remove** Access Key and Secret Key entries. - **Paste** remote backend configuration. - **Update VPC Tags**: Instead of passing values for each tag, utilize `locals` for common tag values. ### 4. Create locals.tf - Define local variables for common tag values. - Access local variables in the VPC configuration using the appropriate syntax. ### 5. Update Subnet Configurations #### Public Subnets - Remove additional public subnets (subnet 2 and 3). - Use `count = 3` to create the necessary number of public subnets. - Utilize the `element` function to reference specific CIDR blocks based on the count index. 
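The pattern just described looks roughly like this; variable names follow the README's spelling, and the repository's own `subnet.tf` (shown later in this document) is the authoritative version:

```hcl
resource "aws_subnet" "public" {
  count             = length(var.public_cidr_block)
  vpc_id            = aws_vpc.default.id
  cidr_block        = element(var.public_cidr_block, count.index)
  availability_zone = element(var.azs, count.index)

  tags = {
    Name = "${var.vpc_name}-public-subnet-${count.index + 1}"
  }
}
```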
#### Private Subnets - Rename resources to reflect they are private. - Adjust tags accordingly. ### 6. Route Tables Configuration - Define separate route tables for public and private subnets. - **Comment Out** route table associations temporarily. - Use `terraform plan` to preview subnet configurations. ### 7. Organize Subnets into subnet.tf - Move all subnet resources to `subnet.tf`. - Use `count.index + 1` to manage subnet indexing dynamically. ### 8. Create routing_table.tf - Move all route table blocks to this file. - Address subnet ID issues by ensuring the correct variable references. - Introduce Splat syntax for managing multiple subnet associations. ### 9. Dynamic Security Group Management #### sg.tf - Copy necessary configurations from `main.tf` into `sg.tf`. - Add ports 443 and 22 to the security group. - Implement dynamic ingress rules by creating a `service_ports` variable. - Populate this variable with values for multiple ports: `["80", "8080", "443", "8443", "22", "3306", "1433"]`. ### 10. Finalization - Run `terraform fmt` to format the configuration files. - Execute `terraform plan` and `terraform apply` to validate and deploy the infrastructure. - Check inbound and outbound rules to ensure proper configuration. ## Conclusion By following these steps and utilizing Terraform functions, we can efficiently manage our cloud infrastructure with minimal duplication and improved scalability. This project serves as a template for creating robust Terraform configurations. --- ================================================ FILE: Day 14 AWS-Terraform-Functions-1/RTA.tf ================================================ resource "aws_route_table_association" "public-subnets" { # count = 3 count = length(var.public_cird_block) subnet_id = element(aws_subnet.public-subnet.*.id, count.index) route_table_id = aws_route_table.public-route-table.id } resource "aws_route_table_association" "private-subnets" { # count = 3 count = length(var.private_cird_block) subnet_id = element(aws_subnet.private-subnet.*.id, count.index) route_table_id = aws_route_table.private-route-table.id } ================================================ FILE: Day 14 AWS-Terraform-Functions-1/locals.tf ================================================ locals { Owner = "Prod-Team" costcenter = "Hyd-8080" TeamDL = "Saikiran.pinapathruni18@gmail.com" } ================================================ FILE: Day 14 AWS-Terraform-Functions-1/main.tf ================================================ #This Terraform Code Deploys Basic VPC Infra. 
provider "aws" { region = var.aws_region } terraform { backend "s3" { bucket = "workspacesbucket01" key = "function.tfstate" region = "us-east-1" } } resource "aws_vpc" "default" { cidr_block = var.vpc_cidr enable_dns_hostnames = true tags = { Name = "${var.vpc_name}" Owner = local.Owner costcenter = local.costcenter TeamDL = local.TeamDL environment = "${var.environment}" } } resource "aws_internet_gateway" "default" { vpc_id = aws_vpc.default.id tags = { Name = "${var.vpc_name}-IGW" } } resource "aws_route_table" "public-route-table" { vpc_id = aws_vpc.default.id route { cidr_block = "0.0.0.0/0" gateway_id = aws_internet_gateway.default.id } tags = { Name = "${var.vpc_name}-Public-RT" Owner = local.Owner costcenter = local.costcenter TeamDL = local.TeamDL environment = "${var.environment}" } } resource "aws_route_table" "private-route-table" { vpc_id = aws_vpc.default.id route { cidr_block = "0.0.0.0/0" gateway_id = aws_internet_gateway.default.id } tags = { Name = "${var.vpc_name}-private-RT" Owner = local.Owner costcenter = local.costcenter TeamDL = local.TeamDL environment = "${var.environment}" } } # data "aws_ami" "my_ami" { # most_recent = true # #name_regex = "^sai" # owners = ["232323232323232323"] # } # resource "aws_instance" "web-1" { # ami = "${data.aws_ami.my_ami.id}" # #ami = "ami-0d857ff0f5fc4e03b" # availability_zone = "us-east-1a" # instance_type = "t2.micro" # key_name = "LaptopKey" # subnet_id = "${aws_subnet.subnet1-public.id}" # vpc_security_group_ids = ["${aws_security_group.allow_all.id}"] # associate_public_ip_address = true # tags = { # Name = "Server-1" # Env = "Prod" # Owner = "sai" # CostCenter = "ABCD" # } # user_data = <<- EOF # #!/bin/bash # sudo apt-get update # sudo apt-get install -y nginx # echo "

# <h1>${var.env}-Server-1</h1>
" | sudo tee /var/www/html/index.html # sudo systemctl start nginx # sudo systemctl enable nginx # EOF # } # resource "aws_dynamodb_table" "state_locking" { # hash_key = "LockID" # name = "dynamodb-state-locking" # attribute { # name = "LockID" # type = "S" # } # billing_mode = "PAY_PER_REQUEST" # } ##output "ami_id" { # value = "${data.aws_ami.my_ami.id}" #} #!/bin/bash # echo "Listing the files in the repo." # ls -al # echo "+++++++++++++++++++++++++++++++++++++++++++++++++++++" # echo "Running Packer Now...!!" # packer build -var=aws_access_key=AAAAAAAAAAAAAAAAAA -var=aws_secret_key=BBBBBBBBBBBBB packer.json # echo "+++++++++++++++++++++++++++++++++++++++++++++++++++++" # echo "Running Terraform Now...!!" # terraform init # terraform apply --var-file terraform.tfvars -var="aws_access_key=AAAAAAAAAAAAAAAAAA" -var="aws_secret_key=BBBBBBBBBBBBB" --auto-approve ================================================ FILE: Day 14 AWS-Terraform-Functions-1/sg.tf ================================================ resource "aws_security_group" "allow_all" { name = "${var.vpc_name}-allow-all" description = "Allow all Inbound traffic" vpc_id = aws_vpc.default.id # Ingress rule block with dynamic iteration over service_ports dynamic "ingress" { for_each = var.ingress_value content { from_port = ingress.value to_port = ingress.value protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] # Allow traffic from any IP } } # Egress rule block egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] # Allow outbound traffic to any IP } # Tags block tags = { Name = "${var.vpc_name}-allow-all" Owner = local.Owner costcenter = local.costcenter TeamDL = local.TeamDL environment = var.environment } } ================================================ FILE: Day 14 AWS-Terraform-Functions-1/subnet.tf ================================================ resource "aws_subnet" "public-subnet" { #count = 3 #012 count = length(var.public_cird_block) vpc_id = aws_vpc.default.id cidr_block = element(var.public_cird_block, count.index + 1) availability_zone = element(var.azs, count.index) tags = { Name = "${var.vpc_name}-public-subnet-${count.index + 1}" Owner = local.Owner costcenter = local.costcenter TeamDL = local.TeamDL environment = "${var.environment}" } } resource "aws_subnet" "private-subnet" { # count = 3 #012 count = length(var.private_cird_block) vpc_id = aws_vpc.default.id cidr_block = element(var.private_cird_block, count.index + 1) availability_zone = element(var.azs, count.index) tags = { Name = "${var.vpc_name}-private-subnet-${count.index + 1}" Owner = local.Owner costcenter = local.costcenter TeamDL = local.TeamDL environment = "${var.environment}" } } ================================================ FILE: Day 14 AWS-Terraform-Functions-1/terraform.tfvars ================================================ aws_region = "us-east-1" vpc_cidr = "172.18.0.0/16" vpc_name = "DevSecOps-Vpc" key_name = "SecOps-Key" azs = ["us-east-1a", "us-east-1b", "us-east-1c"] public_cird_block = ["172.18.1.0/24", "172.18.2.0/24", "172.18.3.0/24", "172.18.4.0/24", "172.18.5.0/24"] private_cird_block = ["172.18.10.0/24", "172.18.20.0/24", "172.18.30.0/24", "172.18.40.0/24", "172.18.50.0/24"] environment = "Prod" ingress_value = ["80", "8080", "443", "8443", "22", "3306", "1900", "1443"] ================================================ FILE: Day 14 AWS-Terraform-Functions-1/variables.tf ================================================ variable "aws_region" {} variable "vpc_cidr" {} variable "vpc_name" {} variable "key_name" {} 
variable "azs" {} variable "public_cird_block" {} variable "private_cird_block" {} variable "environment" {} variable "ingress_value" {} ================================================ FILE: Day 15 AWS-Terraform-Functions-2/README.md ================================================ ![a-futuristic-3d-scene-featuring-an-astronaut-sitti-JmnDsV37TdiaW1tmnfgktg-hPykpO-xSY6aYtvVHr0G_g](https://github.com/user-attachments/assets/5bd8031e-c1a2-4305-b371-b7551ad62055) # Terraform Functions - 2 This repository demonstrates the usage of various Terraform functions such as `lookup`, `count`, and `condition`, along with implementing file provisioners (`remote-exec`, `local-exec`). The goal is to dynamically manage infrastructure using variables, conditional logic, and provisioning tasks. ## Project Structure - **`ec2.tf`**: Main file to create EC2 instances. - **`variables.tf`**: Define variables such as AMIs, instance type, keyname, and environment. - **`terraform.tfvars`**: Assign values to variables such as AMI IDs for different regions and the environment. - **`null.tf`**: Implements `null_resource` to run scripts without recreating instances. - **`userdata.sh`**: Script to install software on EC2 instances after they are created. ## Terraform Functions Overview ### 1. AMI Lookup The `lookup` function helps dynamically retrieve AMI IDs based on the region. Example: ```hcl variable "amis" { type = map(string) } # In terraform.tfvars amis = { us-east-1 = "ami-0abcd1234efgh5678" us-east-2 = "ami-0wxyz1234mnop5678" } # In ec2.tf ami = lookup(var.amis, var.aws_region) ``` This setup allows us to deploy EC2 instances using region-specific AMIs. For example, AMIs in `us-east-1` may not work in `us-east-2`. ### 2. Instance Count with Subnet Mapping We declare three subnets, and each subnet must map to one EC2 instance. By using `count`, we can define how many instances to create based on the length of subnets. ```hcl count = length(var.public_cidr_block) subnet_id = element(var.subnets, count.index) ``` ### 3. Conditional Deployment Using a condition, we can decide how many instances to create based on the environment. ```hcl count = var.environment == "Prod" ? 3 : 1 ``` This means if the environment is `Prod`, 3 instances are created; otherwise, 1 instance is created. ## Provisioners ### File Provisioning with `remote-exec` We use provisioners to apply scripts after EC2 instances are created without recreating the instances. - **User Data**: Initially, the user data script is passed during instance creation. - **Provisioners**: To avoid recreating instances for every change, we use `null_resource` to run scripts or commands on existing instances. Example: ```hcl resource "null_resource" "cluster" { count = length(var.public_cidr_block) provisioner "remote-exec" { connection { type = "ssh" user = "ec2-user" private_key = file("path/to/key.pem") host = aws_instance.example.public_ip } inline = [ "sudo bash /tmp/script.sh" ] } } ``` ### Tainting Resources If we need to recreate a resource, we can use Terraform's `taint` feature. Marking a resource as "tainted" forces Terraform to recreate it during the next apply. Example: ```bash terraform taint null_resource.cluster ``` This marks the resource as needing recreation, allowing the new script to be applied without affecting the rest of the infrastructure. 
## Commands ```bash terraform init # Initialize Terraform terraform fmt # Format the code terraform validate # Validate the configuration terraform apply # Apply the configuration ``` ### Taint Example ```bash terraform taint null_resource.cluster terraform apply ``` ## Next Steps - Explore **Terraform Modules** for better structuring and reuse of code. ## Interview Tips **What is taint in Terraform?** Taint marks a resource for recreation. You can manually taint a resource using the `terraform taint` command, causing Terraform to destroy and recreate it during the next `apply`. Conversely, you can "untaint" a resource to prevent it from being recreated. --- Stay tuned for the next session where we’ll dive into **Terraform Modules**! ================================================ FILE: Day 15 AWS-Terraform-Functions-2/private-ec2.tf ================================================ resource "aws_instance" "private-server" { # count = length(var.private_cird_block) count = var.environment == "Prod" ? 3 : 1 ami = lookup(var.amis, var.aws_region) instance_type = "t2.micro" key_name = var.key_name subnet_id = element(aws_subnet.private-subnet.*.id, count.index + 1) vpc_security_group_ids = ["${aws_security_group.allow_all.id}"] # associate_public_ip_address = true tags = { Name = "${var.vpc_name}-Private-Server-${count.index + 1}" Owner = local.Owner costcenter = local.costcenter TeamDL = local.TeamDL environment = "${var.environment}" } user_data = <<-EOF #!/bin/bash sudo apt update sudo apt install nginx -y sudo apt install git -y sudo git clone https://github.com/saikiranpi/SecOps-game.git sudo rm -rf /var/www/html/index.nginx-debian.html sudo cp SecOps-game/index.html /var/www/html/index.html echo "

<h1>${var.vpc_name}-Private-Server-${count.index + 1}</h1>
" >> /var/www/html/index.html sudo systemctl start nginx sudo systemctl enable nginx EOF } ================================================ FILE: Day 15 AWS-Terraform-Functions-2/public-ec2.tf ================================================ resource "aws_instance" "public-server" { # count = length(var.public_cird_block) count = var.environment == "Prod" ? 3 : 1 ami = lookup(var.amis, var.aws_region) instance_type = "t2.micro" key_name = var.key_name subnet_id = element(aws_subnet.public-subnet.*.id, count.index + 1) vpc_security_group_ids = ["${aws_security_group.allow_all.id}"] associate_public_ip_address = true tags = { Name = "${var.vpc_name}-Public-Server-${count.index + 1}" Owner = local.Owner costcenter = local.costcenter TeamDL = local.TeamDL environment = "${var.environment}" } } ================================================ FILE: Day 15 AWS-Terraform-Functions-2/terraform.tfvars ================================================ aws_region = "us-east-1" vpc_cidr = "172.18.0.0/16" vpc_name = "DevSecOps-Vpc" key_name = "SecOps-Key" azs = ["us-east-1a", "us-east-1b", "us-east-1c"] public_cird_block = ["172.18.1.0/24", "172.18.2.0/24", "172.18.3.0/24"] private_cird_block = ["172.18.10.0/24", "172.18.20.0/24", "172.18.30.0/24"] environment = "Dev" ingress_value = ["80", "8080", "443", "8443", "22", "3306", "1900", "1443"] amis = { us-east-1 = "ami-0866a3c8686eaeeba" us-east-2 = "ami-0ea3c35c5c3284d82" } ================================================ FILE: Day 15 AWS-Terraform-Functions-2/txt.tf ================================================ # user_data = <<-EOF # #!/bin/bash # sudo apt update # sudo apt install nginx -y # sudo apt install git -y # sudo git clone https://github.com/saikiranpi/SecOps-game.git # sudo rm -rf /var/www/html/index.nginx-debian.html # sudo cp SecOps-game/index.html /var/www/html/index.html # echo "

# <h1>${var.vpc_name}-private-Server-${count.index + 1}</h1>
" >> /var/www/html/index.html # sudo systemctl start nginx # sudo systemctl enable nginx # EOF # provisioner "file" { # source = "user_data.sh" # destination = "/tmp/user_data.sh" # connection { # type = "ssh" # user = "ubuntu" # private_key = file("LaptopKey.pem") # host = element(aws_instance.public-servers.*.public_ip, count.index) # } # } # provisioner "remote-exec" { # inline = [ # "sudo chmod 777 /tmp/userdata.sh", # "sudo /tmp/userdata.sh", # "sudo apt update", # "sudo apt install jq unzip -y", # ] # connection { # type = "ssh" # user = "ubuntu" # private_key = file("SecOps-Key.pem") # host = element(aws_instance.public-server.*.public_ip, count.index) # } # } ================================================ FILE: Day 15 AWS-Terraform-Functions-2/user-data.sh ================================================ #!/bin/bash sudo apt update sudo apt install nginx -y sudo apt install git -y sudo git clone https://github.com/saikiranpi/SecOps-game.git sudo rm -rf /var/www/html/index.nginx-debian.html sudo cp SecOps-game/index.html /var/www/html/index.html echo "

<h1>${var.vpc_name}-public-Server-${count.index + 1}</h1>
" >> /var/www/html/index.html sudo systemctl start nginx sudo systemctl enable nginx #testing #testing #restng ================================================ FILE: Day 15 AWS-Terraform-Functions-2/variable.sh ================================================ variable "aws_region" {} variable "vpc_cidr" {} variable "vpc_name" {} variable "key_name" {} variable "azs" {} variable "public_cird_block" {} variable "private_cird_block" {} variable "environment" {} variable "ingress_value" {} variable "amis" {} ================================================ FILE: Day 16 AWS-Terraform-Part-6 Modules-Part-1/README.md ================================================ # Terraform Project: Modularized Infrastructure Setup ![a-vibrant-and-energetic-youtube-thumbnail-with-a-s-giqGaHBwT7yCh792W1jUEQ-NkAg-GSlQvynsgO8mL7hAw](https://github.com/user-attachments/assets/ca2885eb-cae5-4a18-90c1-461c349a7fb1) This repository demonstrates how to modularize Terraform code for a scalable, manageable infrastructure deployment across multiple environments (e.g., dev, QA, production). The key idea is to break down the Terraform code into modules for various infrastructure components like networking, compute, security groups, load balancers, and NAT gateways. This modular approach minimizes manual changes and overhead when switching between environments. ## Problem Overview In typical infrastructure deployments, environments like dev, QA, and production might have different requirements (e.g., dev doesn’t need a load balancer or Route53). Managing these differences with a single Terraform codebase can lead to manual changes, which is inefficient. By breaking the code into modules, you can dynamically include/exclude components based on environment requirements, making the infrastructure easier to manage. ## Solution We break the infrastructure into the following modules: - **Network**: VPC, subnets, routing - **Compute**: EC2 instances (public and private) - **Security Groups (SG)**: For securing VPC resources - **NAT**: NAT gateway for private instance internet access - **ELB**: Elastic Load Balancers (optional) - **IAM**: Identity and Access Management ### Folder Structure ``` /modules ├── network ├── compute ├── sg ├── nat ├── elb ├── iam /development ├── main.tf ├── variables.tf ├── terraform.tfvars └── ec2.tf /production ├── infrastructure.tf ├── variables.tf ├── terraform.tfvars ``` ## Step-by-Step Setup ### 1. Create Network Module 1. **Files in `/modules/network`:** - `vpc.tf`: Defines the VPC and internet gateway. - `public_subnets.tf`: Public subnets configuration. - `private_subnets.tf`: Private subnets configuration. - `routing.tf`: Routing tables for public and private subnets. - `variables.tf`: Define necessary input variables. - `outputs.tf`: Export important values (e.g., VPC ID, subnet IDs). - `locals.tf`: Set local values for environment or naming conventions. 2. **Import Network Module in Development:** - In `/development/infra.tf`, import the network module: ```hcl module "dev_vpc_1" { source = "../modules/network" # Specify the necessary variables vpc_cidr = var.vpc_cidr ... } ``` 3. **Deploy the Network Module:** ```bash cd development terraform init terraform fmt terraform validate terraform apply ``` ### 2. Configure for Production - **Copy Files**: Copy the infrastructure setup from `development` to `production`. - Ensure variable values are updated (e.g., CIDR blocks should not overlap between environments). 
- **Customize Values**: Modify `terraform.tfvars` and `variables.tf` in the `production` folder to match production settings (e.g., CIDR range, environment = "production"). ```bash cd production terraform init terraform fmt terraform apply ``` ### 3. Add Security Groups Module 1. **Create `/modules/sg`:** - `sg.tf`: Security group configurations. - `variables.tf`: Define necessary input variables. - `outputs.tf`: Export security group IDs. 2. **Import in Development:** - Add the security group module to `development`'s `infra.tf`: ```hcl module "dev_sg_1" { source = "../modules/sg" vpc_id = module.dev_vpc_1.vpc_id ... } ``` 3. **Deploy SG Module:** ```bash cd development terraform get terraform apply ``` 4. **Replicate for Production**: Similarly, copy the security group module to `production`, making necessary adjustments. ### 4. EC2 (Compute) Module 1. **Create `/modules/compute`:** - `private_ec2.tf`: For private EC2 instances. - `public_ec2.tf`: For public EC2 instances. - `variables.tf`: Define EC2-related variables. - `outputs.tf`: Export EC2 instance IDs or other resources. 2. **Deploy in Development**: Add EC2 configuration in `development/ec2.tf`, referencing the module: ```hcl module "dev_compute_1" { source = "../modules/compute" vpc_id = module.dev_vpc_1.vpc_id ... } ``` 3. **Replicate for Production**: Follow the same process for production, customizing as needed. ### 5. NAT Gateway Module 1. **Create `/modules/nat`:** - `natgw.tf`: Defines the NAT gateway. - `variables.tf`: Input variables like subnet ID. - `outputs.tf`: Export NAT gateway ID. 2. **Deploy NAT in Development and Production**: - Ensure the NAT module is added in both environments, with appropriate changes in `terraform.tfvars`. ### Final Steps - **Destroy**: To clean up, run the following in both environments: ```bash cd production terraform destroy -auto-approve cd development terraform destroy -auto-approve ``` ## Key Terraform Commands - **Format and Validate**: ```bash terraform fmt terraform validate ``` - **Initialize**: ```bash terraform init ``` - **Apply Changes**: ```bash terraform apply ``` - **Check State**: ```bash terraform state list ``` ## Notes on Output Values The `output.tf` files in each module play a crucial role in passing data between modules. For example, the VPC module exports the `vpc_id`, which is consumed by the Security Group module and EC2 module. This modular approach helps ensure that all components are properly linked, and their dependencies are clear. ## Conclusion This repository demonstrates how to efficiently manage and deploy infrastructure across multiple environments using Terraform modules. By breaking infrastructure code into reusable modules, we reduce complexity, manual work, and potential errors, leading to a more scalable and maintainable solution. 
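To make the output-passing flow concrete, a short sketch under the assumption that the network module names its VPC resource `aws_vpc.main` (the module names follow this README):

```hcl
# modules/network/outputs.tf
output "vpc_id" {
  value = aws_vpc.main.id   # assumes the module's VPC resource is called "main"
}

# development/infra.tf -- the SG module consumes the value the network module exports
module "dev_sg_1" {
  source = "../modules/sg"
  vpc_id = module.dev_vpc_1.vpc_id
}
```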
================================================ FILE: Day 17 AWS-Terraform-Full-Course/README.md ================================================ ![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-f__YY0bwSie2OkYBNrSyeQ-GV6ykntrRNKLu-6yjr3VXg](https://github.com/user-attachments/assets/64a1a02f-c8c8-4248-876e-685505d76e4b) # Day 17 Terraform Full Course Link here : https://youtu.be/bqvdpa649nU?si=EQJNm-VPDgypTkwc ================================================ FILE: Day 18 AWS-Terraform-Part-8 TerraformCloud/README.md ================================================ ![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-f__YY0bwSie2OkYBNrSyeQ-GV6ykntrRNKLu-6yjr3VXg](https://github.com/user-attachments/assets/022eb8c9-67e4-4f71-b01c-2591e65ea62d) # Day 18 AWS-Terraform-Part-8 TerraformCloud - Covered under terraform full course. # TimeStamp Link : https://youtu.be/bqvdpa649nU?list=PLMj5OfHGyNU81vI77YRFg9WWvbGKqbyXD&t=23642 ================================================ FILE: Day 19 AWS-Terraform-Part-9 GitLab-Pipeline/README.md ================================================ # Day 19 Terraform Modules with GitLab ![a-3d-render-of-a-dynamic-and-energetic-thumbnail-w-MdC5XT42QNySa2zI6fo6Sw-mIHViexFR9C60umSgtcnBg](https://github.com/user-attachments/assets/aa7fdce1-98ee-448a-9c96-343b0fbdba0d) Complete Source file here : https://gitlab.com/saikiranpi1/modules-gitlab.git ```markdown # Terraform - GitLab Integration This repository contains instructions and YAML configurations for integrating Terraform with GitLab CI/CD, allowing for efficient infrastructure management and deployment. ## Table of Contents - [Overview](#overview) - [Getting Started](#getting-started) - [GitLab CI Configuration](#gitlab-ci-configuration) - [Using tfenv](#using-tfenv) - [Installing GitLab Runner](#installing-gitlab-runner) - [Deploying an Ubuntu Server](#deploying-an-ubuntu-server) - [Cleaning Up](#cleaning-up) - [Troubleshooting](#troubleshooting) - [Conclusion](#conclusion) ## Overview This project demonstrates how to set up Terraform with GitLab CI/CD using YAML for configuration. We will focus on tasks such as pushing code to GitLab, setting up CI/CD variables, and deploying infrastructure. ## Getting Started 1. **Create a new GitLab project**: - Go to your GitLab dashboard and click on "New Project." - Select "Public" and create the project. 2. **Push your Terraform code**: ```bash git init git add . git commit -m "Infra" git remote add origin git push origin master ``` ## GitLab CI Configuration 1. **Access CI/CD Settings**: - Navigate to your project, then go to `Settings` > `CI/CD`. 2. **Upload Secure Files**: - Under the "Secure Files" section, upload your PEM file. 3. **Add CI/CD Variables**: - Scroll to "Variables" and click "Add." - Add the following masked variables: - `AWS_ACCESS_KEY` - `AWS_SECRET_KEY` 4. **Set Up a New GitLab Runner**: - Navigate to `Runners` and select "New project runner." - Choose "Linux" and set the following: - **Tags**: `terraform,AWS` - **Description**: A brief description of your runner. - **Timeout**: 600 seconds. - Click "Create Runner." ## Using tfenv To manage different Terraform versions easily, we will use `tfenv`. Follow these steps: 1. **Install tfenv**: - Follow the instructions available on the [tfenv GitHub page](https://github.com/tfutils/tfenv). 2. **Install the Required Terraform Version**: ```bash sudo apt install unzip tfenv list-remote # Lists all available versions tfenv install 1.5.5 # Installs the specified version ``` ## Installing GitLab Runner 1. 
## Installing GitLab Runner

1. **Install GitLab Runner**:
   - Open your console and follow the installation commands provided on the [GitLab Runner page](https://docs.gitlab.com/runner/install/).

2. **Register the Runner**:
   - Enter the token and name for the runner, and choose "shell" as the executor.

3. **Modify Your Code and Push**:
   - Make minor changes to your code and push them. This should trigger the CI/CD pipeline.

4. **Run Commands as gitlab-runner**:
   ```bash
   cat /etc/passwd                               # confirm the gitlab-runner user exists
   sudo rm -r /home/gitlab-runner/.bash_logout
   su - gitlab-runner                            # switch to the gitlab-runner user
   ```

## Deploying an Ubuntu Server

Log into the server and deploy the necessary infrastructure using your Terraform scripts.

## Cleaning Up

To destroy the infrastructure, run:

```bash
terraform destroy -auto-approve
```

You can use **Checkov**, a free tool, to scan your Terraform code for security issues:

```bash
apt install -y python3-pip && pip3 install checkov
checkov -d .   # scan the Terraform code in the current directory
```

## Troubleshooting

If you encounter errors:

- Check the GitLab CI/CD pipeline logs for error messages.
- Google any error codes for potential solutions.

## Conclusion

This setup provides a streamlined approach to managing infrastructure with Terraform in a GitLab CI/CD environment. Feel free to customize the configurations as needed to fit your specific requirements. For further assistance, refer to the [official Terraform documentation](https://www.terraform.io/docs/index.html) or [GitLab CI/CD documentation](https://docs.gitlab.com/ee/ci/).

================================================
FILE: Day 20 AWS-Packer/README.md
================================================

# Day 20 AWS-Packer

![a-vibrant-and-eye-catching-youtube-thumbnail-with--CWD0OBoeRVO1Jw5QXUd3iw-PZaqUMYdQ0eS9Tv6GFm_VQ](https://github.com/user-attachments/assets/5cc2de07-938e-4197-8e07-c99bdcdd0180)

Here's an outline to help you implement and visualize this process:

### 1. **Introduction to Packer and Ansible**

- **Packer**: A tool to create images for multiple platforms from a single source configuration.
- **Ansible**: A configuration management tool used for automation, specifically post-deployment configuration.

### 2. **Why Ansible?**

After deploying infrastructure with tools like **Terraform**, configuration management is needed for more specific setups on the deployed resources. Here's where **Ansible** comes in:

- **Controller-Client Model**:
  - **Controller**: The machine where Ansible commands are run.
  - **Clients** (Nodes): Machines receiving configuration commands from the controller.
- **No Client Software Needed**: Ansible only requires SSH and Python on the nodes, simplifying the setup.

### 3. **Diagram of Ansible Setup**

For a visual, imagine:

- A **controller node** communicating with **client nodes** using SSH.
- Commands are sent from the controller, received by the nodes, and executed without needing any additional software on the client side.

### 4. **Task: AMI Creation and Deployment**

1. **Create an AMI Image** using Packer for a base instance.
2. **Deploy an Instance** with this AMI.
3. Verify functionality, ensuring services like Node Exporter (on port 9100) are working.

### 5. **Steps to Install and Configure Ansible on Deployed Instances**

- **Install Ansible**:
  - Refer to the [Ansible documentation](https://docs.ansible.com/) for the latest installation steps.
- **Configuration File**:
  - Run `sudo ansible-config init --disabled > ansible.cfg` in `/etc/ansible` to generate the config file.
- **Update Ansible Configurations**:
  - Open the file and use `Ctrl+W` (search in nano) to find and configure:
    - Set `host_key_checking = false`.
    - Define the `remote_user` as `ansibleadmin`.
    - Define `private_key_file` as `/home/ansibleadmin/key.pem` (ensure key permissions are set to read-only, i.e., `chmod 444 key.pem`).

Following these steps will provide a setup ready for deploying configurations across instances effectively using Ansible.

================================================
FILE: Day 21 AWS-Ansible-Part-1/.gitignore
================================================

.terraform.lock.hcl
.terraform/*
6.ansible-playbook-nginx.yml
invfile*

================================================
FILE: Day 21 AWS-Ansible-Part-1/1.provider.tf
================================================

provider "aws" {
  region = var.aws_region
}

terraform {
  required_version = "<= 1.8.5" # Forcing which version of Terraform needs to be used
  required_providers {
    aws = {
      version = "<= 6.0.0" # Forcing which version of the plugin needs to be used.
      source  = "hashicorp/aws"
    }
  }
  backend "s3" {
    bucket = "workspacesbucket01"
    key    = "Ansible.tfstate"
    region = "us-east-1"
    # dynamodb_table = "-terraform-locks"
    encrypt = true
  }
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/10.locals.tf
================================================

# distinct takes a list and returns a new list with any duplicate elements removed.
# toset takes a list, removes any duplicate elements, and discards the ordering of the elements.
locals {
  new_public_subnet_cidrs  = distinct(var.public_subnet_cidrs)
  new_private_subnet_cidrs = distinct(var.private_subnet_cidrs)
  new_environment          = lower(var.environment)
  projid                   = format("%s-%s", lower(var.vpc_name), lower(var.projid))
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/11.localfile_ansible_inventory.tf
================================================

resource "local_file" "ansible-inventory-file" {
  content = templatefile("publicservers.tpl", {
    testserver01    = aws_instance.webservers.0.public_ip
    testserver02    = aws_instance.webservers.1.public_ip
    testserver03    = aws_instance.webservers.2.public_ip
    pvttestserver01 = aws_instance.webservers.0.private_ip
    pvttestserver02 = aws_instance.webservers.1.private_ip
    pvttestserver03 = aws_instance.webservers.2.private_ip
  })
  filename = "${path.module}/invfile"
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/12.localfile_ansible_inventory_yaml.tf
================================================

resource "local_file" "ansible-inventory-file-yaml" {
  content = templatefile("publicservers_yaml.tpl", {
    testserver01    = aws_instance.webservers.0.public_ip
    testserver02    = aws_instance.webservers.1.public_ip
    testserver03    = aws_instance.webservers.2.public_ip
    pvttestserver01 = aws_instance.webservers.0.private_ip
    pvttestserver02 = aws_instance.webservers.1.private_ip
    pvttestserver03 = aws_instance.webservers.2.private_ip
  })
  filename = "${path.module}/invfile.yaml"
}

================================================
FILE: Day 21 AWS-Ansible-Part-1/13.null-local-exec.tf
================================================

resource "null_resource" "webservers" {
  provisioner "local-exec" {
    command = <

================================================
FILE: Day 22 AWS-Ansible-Part-2/README.md
================================================

Paste the following under `[defaults]` in `ansible.cfg`. It looks like this:

gathering = smart
fact_caching_timeout = 86400
fact_caching = redis
fact_caching_prefix = ansible_DevSecOps_Saikiran
fact_caching_connection = PASTE-YOUR-CLIENT(TESTSERVER01)-PUBLICIP-HERE:6379:0
![image](https://github.com/user-attachments/assets/5a3e46dd-2534-4b97-8fff-0a380c747433)

Press `Ctrl+X`, then `Y`, then `Enter` to save, and install the Redis dependencies:

apt update
apt install -y python3-pip
pip3 install redis

On the controller: ansible -i invfile pvt -m setup
On the client (testserver01): redis-cli, then KEYS *

================================================
FILE: Day 23 AWS-Ansible-Part-3/README.md
================================================

# Day 23 AWS-Ansible-Part-3

![a-3d-render-of-a-glowing-ansible-logo-below-the-lo-4QgGoilXQ36n-8iyPqrNXQ-mBzRZGehQfeGRujEXVxpTQ](https://github.com/user-attachments/assets/240ba7fd-de4a-4f64-9e16-f36c61ca5720)

# Complete Code here: https://github.com/saikiranpi/Ansible-Testing

---

# Ansible Jinja2 Templating with MySQL and Nginx Playbooks

This project demonstrates the use of Jinja2 templates in Ansible to deploy and configure services on multiple servers. It includes examples of pre- and post-tasks, as well as how to manage MySQL and Nginx configurations using Ansible playbooks.

## Project Setup

1. **Initialize Ansible Configuration**
   - Navigate to the Ansible directory:
     ```bash
     cd /etc/ansible/
     ```
   - Generate the default Ansible configuration:
     ```bash
     ansible-config init --disabled > ansible.cfg
     ```
   - Modify `ansible.cfg` for common settings:
     ```bash
     nano ansible.cfg
     ```
   - Update the following values:
     ```ini
     host_key_checking = False
     remote_user = ansibleadmin
     private_key_file = /home/ansibleadmin/key.pem
     ```

2. **Initialize and Apply Terraform**
   - Ensure you are in the correct directory and apply the Terraform configuration to set up your infrastructure:
     ```bash
     terraform init
     terraform apply
     ```

## Jinja2 Templating with Nginx

The `nginx-jinja2.yml` playbook uses Jinja2 templates to configure Nginx.

1. Run the Nginx playbook:
   ```bash
   ansible-playbook -i invfile nginx-jinja2.yml -v
   ```
2. Once the playbook is complete, check the public IP of the server to verify that Nginx is running.

## MySQL Setup with Jinja2

This section explains how to install and configure MySQL using Ansible and Jinja2 templates. All variable values are defined within the configuration file.

1. Run the MySQL playbook:
   ```bash
   ansible-playbook -i invfile playbooks/mysql-jinja2.yml
   ```
2. Verify MySQL service status:
   ```bash
   ansible -i invfile pvt -m shell -a "service mysql status"
   ```
3. Once the MySQL service is running, log in to the server and confirm that you can access MySQL databases:
   ```sql
   mysql> SHOW DATABASES;
   ```
4. Add data to the `myflixdb` database:
   ```sql
   USE myflixdb;
   SHOW TABLES;
   SELECT * FROM movies;
   ```

## Pre-Tasks and Post-Tasks

Pre-tasks and post-tasks are used to prepare the system before the main tasks or clean up afterward.

### Example Task: Checking `/tmp` Folder

1. Run the playbook with pre-tasks and post-tasks:
   ```bash
   ansible-playbook -i invfile playbooks/pre_post_tasks.yml
   ```

## Running the Playbooks on Multiple Servers

If you need to run these playbooks across 100 or more servers, Ansible's inventory and parallel execution capabilities make this straightforward. Update your inventory file (`invfile`) with the list of servers, and then run the playbooks with the inventory specified.

## Git Commands for Version Control

1. To push any changes to your playbook repository:
   ```bash
   git push
   ```
2. To pull the latest updates:
   ```bash
   git pull
   ```

## File Structure

```
/etc/ansible/
├── ansible.cfg              # Ansible configuration file
├── invfile                  # Inventory file listing server IPs or hostnames
├── playbooks/
│   ├── nginx-jinja2.yml     # Nginx playbook using Jinja2 template
│   ├── mysql-jinja2.yml     # MySQL playbook using Jinja2 template
│   └── pre_post_tasks.yml   # Playbook with pre-tasks and post-tasks
└── templates/
    ├── nginx.j2             # Nginx configuration template
    └── mysql.j2             # MySQL configuration template
```

## Requirements

- Ansible 2.9+
- Terraform (if using for infrastructure setup)
- SSH access to the target servers

## Usage Notes

This project is suitable for dynamic and scalable server setups. With Jinja2 templating, you can easily customize configurations for different environments or requirements, making it highly adaptable for both development and production needs.

---

================================================
FILE: Day 24 Ansible-Part-4 DynamicInventory_AWX/README.md
================================================

# Ansible Dynamic Inventory and Ansible Tower

This guide explains how to use **Ansible Dynamic Inventory** for managing dynamic environments, such as those involving auto-scaling groups. Unlike static inventory, dynamic inventory adapts to infrastructure changes, such as scaling up or down during load variations.

---

## Overview

### Static vs Dynamic Use Case

- **Static Use Case**: Targets predefined servers without HA (High Availability) or auto-scaling. Servers remain fixed, without scaling up or down.
- **Dynamic Use Case**: Ideal for environments with auto-scaling groups. Servers scale automatically based on load, requiring a dynamic inventory for effective management.

---

## Prerequisites

1. **Install Required Tools**:
   ```bash
   sudo apt-get update
   sudo apt-get install python3-pip jq -y
   sudo pip3 install boto3
   sudo apt install -y awscli
   aws --version
   ```

2. **Configure Ansible**:
   - Navigate to the Ansible configuration directory:
     ```bash
     cd /etc/ansible
     ```
   - Back up the `ansible.cfg` file:
     ```bash
     cp ansible.cfg ansible.cfg.bak
     ```
   - Edit the `ansible.cfg` file and enable the **inventory plugins**:
     ```bash
     nano ansible.cfg
     ```
     Locate `[inventory]` and update as needed.

3. **Create EC2 Plugin File**:
   - Create a new file for the EC2 plugin:
     ```bash
     nano aws_ec2.yaml
     ```
   - Paste the following configuration:
     ```yaml
     plugin: aws_ec2
     regions:
       - us-east-1
     keyed_groups:
       - key: tags
         prefix: tag
       - prefix: instance_type
         key: instance_type
       - key: placement.region
         prefix: aws_region
     ```

---

## Steps to Use Dynamic Inventory

### Deploy Infrastructure First

1. Validate the dynamic inventory:
   ```bash
   ansible-inventory -i /etc/ansible/aws_ec2.yaml --list
   ansible-inventory -i /etc/ansible/aws_ec2.yaml --list | jq
   ```
2. Test connectivity using tags:
   ```bash
   ansible -i /etc/ansible/aws_ec2.yaml tag_terraform_managed_yes -m ping
   ```

### Target Specific Resources

- Set the dynamic inventory path:
  ```bash
  export dynamic='/etc/ansible/aws_ec2.yaml'
  ```
- Example command to run on specific instance types:
  ```bash
  ansible -i $dynamic instance_type_t2_small -m shell -a "df -h"
  ```

### Run Playbooks

1. Create a playbook targeting specific tags:
   - Edit or create the playbook under the `dynamic_inventory` folder:
     ```bash
     nano dynamic_nginx-jinja2.yaml
     ```
   - Update the `hosts` to:
     ```yaml
     hosts: tag_managedby_terraform
     ```
2. Run the playbook:
   ```bash
   ansible-playbook -i $dynamic playbook/dynamic_inventory/dynamic_nginx.yaml
   ```
3. Replace `nginx` with `mysql` or other playbooks as needed.

---

## Git Workflow for Dynamic Inventory

1. Create and switch to a new branch:
   ```bash
   git checkout -b dynamic_inventory
   ```
2. Push changes to remote:
   ```bash
   git push origin dynamic_inventory
   ```
3. Pull updates to the local repo:
   ```bash
   git pull
   ```

---

## Auto-Scaling Integration

When an auto-scaling group provisions instances, the dynamic inventory automatically updates to target the new resources. Verify using:

```bash
ansible-inventory -i aws_ec2.yaml --list
ansible-inventory -i aws_ec2.yaml --graph
```

---

## Example Playbook Execution

1. Modify the number of instances in your Terraform configuration:
   ```bash
   terraform apply -var-file="vars.tfvars" -auto-approve
   ```
2. Run the playbook:
   ```bash
   ansible-playbook -i /etc/ansible/aws_ec2.yaml playbook/dynamic_inventory/dynamic_nginx.yaml
   ```
3. Validate with the updated inventory.

---

## Notes

- Ensure that the `ansible.cfg` file is correctly configured for plugins.
- Use `jq` to format and verify inventory JSON outputs.
- Replace line endings with `LF` if issues arise during playbook execution.

End of Dynamic Inventory.

================================================
FILE: Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/0-steps.sh
================================================

1 sudo certbot certonly --manual --preferred-challenges=dns --key-type rsa \
  --email pinapathruni.saikiran@gmail.com --server https://acme-v02.api.letsencrypt.org/directory \
  --agree-tos -d *.cloudvishwakarma.in
# Certificate is saved at: /etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem
# Key is saved at: /etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem

+++++IF ISSUE+++++
free -m
top
#DRY-RUN
certbot certonly --dry-run --manual --preferred-challenges=dns --key-type rsa \
  --email pinapathruni.saikiran@gmail.com --server https://acme-v02.api.letsencrypt.org/directory \
  --agree-tos -d *.cloudvishwakarma.in
+++++IF ISSUE+++++

2 apt update && apt install -y unzip net-tools

3 wget https://releases.hashicorp.com/vault/1.13.2/vault_1.13.2_linux_amd64.zip
unzip vault_1.13.2_linux_amd64.zip
cp vault /usr/bin/vault
mkdir -p /etc/vault
mkdir -p /var/lib/vault/data
vault version

4 nano config.hcl
cp config.hcl /etc/vault/config.hcl

5 nano /etc/systemd/system/vault.service

6 sudo systemctl daemon-reload
sudo systemctl stop vault
sudo systemctl start vault
sudo systemctl enable vault
sudo systemctl status vault --no-pager

7 #VAULT STATUS FROM CLI
ps -ef | grep -i vault | grep -v grep

8 export VAULT_ADDR=https://kmsvault.cloudvishwakarma.in:8200
echo "export VAULT_ADDR=https://kmsvault.cloudvishwakarma.in:8200" >>~/.bashrc
vault status

9 vault operator init | tee -a /etc/vault/init.file
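# Sketch (not part of the original steps): with the file-storage config below
# (1-config.hcl), Vault starts sealed after every restart. Unseal it with any
# 3 of the 5 unseal keys printed by `vault operator init` (saved in
# /etc/vault/init.file), then log in with the Initial Root Token:
vault operator unseal   # run three times, pasting a different unseal key each time
vault login             # paste the Initial Root Token when prompted
vault status            # "Sealed" should now report false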
================================================
FILE: Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/1-config.hcl
================================================

disable_cache = true
disable_mlock = true
ui = true

listener "tcp" {
  address                  = "0.0.0.0:8200"
  tls_disable              = 0
  tls_cert_file            = "/etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem"
  tls_key_file             = "/etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem"
  tls_disable_client_certs = "true"
}

storage "file" {
  path = "/var/lib/vault/data"
}

api_addr                = "https://kmsvault.cloudvishwakarma.in:8200"
max_lease_ttl           = "10h"
default_lease_ttl       = "10h"
cluster_name            = "vault"
raw_storage_endpoint    = true
disable_sealwrap        = true
disable_printable_check = true

================================================
FILE: Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/2-config-kms.hcl
================================================

disable_cache = true
disable_mlock = true
ui = true

listener "tcp" {
  address                  = "0.0.0.0:8200"
  tls_disable              = 0
  tls_cert_file            = "/etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem"
  tls_key_file             = "/etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem"
  tls_disable_client_certs = "true"
}

storage "s3" {
  bucket = "workspacesbucket01"
}

seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "KMSID here"
  endpoint   = "kms.us-east-1.amazonaws.com"
}

api_addr                = "https://kmsvault.cloudvishwakarma.in:8200"
max_lease_ttl           = "10h"
default_lease_ttl       = "10h"
cluster_name            = "vault"
raw_storage_endpoint    = true
disable_sealwrap        = true
disable_printable_check = true

================================================
FILE: Day 25 HashicorpVault AWSIntegration/HashiCorp_Vault/2-vault.service
================================================

[Unit]
Description=HashiCorp Vault - A tool for managing secrets
Documentation=https://www.vaultproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/vault/config.hcl

[Service]
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/bin/vault server -config=/etc/vault/config.hcl
ExecReload=/bin/kill --signal HUP
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
StartLimitBurst=3
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

================================================
FILE: Day 25 HashicorpVault AWSIntegration/README.md
================================================

# Day 25 HashicorpVault AWSIntegration

Below is a structured GitHub repository content outline and README for the integration of HashiCorp Vault with Ansible:

---

### Repository Structure

```plaintext
HashiCorp-Vault-Ansible-Integration/
├── README.md
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
├── vault/
│   ├── config.hcl
│   ├── config-kms.hcl
│   ├── init.file
├── ansible/
│   ├── playbook.yml
│   ├── vault_secret_retrieve.yml
├── docs/
│   ├── installation_steps.md
│   ├── troubleshooting.md
└── scripts/
    ├── setup_docker.sh
    ├── setup_ssl.sh
```

---

### **README.md**

# HashiCorp Vault Integration with Ansible

This repository demonstrates the integration of **HashiCorp Vault** with **Ansible** for managing secrets in real-world scenarios, specifically focusing on environments where servers need to retrieve sensitive information after unexpected reboots. The solution leverages Terraform for provisioning, AWS KMS for auto-unsealing, and Docker to host Vault.

---

## **Use Case**

A Java application is running on a server. When the server reboots due to a disaster or maintenance:

- The application must securely retrieve sensitive information (e.g., credentials) from a centralized Key Management System (KMS).
- HashiCorp Vault is used for this purpose, ensuring compatibility with both on-premises and cloud environments.

### Why not Ansible Vault?

- **Ansible Vault** is ideal for encrypting sensitive data like API keys or database credentials within playbooks. However, it cannot autonomously retrieve secrets from another server when triggered by events like server reboots.
- **HashiCorp Vault**, combined with AWS KMS, provides auto-unsealing capabilities and centralized secret management.

---

## **Solution Overview**

1. **HashiCorp Vault Setup**:
   - Install Vault on a t2.medium instance.
   - Configure Vault with auto-unsealing using AWS KMS.
   - Store Vault initialization keys securely in S3.

2. **Terraform Configuration**:
   - Provisions the Vault server.
   - Sets up IAM roles and S3 buckets for storing Vault keys.
   - Configures KMS for encryption and auto-unsealing.

3. **Ansible Integration**:
   - Demonstrates how to retrieve secrets stored in Vault using Ansible playbooks.

---

## **Setup Instructions**

### 1. Prerequisites

- AWS Account with administrative access.
- A t2.medium EC2 instance with Docker installed.
- Terraform installed locally.
- Ansible installed locally.

### 2. Vault Installation

Follow the steps in `docs/installation_steps.md` to:

1. Start an EC2 instance.
2. Install Docker and SSL.
3. Configure Vault.

### 3. Configuring AWS KMS

- Navigate to AWS Management Console > KMS.
- Create a symmetric key with "Encrypt and Decrypt" permissions.
- Add the IAM role of the EC2 instance to allow access.

### 4. Configuring Vault with KMS

1. Replace the Vault config file:
   ```bash
   sudo nano /etc/vault/config.hcl
   ```
   Copy and paste the contents from `vault/config-kms.hcl`.
2. Ensure S3 bucket details are correctly updated.
3. Initialize Vault:
   ```bash
   vault operator init | tee -a /etc/vault/init.file
   ```

### 5. Terraform Setup

- Navigate to the `terraform/` directory.
- Update variables in `variables.tf` for your environment.
- Apply the configuration:
  ```bash
  terraform apply
  ```

### 6. Reboot Handling

- After rebooting the server:
  ```bash
  terraform apply
  ```
- Verify that Vault is accessible and unsealed automatically.

---

## **Ansible Playbook Example**

Retrieve secrets from Vault after a server reboot:

```yaml
---
- name: Retrieve secrets from HashiCorp Vault
  hosts: localhost
  tasks:
    - name: Fetch secret from Vault
      uri:
        url: "http://<VAULT-SERVER-IP>:8200/v1/secret/data/my-secret"
        method: GET
        headers:
          X-Vault-Token: "{{ vault_token }}"
      register: secret_response

    - name: Debug retrieved secret
      debug:
        msg: "{{ secret_response.json }}"
```

---

## **Troubleshooting**

- Refer to `docs/troubleshooting.md` for common issues, such as:
  - Vault not unsealing after reboot.
  - KMS misconfiguration.
  - Terraform or Ansible errors.

---

## **License**

This repository is licensed under the MIT License. See `LICENSE` for details.

---

### Additional Notes

1. **Scripts**:
   - `setup_docker.sh`: Automates Docker installation.
   - `setup_ssl.sh`: Configures SSL for Vault.
2. **Documentation**:
   - `docs/installation_steps.md`: Step-by-step guide for setting up Vault and related components.
   - `docs/troubleshooting.md`: Solutions for potential issues during setup and execution.
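Outside of Ansible, the same secret can be sanity-checked with the Vault CLI or plain `curl`. A minimal sketch, assuming the `VAULT_ADDR` exported during installation, a token with read access, and the KV v2 path `secret/data/my-secret` used in the playbook above:

```bash
export VAULT_TOKEN=<your-token>   # e.g. the Initial Root Token from init.file
vault kv get secret/my-secret     # CLI read of the same KV v2 secret
curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
  "$VAULT_ADDR/v1/secret/data/my-secret" | jq .data.data
```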
================================================ FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/1-provider.tf ================================================ provider "aws" { } provider "vault" { address = var.vault_addr token = var.vault_token skip_tls_verify = true } ================================================ FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/2-random-passwords.tf ================================================ #Generating random password for Linux Machines resource "random_password" "linux-machine-passwords" { count = var.vm_count length = 16 special = true override_special = "!@#$%^" min_upper = 4 min_lower = 4 min_special = 4 min_numeric = 4 } ================================================ FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/3-hashi-vault-passwords.tf ================================================ resource "vault_mount" "java-app-dev" { path = "java-app-dev" type = "kv" options = { version = "1" } description = "KV Version 1 secret engine mount" } resource "vault_kv_secret" "linux-machine-1" { path = "${vault_mount.java-app-dev.path}/linux-machine-1" data_json = jsonencode( { linux-machine-1 = random_password.linux-machine-passwords.0.result } ) } resource "vault_kv_secret" "linux-machine-2" { path = "${vault_mount.java-app-dev.path}/linux-machine-2" data_json = jsonencode( { linux-machine-2 = random_password.linux-machine-passwords.1.result } ) } resource "vault_kv_secret" "linux-machine-3" { path = "${vault_mount.java-app-dev.path}/linux-machine-3" data_json = jsonencode( { linux-machine-3 = random_password.linux-machine-passwords.2.result } ) } ================================================ FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/policy.yaml ================================================ { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "kms:*", "Resource": "*" } } ================================================ FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/user.tf ================================================ resource "random_password" "vm-passwords" { count = 3 length = 16 special = true override_special = "!#$%&*()-_=+[]{}<>:?" 
}

resource "vault_mount" "avinash" {
  path        = "avinash"
  type        = "kv-v2"
  description = "This Container avinash Family Secrets"
}

resource "vault_mount" "saikiran" {
  path        = "saikiran"
  type        = "kv-v2"
  description = "This Container saikiran Family Secrets"
}

resource "vault_kv_secret_v2" "Prod-secrets" {
  count               = 3
  mount               = vault_mount.avinash.path
  name                = "linux-machine-${count.index + 1}"
  cas                 = 1
  delete_all_versions = true
  data_json = jsonencode(
    {
      username = "adminsai",
      password = element(random_password.vm-passwords.*.result, count.index)
    }
  )
  custom_metadata {
    max_versions = 5
    data = {
      foo = "vault@avinash.com"
    }
  }
}

#Creating saikiran Secrets
resource "vault_kv_secret_v2" "super-secrets" {
  count               = 3
  mount               = vault_mount.saikiran.path
  name                = "super-linux-machine-${count.index + 1}"
  cas                 = 1
  delete_all_versions = true
  data_json = jsonencode(
    {
      username = "adminsai",
      password = element(random_password.vm-passwords.*.result, count.index)
    }
  )
  custom_metadata {
    max_versions = 5
    data = {
      foo = "vault@saikiran.com"
    }
  }
}

================================================
FILE: Day 25 HashicorpVault AWSIntegration/terraform-vault/variables.tf
================================================

variable "vault_addr" {
  default = "https://kmsvault.cloudvishwakarma.in:8200"
}

variable "vault_token" {
  default = "TOKEN-HERE"
}

variable "vm_count" {
  default = 3
}

================================================
FILE: Day 26 Docker-Full-Course/README.md
================================================

# Day 26 Docker-Full-Course

![00](https://github.com/user-attachments/assets/77c9bf84-ffca-478a-b288-058f5e28b9ab)

https://youtu.be/5GhbkrMukmk?si=SqzutdvGZy-A8Hex

================================================
FILE: Day 27 Maven-JFrog-Sonarqube/README.md
================================================

![Untitled design](https://github.com/user-attachments/assets/dfaf3392-9cfd-43b2-86c1-e1bdd956b3ee)

# Maven-JFrog Integration

This repository showcases the integration of **Maven**, **JFrog**, and **SonarQube** to build, manage, and analyze a Java-based Spring Boot application. Below are the detailed steps to set up and deploy a sample application.

---

## Table of Contents

1. [Introduction](#introduction)
2. [Prerequisites](#prerequisites)
3. [Setup and Installation](#setup-and-installation)
4. [Maven Lifecycle](#maven-lifecycle)
5. [Integrating with JFrog](#integrating-with-jfrog)
6. [Pushing Artifacts to JFrog](#pushing-artifacts-to-jfrog)
7. [Version Management](#version-management)
8. [License](#license)

---

## Introduction

This project demonstrates:

- Building a Spring Boot application using Maven.
- Managing dependencies with `pom.xml`.
- Storing and managing build artifacts using JFrog Artifactory.
- Incremental versioning of artifacts.
- Deployment to a private repository for reuse in other projects.

**Note:** While this project highlights all major steps, application-specific code and configurations will typically be managed by your development team.

---

## Prerequisites

1. **AWS EC2 Instance**:
   - Instance type: `t2.large`
   - Storage: `20 GB`
   - OS: Ubuntu 20.04+
2. **Tools**:
   - **Maven**: Installed and configured.
   - **OpenJDK**: Version 17 or higher.
   - **JFrog Artifactory**: Installed and licensed.
   - **Git**: Configured with SSH authentication.
3. **Networking**:
   - Configure DNS using Route 53 (if applicable).

---

## Setup and Installation

### 1. Create EC2 Instance

Launch an EC2 instance and install required tools:

```bash
sudo apt update
sudo apt install -y openjdk-17-jdk maven git jq net-tools
```
### 2. Clone and Build the Application

```bash
git clone https://github.com/spring-projects/spring-petclinic.git
cd spring-petclinic
mvn clean package
```

### 3. Push Code to Azure DevOps

1. Initialize a new Git repository if needed:
   ```bash
   rm -rf .git
   git init
   ```
2. Set up SSH authentication:
   - Generate an SSH key: `ssh-keygen`
   - Add the public key to Azure DevOps under **User Settings > SSH Public Keys**.
   - Clone the repository using the SSH link.
3. Push code:
   ```bash
   git add .
   git commit -m "Initial commit"
   git remote add origin <your-azure-devops-repo-url>
   git push -u origin master
   ```

---

## Maven Lifecycle

### Maven Commands Overview

1. **Validate**:
   ```bash
   mvn validate
   ```
   Ensures the `pom.xml` is valid.
2. **Compile**:
   ```bash
   mvn compile
   ```
   Compiles Java files into `.class` files.
3. **Package**:
   ```bash
   mvn package
   ```
   Packages the compiled code into `.jar` or `.war` artifacts.
4. **Run Application**:
   ```bash
   java -jar target/*.jar
   ```
5. **Clean**:
   ```bash
   mvn clean
   ```
   Deletes previous build artifacts.

---

## Integrating with JFrog

1. **Install JFrog**:
   ```bash
   wget -O jfrog-deb-installer.tar.gz "https://releases.jfrog.io/artifactory/jfrog-prox/org/artifactory/pro/deb/jfrog-platform-trial-prox/[RELEASE]/jfrog-platform-trial-prox-[RELEASE]-deb.tar.gz"
   tar -xvzf jfrog-deb-installer.tar.gz
   cd jfrog-platform-trial-pro*
   sudo ./install.sh
   sudo systemctl start artifactory.service
   ```
2. **Configure JFrog**:
   - Access JFrog via `http://<SERVER-IP>:8082`.
   - Apply the trial license.
   - Create a Maven repository (`libs-release-local`).
3. **Update Maven Configuration**:
   Add the following server entry (inside the `<servers>` section) in your `settings.xml`:
   ```xml
   <server>
     <id>central</id>
     <username>YOUR_USERNAME</username>
     <password>YOUR_PASSWORD</password>
   </server>
   ```

---

## Pushing Artifacts to JFrog

1. **Add Distribution Management to `pom.xml`**:
   ```xml
   <distributionManagement>
     <repository>
       <id>central</id>
       <name>libs-release</name>
       <url>http://<SERVER-IP>:8081/artifactory/libs-release-local</url>
     </repository>
     <snapshotRepository>
       <id>snapshots</id>
       <name>libs-snapshot</name>
       <url>http://<SERVER-IP>:8081/artifactory/libs-snapshot-local</url>
     </snapshotRepository>
   </distributionManagement>
   ```
2. **Deploy Artifact**:
   ```bash
   mvn clean install deploy
   ```
3. Verify the artifact in JFrog's repository.

---

## Version Management

Update versions dynamically using Maven's versions plugin:

```bash
mvn versions:set -DnewVersion=1.0.0
mvn clean install deploy
```

Repeat for subsequent versions:

```bash
mvn versions:set -DnewVersion=1.0.1
```

---

## License

This project is licensed under the [MIT License](LICENSE).
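To confirm a deploy actually landed in Artifactory, you can query its storage REST API. A minimal sketch, assuming the repository, credentials, and host placeholder configured above and the `org.springframework.samples` group from the pet clinic `pom.xml`:

```bash
curl -u YOUR_USERNAME:YOUR_PASSWORD \
  "http://<SERVER-IP>:8081/artifactory/api/storage/libs-release-local/org/springframework/samples/" | jq
```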
================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/0-maven.sh
================================================

# Create a t2.xlarge instance and a simple A record for JFrog with its public IP.
sudo apt update && sudo apt install -y openjdk-17-jdk maven

# Clone the same repo locally from PowerShell and push it to the Azure DevOps repo.
# Clone the pet clinic app https://github.com/saikiranpi/springboot-petclinic.git
# on Linux, and make sure your SSH keys are in place.
mvn clean install deploy

# ----- Now let's deploy JFrog for storing our artifacts -----
cd /usr/local/bin
wget -O jfrog-deb-installer.tar.gz "https://releases.jfrog.io/artifactory/jfrog-prox/org/artifactory/pro/deb/jfrog-platform-trial-prox/[RELEASE]/jfrog-platform-trial-prox-[RELEASE]-deb.tar.gz"
tar -xvzf jfrog-deb-installer.tar.gz
sudo apt install jq -y && sudo apt install net-tools -y
cd jfrog-platform-trial-pro*
# sudo chown -R postgres:postgres /var/opt/jfrog/postgres/data
# sudo chmod -R 700 /var/opt/jfrog/postgres/data
sudo ./install.sh
sudo systemctl start artifactory.service
sudo systemctl start xray.service

# You need a license: copy the trial Artifactory license and paste it as the key,
# then Next -> Next -> Next. We need a Maven repo here -> JFrog ->
# http://jfrog.cloudvishwakarma.in -> Finish.
# http://localhost:8082/
# Generate settings in the main file under settings.xml, change the JFrog username
# and password, set snapshot to true, and change the JFrog URL accordingly.
# Paste the settings.xml under /root/.m2/settings.xml.
# Stay in the pet app directory and run "mvn clean install deploy".
################################################################################
java -jar target/*.jar
mvn versions:set -DnewVersion=1.0.0
mvn clean install deploy

================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/0-sonarqube.sh
================================================

# 1. Set Up PostgreSQL Instance for SonarQube
sudo mkdir -p /var/lib/postgresql/sonarqube
sudo chown postgres:postgres /var/lib/postgresql/sonarqube
sudo su - postgres
/usr/lib/postgresql/15/bin/initdb -D /var/lib/postgresql/sonarqube

# 2. Configure PostgreSQL
# Edit postgresql.conf:
sudo nano /var/lib/postgresql/sonarqube/postgresql.conf
# Add:
listen_addresses = 'localhost'
port = 5433
unix_socket_directories = '/var/run/postgresql'

# Edit pg_hba.conf:
sudo nano /var/lib/postgresql/sonarqube/pg_hba.conf
# Add:
local   all   postgres   trust
local   all   all        md5
host    all   all        127.0.0.1/32   md5
host    all   all        ::1/128        md5

# 3. Create and Start PostgreSQL Service
sudo nano /etc/systemd/system/postgresql-sonarqube.service
# Add service content:
[Unit]
Description=PostgreSQL for SonarQube
After=network.target

[Service]
Type=forking
User=postgres
Group=postgres
ExecStart=/usr/lib/postgresql/15/bin/pg_ctl -D /var/lib/postgresql/sonarqube -l /var/log/postgresql/postgresql-sonarqube.log start
ExecStop=/usr/lib/postgresql/15/bin/pg_ctl -D /var/lib/postgresql/sonarqube stop
TimeoutSec=300

[Install]
WantedBy=multi-user.target

# Start service:
sudo systemctl daemon-reload
sudo systemctl start postgresql-sonarqube
sudo systemctl enable postgresql-sonarqube

# 4. Create Database and User
psql -p 5433 -U postgres
CREATE USER sonar WITH ENCRYPTED PASSWORD 'my_strong_password';
CREATE DATABASE sonarqube OWNER sonar;
GRANT ALL PRIVILEGES ON DATABASE sonarqube TO sonar;
\q
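# Optional check (a sketch, not in the original steps): confirm the sonar user
# can actually connect to the new database before installing SonarQube:
psql -p 5433 -h 127.0.0.1 -U sonar -d sonarqube -c '\conninfo'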
# 5. Install SonarQube
sudo apt-get install zip -y
cd /opt
sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-9.7.1.62043.zip
sudo unzip sonarqube-9.7.1.62043.zip
sudo mv sonarqube-9.7.1.62043 sonarqube
rm -rf sonarqube-9.7.1.62043.zip

# 6. Configure SonarQube User and Permissions
sudo groupadd sonar
sudo useradd -d /opt/sonarqube -g sonar sonar
sudo chown sonar:sonar /opt/sonarqube -R

# Edit sonar.properties:
sudo nano /opt/sonarqube/conf/sonar.properties
# Add:
sonar.jdbc.username=sonar
sonar.jdbc.password=my_strong_password
sonar.jdbc.url=jdbc:postgresql://localhost:5433/sonarqube

# 7. System Configuration
# Create SonarQube service:
sudo nano /etc/systemd/system/sonar.service
# Add:
[Unit]
Description=SonarQube service
After=syslog.target network.target

[Service]
Type=forking
ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
User=sonar
Group=sonar
Restart=always
LimitNOFILE=65536
LimitNPROC=4096

[Install]
WantedBy=multi-user.target

# Configure system limits:
sudo nano /etc/sysctl.conf
# Add:
vm.max_map_count=262144
fs.file-max=65536

# Configure user limits:
sudo nano /etc/security/limits.conf
# Add:
sonar soft nofile 65536
sonar hard nofile 65536
sonar soft nproc 4096
sonar hard nproc 4096

# 8. Start SonarQube
sudo sysctl -p
sudo systemctl daemon-reload
sudo systemctl start sonar
sudo systemctl enable sonar

# 9. Access SonarQube
# - Wait 5 minutes
# - Access: http://your-server:9000
# - Login: admin/admin

================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/1-ado-tools.sh
================================================

sudo apt update && apt install -y unzip jq net-tools
apt install openjdk-17-jdk -y
apt install maven -y && curl https://get.docker.com | bash
usermod -a -G docker adminsai

# aws cli install
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# azure cli ubuntu install
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# terraform.io and packer.io - copy the link and install in /usr/local/bin
cd /usr/local/bin
wget https://releases.hashicorp.com/terraform/1.10.3/terraform_1.10.3_linux_amd64.zip
unzip terraform_1.10.3_linux_amd64.zip

# packer.io
wget https://releases.hashicorp.com/packer/1.11.2/packer_1.11.2_linux_amd64.zip
unzip packer_1.11.2_linux_amd64.zip

# docs.ansible.com - select Ubuntu and follow the steps accordingly
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
cd /etc/ansible
cp ansible.cfg ansible.cfg_backup
ansible-config init --disabled >ansible.cfg
nano ansible.cfg
# Ctrl+W to search, then set: host_key_checking = False

# Install trivy
cd /usr/local/bin
wget https://github.com/aquasecurity/trivy/releases/download/v0.41.0/trivy_0.41.0_Linux-64bit.deb
dpkg -i trivy_0.41.0_Linux-64bit.deb
trivy --version

# Reboot the system for the configurations to take effect.
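# Quick verification (a sketch, not part of the original script): confirm every
# tool the pipeline will demand is on the PATH before registering the agent:
java -version && mvn --version && docker version && terraform version && \
  packer version && ansible --version && aws --version && az version && trivy --version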
================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/1-pipeline.yml
================================================

trigger:
  - development
  - uat
  - production

pool:
  name: LinuxAgentPool
  demands:
    - Java -equals Yes
    - Terraform -equals Yes
    - Agent.Name -equals ProdADO

variables:
  global_version: "1.0.0"
  global_email: "pinapathruni.saikiran@gmail.com"
  # azure_dev_sub: "1e9d13b0-73fc-43eb-b04e-4b4f5a5ea96f"
  isDev: $[eq(variables['Build.SourceBranch'], 'refs/heads/development')]
  isProd: $[eq(variables['Build.SourceBranch'], 'refs/heads/production')]

steps:
  - script: docker version && packer version && terraform version && aws --version && java -version && mvn --version
    displayName: "Testing A Newly Created Agent and Tools"

================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/2-pipeline.yml
================================================

trigger:
  branches:
    include:
      - development
      - uat
      - production
    exclude: ["master", "feature*", "README.md"]

================================================
FILE: Day 28 SAST-AzureDevOps-Part-1/README.md
================================================

![escape](https://github.com/user-attachments/assets/63a9188f-edea-4fc4-9f94-c28b46c5bb37)

# Day28 AzureDevOps_Part_1

## CI-CD-CD (Continuous Integration, Continuous Delivery, Continuous Deployment)

This project focuses on setting up a CI/CD pipeline in Azure DevOps to automate the processes of code integration, delivery, and deployment. The pipeline ensures secure, efficient, and seamless transitions from development to production.

---

### **Continuous Integration**

1. **Code Readiness:**
   - Code is committed and merged into the repository.
   - Static Application Security Testing (SAST) is performed to identify vulnerabilities.
2. **Build:**
   - Uses Maven to generate a JAR file.
   - Docker is employed to create an image using a `Dockerfile`.
3. **Artifacts Publishing:**
   - Built artifacts are stored for further stages in the CI/CD pipeline.

**Example Release Strategy:**

- A versioning system is employed:
  - Stable Version: `23.0.0` (Production-ready).
  - Release Candidates: `23.0.0-RC1`, `23.0.0-RC2`, etc., for testing.
  - Hotfix Versions: `23.0.0.1` for bug fixes post-release.

---

### **Continuous Delivery**

- Automates deployment to development and staging environments after successful integration testing.
- Focuses on delivering artifacts to lower environments for further testing.

---

### **Continuous Deployment**

- Automates deployment to production after passing all previous stages.
- Often skipped for production in many organizations due to additional manual checks.

---

### **Branching Strategy**

1. **Main/Master Branch:**
   - Represents production-ready code.
2. **Development Branch:**
   - Feature branches are created for changes and merged back into development after review.
3. **Staging/Functional Testing:**
   - Tracks and documents manual/automated test results in an organized manner.

---

### **CI/CD Tools**

Common tools for CI/CD pipelines include:

- Azure DevOps (primary focus)
- Jenkins
- GitLab
- GitHub Actions
- GoCD
- TravisCI
- CircleCI

---

## **Setting up Azure DevOps Pipeline**

### Task Overview:

1. **Create a Pipeline Agent:**
   - Use a self-hosted agent by creating a virtual machine (VM) with the necessary tools installed.
   - VM Configuration:
     - OS: Ubuntu 20.04
     - Specs: 2 CPUs, 8GB RAM
     - Disk: Standard SSD
2. **Configure the Agent:**
   - Install required tools (e.g., Terraform, Packer).
   - Generate a Personal Access Token (PAT) for authentication.
   - Create an agent pool and register the agent in Azure DevOps.
3. **Pipeline Creation:**
   - Create a pipeline for the repository in Azure DevOps.
   - Use a `trigger` to specify branches for automatic execution.
4. **Clone Repository Locally:**
   - Use Git commands to clone the repository and manage changes.

---

### Step-by-Step Instructions:

1. **Create VM:**
   - Set up a virtual machine in Azure with the specified configuration.
   - Configure networking to allow necessary inbound and outbound rules.
2. **Install Required Tools:**
   - Access the VM via SSH (e.g., PuTTY).
   - Install dependencies and configure tools as the admin user.
3. **Generate PAT:**
   - Create a Personal Access Token in Azure DevOps with full access and save it securely.
4. **Setup Agent Pool:**
   - Create and configure an agent pool in Azure DevOps.
   - Register the VM as an agent using the provided setup scripts.
5. **Pipeline Creation:**
   - Use Azure Pipelines to create a YAML-based pipeline.
   - Example configuration:
     ```yaml
     trigger:
       branches:
         include:
           - master
     pool:
       name: LinuxAgentPool
     steps:
       - script: echo "Hello, Azure DevOps!"
     ```
6. **Test and Modify Pipeline:**
   - Push changes to trigger pipeline execution.
   - Use Visual Studio Code for editing pipeline configurations.
7. **Add Variables:**
   - Add SonarQube credentials or other required variables in the Azure DevOps pipeline UI.
8. **Service Connections:**
   - Connect the Azure DevOps pipeline to external tools like SonarQube for analysis.

---

## **Advanced Features**

- Conditional expressions for environment-specific pipelines.
- Integration with AWS instances or other external environments.
- Dynamic agent capabilities for task-specific pipelines.

---

### **Additional Resources**

- [Azure DevOps Documentation](https://learn.microsoft.com/en-us/azure/devops/)
- [SonarQube Documentation](https://docs.sonarqube.org/)
- [GitHub for Version Control](https://github.com/)

---

**Contributors:**

- Admin Kiran (Contact: `adminkiran`)

**License:**

- This project is licensed under the MIT License. See the LICENSE file for details.

================================================
FILE: Day 29 AzureDevOps-Part-2/README.md
================================================

# PLEASE COPY THE POM.XML AND PIPELINE SCRIPT FIRST AND DO THE PRACTICALS. REST ALL SAME.

# Prod-SpringBoot-Pet-App

This repository contains the production-ready Spring Boot application for the `Prod-ADO` instance. Follow the steps below to set up and run the CI/CD pipeline using Azure DevOps (ADO).

## Prerequisites

- AWS and Azure instances must be up and running.
- Proper IP addresses should be updated in Route 53.

## Steps to Set Up the Pipeline

### Stage 1: Initial Setup

1. **Start the agents** on AWS and Azure.
2. **Update the IPs** in Route 53.
3. **Clone the repository** and check the available branches.
4. **Add the SonarQube stage** and build the pipeline accordingly.
5. **Modify the `pom.xml` file** at lines 13 & 16:
   ```xml
   <artifactId>ado-spring-boot-app-dev</artifactId>
   <name>ado-spring-boot-app-dev</name>
   ```

### Stage 2: Connecting the Pipeline to the EC2 Instance

1. **Connect the pipeline to the EC2 instance** where SonarQube and Maven are installed using service connections:
   - Navigate to **Project Settings > Service Connections**.
   - Create a new service connection for the EC2 instance.
2. **Add the token** in the pipeline and push the code to the development branch.
3. **Run the pipeline** and push it to the development environment.
4. If the Maven build fails, skip tests by adding the following line:
   ```yaml
   options: '-DskipTests'
   ```
   Add it above the `displayName` in your YAML file.
5. Push the changes again.
6. If you encounter issues with `sonar.branch.name`, set the development branch as the default branch.
7. Once the job completes, check the results on SonarQube.

### Stage 3: Building with Java and Copying Artifacts to JFrog

1. **Build the application** using Maven and copy the artifact to JFrog.
2. Ensure `settings.xml` is securely managed:
   - Go to **Libraries > Add Secure Files**.
   - Browse and add the secure file.
3. **Create the necessary directories** on the Azure agent:
   ```bash
   sudo mkdir /artifacts
   sudo chown adminsai:adminsai /artifacts
   ```
   This folder will store the copied artifact.
4. Save and push the changes.
5. If errors occur during the Maven build, log in to the server and debug using:
   ```bash
   grep -i "failure" *.txt
   ```
   Example failure:
   ```
   org.springframework.samples.petclinic.system.CrashControllerIntegrationTests.txt
   ```
   Review and fix the `CrashControllerIntegrationTests` file accordingly.

### Stage 4: Copying Artifacts to Azure Blob Storage

1. **Create a storage account** in Azure:
   - Name: `artifacts`
   - Redundancy: Locally Redundant Storage (LRS)
2. **Create a container** named `artifacts`.
3. **Set up a service principal**:
   - Navigate to **Microsoft Entra ID > App Registration**.
   - Create a new service principal.
   - In **Project Settings > Service Connections**, create a new Azure Resource Manager connection manually.
   - Provide the following details:
     - Tenant ID
     - Client ID (Service Principal ID)
     - Subscription ID
     - Client Secret (Create a new secret under Certificates & Secrets).
4. **Create a new pipeline variable**:
   - Name: `STORAGE_ACCOUNT_KEY`
   - Secret: Yes
   - Value: Copy the access key from the storage account.
5. Push the changes and run the pipeline.

### Stage 5: Adding an S3 Bucket

1. **Create an S3 bucket** with the name specified in the YAML file.
2. **Grant S3 access**:
   - Navigate to **IAM > Users** and grant S3 full access.
3. **Create a new AWS service connection** in ADO:
   - Use the access key and secret key.
   - Connection name: `saikiransecops-s3`
4. Push the changes and verify the artifacts in the S3 bucket.

### Stage 6: Building a Docker Image and Scanning with Trivy

1. **Create a template folder** in VSCode:
   ```bash
   mkdir template
   cd template
   touch junit.tpl
   ```
   Paste the required content into `junit.tpl`.
2. **Create a Dockerfile** in VSCode and paste the necessary code.
3. Push the changes.
4. Test the pipeline step-by-step to ensure correctness.

## Final Notes

- The pipeline may require multiple iterations to achieve perfection. Ensure that each step is tested and validated before proceeding to the next.
- Use secure methods to manage sensitive information such as credentials and keys.

## Troubleshooting

- For Maven build failures, use the following command:
  ```bash
  grep -i "failure" *.txt
  ```
- If issues are found in `CrashControllerIntegrationTests`, review the file and make the necessary changes without altering unrelated parts.
- **SonarQube Upgrade Steps**:
  1. **Stop SonarQube**:
     ```bash
     sudo systemctl stop sonar
     ```
  2. **Download and install a newer version (10.3)**:
     ```bash
     cd /opt
     sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-10.3.0.82913.zip
     sudo unzip sonarqube-10.3.0.82913.zip
     sudo rm -rf sonarqube
     sudo mv sonarqube-10.3.0.82913 sonarqube
     ```
  3.
**Fix permissions**: ```bash sudo chown -R sonar:sonar /opt/sonarqube sudo chmod -R 755 /opt/sonarqube ``` 4. **Update `sonar.properties` to configure JDK 17 module path**: ```bash sudo nano /opt/sonarqube/conf/sonar.properties ``` Add the following line: ```properties sonar.web.javaAdditionalOpts=--add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED ``` 5. **Restart SonarQube**: ```bash sudo systemctl restart sonar ``` This newer version has better compatibility with Java 17. Let the DevOps team know if further errors occur. ## Acknowledgments Special thanks to the team for their support in setting up and validating the pipeline. --- For further assistance, please contact the DevOps team. --- ================================================ FILE: Day 29 AzureDevOps-Part-2/azure-pipelines.yml ================================================ trigger: - development - uat - production pool: name: LinuxAgentPool demands: - JDK -equals 17 - Terraform -equals Yes - Agent.Name -equals ProdADO variables: global_version: "1.0.0" global_email: "saikiran@gmail.com" # azure_dev_sub: "9ce91e05-4b9e-4a42-95c1-4385c54920c6" # azure_prod_sub: "298f2c19-014b-4195-b821-e3d8fc25c2a8" isDev: $[eq(variables['Build.SourceBranch'], 'refs/heads/development')] isProd: $[eq(variables['Build.SourceBranch'], 'refs/heads/production')] stages: - stage: CheckingTheAgent condition: and(succeeded(), eq(variables.isDev, true)) pool: name: LinuxAgentPool demands: - Terraform -equals Yes variables: stage_version: "2.0.0" stage_email: "saikiran.pinapathruni18@gmail.com" jobs: - job: CheckingTerraformAndPacker variables: job_version: "3.0.0" job_email: "saiaws@gmail.com" timeoutInMinutes: 5 steps: - script: echo $(Build.BuildId) displayName: "Display The Build-ID" - script: terraform version && packer version displayName: "Display Terraform & Packer Version" - script: docker version && docker ps && docker images && docker ps -a displayName: "Display Docker Version" - script: pwd && ls -al displayName: "List Folder & Files" - stage: SASTWithSonarQube condition: and(succeeded(), eq(variables.isDev, true)) pool: name: LinuxAgentPool demands: - JDK -equals 17 jobs: - job: RunningSASTWithSonarqube timeoutInMinutes: 10 steps: #SonarQube User Token need to be generated and used in the ServiceConnection. #Also change name of the project and artifactId(line 6 & 14) to ado-spring-boot-app-dev in POM. #No need to create a project in sonarqube as its created automatically. 
- task: SonarQubePrepare@7 inputs: SonarQube: "SonarTestToken" scannerMode: "Other" projectVersion: "$(Build.BuildId)" displayName: "Preparing SonarQube Config" - task: Maven@4 inputs: mavenPomFile: "pom.xml" publishJUnitResults: false javaHomeOption: "JDKVersion" mavenVersionOption: "Default" mavenAuthenticateFeed: false effectivePomSkip: false sonarQubeRunAnalysis: true sqMavenPluginVersionChoice: "latest" options: "-DskipTests" displayName: "Running SonarQube Maven Analysis" - task: sonar-buildbreaker@8 inputs: SonarQube: "SonarTestToken" displayName: "SAST Job Fail or Pass" - stage: BuildingJavaCodeWithMavenCopyToJFrog condition: and(succeeded(), eq(variables.isDev, true)) #condition: always() pool: name: LinuxAgentPool demands: - Terraform -equals Yes jobs: - job: BuildingJavaCodeJob timeoutInMinutes: 5 steps: - script: ls -al && pwd && rm -rf /home/adminsai/.m2/settings.xml displayName: "List Files & Current Working Directory" - task: DownloadSecureFile@1 inputs: secureFile: "settings.xml" - task: CopyFiles@2 inputs: SourceFolder: "$(Agent.TempDirectory)" Contents: "**" TargetFolder: "/home/adminsai/.m2" - script: mvn versions:set -DnewVersion=Dev-2.0.$(Build.BuildId) displayName: "Set Maven Build Version" - script: mvn clean package install && ls -al displayName: "Run the maven build and install" - script: mvn deploy && ls -al displayName: "Run the maven deploy" continueOnError: true - script: ls -al && cp /home/adminsai/myagent/_work/1/s/target/ado-spring-boot-app-dev-Dev-2.0.$(Build.BuildId).jar ROOT$(Build.BuildId).jar && ls -al displayName: "List Files & Rename ROOT.jar" - script: rm -rf /artifacts/*.jar && cp ROOT$(Build.BuildId).jar /artifacts && ls -al /artifacts displayName: "Copy Artifact To Folder" - task: CopyFiles@2 inputs: Contents: "ROOT$(Build.BuildId).jar" TargetFolder: "$(Build.ArtifactStagingDirectory)" OverWrite: true displayName: "Copying JAR file to ArtifactStagingDirector" - task: PublishBuildArtifacts@1 inputs: PathtoPublish: "$(Build.ArtifactStagingDirectory)" ArtifactName: "ROOT$(Build.BuildId).jar" publishLocation: "Container" displayName: "Publishing JAR Artifact." - stage: CopyingArtifactsToAzureAndAws condition: and(succeeded(), eq(variables.isDev, true)) jobs: - job: CopyFilesToAzureBlob timeoutInMinutes: 5 steps: - checkout: none - task: AzureCLI@2 inputs: azureSubscription: "saikiransecops-subscription" scriptType: "bash" scriptLocation: "inlineScript" inlineScript: | az storage blob upload-batch --account-name saikiransecopsprod --account-key $(STORAGE_ACCOUNT_KEY) --destination artifacts --source /artifacts/ displayName: "Azure Upload artifacts to Azure Blob" continueOnError: true - job: CopyFilesToAWSS3Bucket dependsOn: CopyFilesToAzureBlob condition: always() # succeededOrFailed() or always() or failed() or succeeded()-default timeoutInMinutes: 5 steps: - checkout: none - task: S3Upload@1 inputs: awsCredentials: "saikiransecops-s3" regionName: "us-east-1" bucketName: "saikiransecopss3uploadprodartifacts" sourceFolder: "/artifacts/" globExpressions: "ROOT$(Build.BuildId).jar" displayName: "AWS Upload artifacts to AWS S3 Bucket" continueOnError: true - stage: DockerBuildAndTrivyScan condition: and(succeeded(), eq(variables.isDev, true)) pool: name: LinuxAgentPool jobs: - job: BuildingContainerImageAndSecurityScanning timeoutInMinutes: 10 steps: - checkout: none - script: docker build -t kiran2361993/myapp:$(Build.BuildId) . 
displayName: "Create Docker Image" #- script: trivy image --severity HIGH,CRITICAL --format template --template "@template/junit.tpl" -o junit-report-high-crit.xml kiran2361993/myapp:$(Build.BuildId) - script: | trivy image --exit-code 0 --severity LOW,MEDIUM --format template --template "@template/junit.tpl" -o junit-report-low-med.xml kiran2361993/myapp:$(Build.BuildId) trivy image --exit-code 0 --severity HIGH,CRITICAL --format template --template "@template/junit.tpl" -o junit-report-high-crit.xml kiran2361993/myapp:$(Build.BuildId) displayName: "Scan Image and Create Report" - task: PublishTestResults@2 inputs: testResultsFormat: "JUnit" testResultsFiles: "**/junit-report-low-med.xml" mergeTestResults: true failTaskOnFailedTests: false testRunTitle: "Trivy - Low and Medium Vulnerabilities" displayName: "Trivy - Low and Medium Vulnerabilities" condition: "always()" - task: PublishTestResults@2 inputs: testResultsFormat: "JUnit" testResultsFiles: "**/junit-report-high-crit.xml" mergeTestResults: true failTaskOnFailedTests: false testRunTitle: "Trivy - High and Critical Vulnerabilities" displayName: "Trivy - High and Critical Vulnerabilities" condition: "always()" ================================================ FILE: Day 29 AzureDevOps-Part-2/pom.xml ================================================ 4.0.0 org.springframework.boot spring-boot-starter-parent 3.4.0 org.springframework.samples ado-spring-boot-app-dev 3.4.0-SNAPSHOT ado-spring-boot-app-dev 17 UTF-8 UTF-8 2024-11-28T14:37:52Z 1.0.1 5.3.3 4.7.0 10.20.1 0.8.12 0.2.29 1.0.0 3.6.0 0.0.11 0.0.43 central libs-release http://jfrog.cloudvishwakarma.in:8082/artifactory/libs-release-local snapshots libs-snapshot http://jfrog.cloudvishwakarma.in:8082/artifactory/libs-snapshot-local org.springframework.boot spring-boot-starter-actuator org.springframework.boot spring-boot-starter-cache org.springframework.boot spring-boot-starter-data-jpa org.springframework.boot spring-boot-starter-web org.springframework.boot spring-boot-starter-validation org.springframework.boot spring-boot-starter-thymeleaf org.springframework.boot spring-boot-starter-test test io.projectreactor reactor-core com.h2database h2 runtime com.mysql mysql-connector-j runtime org.postgresql postgresql runtime javax.cache cache-api com.github.ben-manes.caffeine caffeine org.webjars webjars-locator-lite ${webjars-locator.version} org.webjars.npm bootstrap ${webjars-bootstrap.version} org.webjars.npm font-awesome ${webjars-font-awesome.version} org.springframework.boot spring-boot-devtools test org.springframework.boot spring-boot-testcontainers test org.springframework.boot spring-boot-docker-compose test org.testcontainers junit-jupiter test org.testcontainers mysql test jakarta.xml.bind jakarta.xml.bind-api org.apache.maven.plugins maven-enforcer-plugin enforce-java enforce This build requires at least Java ${java.version}, update your JVM, and run the build again ${java.version} io.spring.javaformat spring-javaformat-maven-plugin ${spring-format.version} validate validate org.apache.maven.plugins maven-checkstyle-plugin ${maven-checkstyle.version} com.puppycrawl.tools checkstyle ${checkstyle.version} io.spring.nohttp nohttp-checkstyle ${nohttp-checkstyle.version} org.graalvm.buildtools native-maven-plugin org.springframework.boot spring-boot-maven-plugin build-info ${project.build.sourceEncoding} ${project.reporting.outputEncoding} ${java.version} ${java.version} org.jacoco jacoco-maven-plugin ${jacoco.version} prepare-agent report report prepare-package 
io.github.git-commit-id git-commit-id-maven-plugin false false org.cyclonedx cyclonedx-maven-plugin org.codehaus.mojo build-helper-maven-plugin 3.2.0 org.codehaus.mojo versions-maven-plugin 2.8.1 Apache License, Version 2.0 https://www.apache.org/licenses/LICENSE-2.0 true spring-snapshots Spring Snapshots https://repo.spring.io/snapshot false spring-milestones Spring Milestones https://repo.spring.io/milestone true spring-snapshots Spring Snapshots https://repo.spring.io/snapshot false spring-milestones Spring Milestones https://repo.spring.io/milestone css org.apache.maven.plugins maven-dependency-plugin unpack unpack generate-resources org.webjars.npm bootstrap ${webjars-bootstrap.version} ${project.build.directory}/webjars com.gitlab.haynes libsass-maven-plugin ${libsass.version} ${basedir}/src/main/scss/ ${basedir}/src/main/resources/static/resources/css/ ${project.build.directory}/webjars/META-INF/resources/webjars/bootstrap/${webjars-bootstrap.version}/scss/ compile generate-resources m2e m2e.version org.eclipse.m2e lifecycle-mapping ${lifecycle-mapping} org.apache.maven.plugins maven-checkstyle-plugin [1,) check org.springframework.boot spring-boot-maven-plugin [1,) build-info io.spring.javaformat spring-javaformat-maven-plugin [0,) validate ================================================ FILE: Day 30 AzureDevOps-Part-3/README.md ================================================ # DevSecOps Pipeline Tutorial ![Day 02 (1)](https://github.com/user-attachments/assets/ae4bd8bb-3988-45c9-887d-cb14531c40e5) ### Start the Instances 1. Start all the instances on **AWS** and **Azure**. 2. Copy the IP addresses of the instances to **Route53**. 3. Clone the production code and switch to the development branch. ### Recap of Previous Session In the previous session, we created a Docker image and analyzed it with **Trivy** for any security vulnerabilities. ### Today's Session We will complete the next stages by pushing the Docker image to: - **Azure Container Registry (ACR)** - **Docker Hub (Private)** ### Steps for Azure Container Registry 1. Copy the code to production. 2. Go to **Azure** and create a container registry: - **Container Registry** > **Create** - Resource Group: `devSecOps` - Registry Name: `devsecopsacr` - Region: `East US` - Click **Review and Create**. 3. After creating the registry: - Go to the resources > **Access Keys**. - Enable the **Admin user** checkbox. 4. In the pipeline: - Edit the pipeline > **Variables** > **Add**: - Name: `acrpassword` - Value: Copy and paste the password (keep it secret). - Save the changes. ### Steps for Docker Hub 1. Go to **Project Settings** > **Service Connections** > **New** > **Docker Registry** > **Docker Hub**. 2. Enter the following details: - Docker ID: `kiran2361993` - Password or Token. - Service Connection Name: `devops-dockerhub-connection`. - Grant access and click **Save**. 3. Push the changes to Git and monitor the pipeline. ## Step 09: Fixing Errors and Deploying to Azure Container Instance (ACI) ### Java Version Change - Show the error and update the Java version in the Dockerfile from **11** to **17**. ### Deploy to Azure Container Instance 1. Copy the code. 2. Deploy it to **Azure Container Instance (ACI)**: - ACI automatically creates an instance without manual provisioning. ### Creating Environments on AWS 1. Create two different Ubuntu servers on **AWS**: - One for **Staging** and another for **Production**. 2. 
Deploy two **t2.medium** instances with the following settings: - **Tag**: `Name: Staging` (Rename one instance to `Production` after creation). - **Advanced Details** > **User Data**: ```bash #!/bin/bash apt update apt install -y openjdk-17-jdk ``` - Launch the instances. 3. Once deployed, rename one instance to `Production`. ### Route53 Records 1. Create two records in **Route53**: - Record Name: `staging` and its IP address. - Record Name: `prod` and its IP address. 2. Create the records. ### Configuring Environments in the Pipeline 1. Go to **Pipeline** > **Edit** > **Environment** > **Create**: - **Staging**: - Select **Virtual Machines** > **Linux**. - Log in to the staging EC2 instance and verify Java version using: ```bash java -version ``` - Copy the register script from Azure and run it in the instance. - **Production**: - Select **Virtual Machines** > **Linux**. - Log in to the production EC2 instance and verify Java version. - Copy the register script from Azure and run it in the instance. 2. Verify the following: - **ACI**, **Docker Hub**, and **ACR** for images. - Go to **Azure** > **Container Instances** > Check the **FQDN** and access it on port **8080**. ## Step 10: Adding Deployment Code and Running DAST Testing 1. Add deployment code and run **ZAP** for security testing. 2. Go to the pipeline: - **Edit** > **Variables** > Add Docker login variables. 3. Push the changes to Git. ### Handling Pipeline Issues - If you see an orange status in the pipeline, it’s not an issue. - Explanation: If a JAR file is already found, the process will stop; otherwise, it will continue. ### Break Time Let the pipeline complete. After the break: 1. Access the application via **ACI FQDN** and `http://staging.cloudvishwakarma.in:8080`. 2. Run **DAST** testing and show the results. ### Fixing Maven Build Configuration - Since the pipeline was configured for **Dev**, update it for **Prod** by commenting out the previous condition: ```yaml # condition: or(eq(variables.isProd, true), eq(variables.isDev, true)) ``` - Push the changes to the `production` branch instead of `development`. - Monitor the pipeline; most tasks should be skipped. ## Next Session Preview In the next session, we will cover: 1. **Infrastructure Pipeline** using **Terraform**. 2. **SAST** directly on the code. 3. **DAST** after deploying the application.
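For reference, the DAST step that the pipeline automates can also be run by hand from any Docker host. Below is a minimal sketch of the same OWASP ZAP baseline scan the pipeline performs (the staging URL comes from the Route53 record created above; the report file names are illustrative):

```bash
# Pull the stable ZAP image and run a passive baseline scan against staging
docker pull ghcr.io/zaproxy/zaproxy:stable
docker run -u 0 -v "$(pwd)/owaspzap:/zap/wrk:rw" ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t http://staging.cloudvishwakarma.in:8080/ \
  -J report.json -r report.html -I -i
```

The `-I` flag stops the scan from returning a failure exit code on warnings, which matches how the pipeline treats ZAP findings as informational rather than build-breaking.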
================================================ FILE: Day 30 AzureDevOps-Part-3/azure-pipelines.yml ================================================ trigger: - development - uat - production pool: name: ProdAgentPool demands: - JDK -equals 17 - Terraform -equals Yes - Agent.Name -equals ADO-Testing_Env variables: global_version: "1.0.0" global_email: "mavrick202@gmail.com" azure_dev_sub: "9ce91e05-4b9e-4a42-95c1-4385c54920c6" azure_prod_sub: "298f2c19-014b-4195-b821-e3d8fc25c2a8" isDev: $[eq(variables['Build.SourceBranch'], 'refs/heads/development')] isProd: $[eq(variables['Build.SourceBranch'], 'refs/heads/production')] stages: - stage: CheckingTheAgent condition: and(succeeded(), eq(variables.isDev, true)) pool: name: ProdAgentPool demands: - Terraform -equals Yes variables: stage_version: "2.0.0" stage_email: "saikiran.pinapathruni18@gmail.com" jobs: - job: CheckingTerraformAndPacker variables: job_version: "3.0.0" job_email: "saiaws@gmail.com" timeoutInMinutes: 5 steps: - script: echo $(Build.BuildId) displayName: "Display The Build-ID" - script: terraform version && packer version displayName: "Display Terraform & Packer Version" - script: docker version && docker ps && docker images && docker ps -a displayName: "Display Docker Version" - script: pwd && ls -al displayName: "List Folder & Files" - stage: SASTWithSonarQube condition: and(succeeded(), eq(variables.isDev, true)) pool: name: ProdAgentPool demands: - JDK -equals 17 jobs: - job: RunningSASTWithSonarqube timeoutInMinutes: 10 steps: #SonarQube User Token need to be generated and used in the ServiceConnection. #Also change name of the project and artifactId(line 6 & 14) to ado-spring-boot-app-dev in POM. #No need to create a project in sonarqube as its created automatically. - task: SonarQubePrepare@7 inputs: SonarQube: "SonarTestToken" scannerMode: "Other" #projectKey: 'sqp_63da7bac31bd4496f2ee1170156659ea8c782c28'-NotNeeded #projectName: 'ado-spring-boot-app-dev'-NotNeeded projectVersion: "$(Build.BuildId)" displayName: "Preparing SonarQube Config" - task: Maven@4 inputs: mavenPomFile: "pom.xml" publishJUnitResults: false javaHomeOption: "JDKVersion" mavenVersionOption: "Default" mavenAuthenticateFeed: false effectivePomSkip: false sonarQubeRunAnalysis: true sqMavenPluginVersionChoice: "latest" options: "-DskipTests" displayName: "Running SonarQube Maven Analysis" - task: sonar-buildbreaker@8 inputs: SonarQube: "SonarTestToken" displayName: "SAST Job Fail or Pass" - stage: BuildingJavaCodeWithMavenCopyToJFrog condition: or(eq(variables.isProd, true), eq(variables.isDev, true)) # condition: and(succeeded(), eq(variables.isDev, true)) #condition: always() pool: name: ProdAgentPool demands: - Terraform -equals Yes jobs: - job: BuildingJavaCodeJob timeoutInMinutes: 5 steps: - script: ls -al && pwd && rm -rf /home/adminsai/.m2/settings.xml displayName: "List Files & Current Working Directory" - task: DownloadSecureFile@1 inputs: secureFile: "settings.xml" - task: CopyFiles@2 inputs: SourceFolder: "$(Agent.TempDirectory)" Contents: "**" TargetFolder: "/home/adminsai/.m2" - script: mvn versions:set -DnewVersion=Dev-2.0.$(Build.BuildId) displayName: "Set Maven Build Version" - script: mvn clean package install && ls -al displayName: "Run the maven build and install" - script: mvn deploy && ls -al displayName: "Run the maven deploy" continueOnError: true - script: ls -al && cp /home/adminsai/myagent/_work/1/s/target/ado-spring-boot-app-dev-Dev-2.0.$(Build.BuildId).jar ROOT$(Build.BuildId).jar && ls -al displayName: "List Files & 
Rename ROOT.jar" - script: rm -rf /artifacts/*.jar && cp ROOT$(Build.BuildId).jar /artifacts && ls -al /artifacts displayName: "Copy Artifact To Folder" - task: CopyFiles@2 inputs: Contents: "ROOT$(Build.BuildId).jar" TargetFolder: "$(Build.ArtifactStagingDirectory)" OverWrite: true displayName: "Copying JAR file to ArtifactStagingDirector" - task: PublishBuildArtifacts@1 inputs: PathtoPublish: "$(Build.ArtifactStagingDirectory)" ArtifactName: "ROOT$(Build.BuildId).jar" publishLocation: "Container" displayName: "Publishing JAR Artifact." - stage: CopyingArtifactsToAzureAndAws condition: and(succeeded(), eq(variables.isDev, true)) jobs: - job: CopyFilesToAzureBlob timeoutInMinutes: 5 steps: - checkout: none - script: | echo "Debugging STORAGE_ACCOUNT_KEY..." echo "Key length: ${#STORAGE_ACCOUNT_KEY}" echo "Key value (partial): ${STORAGE_ACCOUNT_KEY:0:5}*****" displayName: "Debug STORAGE_ACCOUNT_KEY" - task: AzureCLI@2 inputs: azureSubscription: "saikiransecops-subscription" scriptType: "bash" scriptLocation: "inlineScript" inlineScript: | az storage blob upload-batch --account-name saikiransecops \ --account-key $(STORAGE_ACCOUNT_KEY) \ --destination artifacts --source /artifacts/ displayName: "Azure Upload artifacts to Azure Blob" continueOnError: true # Fallback hardcoded key for testing purposes - task: AzureCLI@2 condition: failed() inputs: azureSubscription: "saikiransecops-subscription" scriptType: "bash" scriptLocation: "inlineScript" inlineScript: | echo "Using hardcoded key for testing..." az storage blob upload-batch --account-name saikiransecops \ --account-key "yDO5lCm7ud6VRLjHkjikceT3ysgEYeDUn5SRC8jIU3PcNe/ZIocl+90BfRAUl3QkF6CLfARX8IRA+AStA/NlOA==" \ --destination artifacts --source /artifacts/ displayName: "Azure Upload artifacts with hardcoded key" continueOnError: true - job: CopyFilesToAWSS3Bucket dependsOn: CopyFilesToAzureBlob condition: always() # succeededOrFailed() or always() or failed() or succeeded()-default timeoutInMinutes: 5 steps: - checkout: none - task: S3Upload@1 inputs: awsCredentials: "saikiransecops-s3" regionName: "us-east-1" bucketName: "saikiransecopss3uploadartifacts" sourceFolder: "/artifacts/" globExpressions: "ROOT$(Build.BuildId).jar" displayName: "AWS Upload artifacts to AWS S3 Bucket" continueOnError: true - stage: DockerBuildAndTrivyScan condition: and(succeeded(), eq(variables.isDev, true)) pool: name: ProdAgentPool jobs: - job: BuildingContainerImageAndSecurityScanning timeoutInMinutes: 10 steps: - checkout: none - script: docker build -t kiran2361993/myapp:$(Build.BuildId) . 
displayName: "Create Docker Image" #- script: trivy image --severity HIGH,CRITICAL --format template --template "@template/junit.tpl" -o junit-report-high-crit.xml kiran2361993/myapp:$(Build.BuildId) - script: | trivy image --exit-code 0 --severity LOW,MEDIUM --format template --template "@template/junit.tpl" -o junit-report-low-med.xml kiran2361993/myapp:$(Build.BuildId) trivy image --exit-code 0 --severity HIGH,CRITICAL --format template --template "@template/junit.tpl" -o junit-report-high-crit.xml kiran2361993/myapp:$(Build.BuildId) displayName: "Scan Image and Create Report" - task: PublishTestResults@2 inputs: testResultsFormat: "JUnit" testResultsFiles: "**/junit-report-low-med.xml" mergeTestResults: true failTaskOnFailedTests: false testRunTitle: "Trivy - Low and Medium Vulnerabilities" displayName: "Trivy - Low and Medium Vulnerabilities" condition: "always()" - task: PublishTestResults@2 inputs: testResultsFormat: "JUnit" testResultsFiles: "**/junit-report-high-crit.xml" mergeTestResults: true failTaskOnFailedTests: false testRunTitle: "Trivy - High and Critical Vulnerabilities" displayName: "Trivy - High and Critical Vulnerabilities" condition: "always()" - stage: BuildDockerImagePushToAzureACRAndDockerHub condition: and(succeeded(), eq(variables.isDev, true)) jobs: - job: PushToAzureACR #dependsOn: DockerBuildAndTrivyScan condition: always() # succeededOrFailed() or always() or failed() timeoutInMinutes: 5 steps: - checkout: none - task: Bash@3 inputs: targetType: "inline" script: | docker login -u devsecopsacrtest -p $(acrpassword) devsecopsacrtest.azurecr.io docker tag kiran2361993/myapp:$(Build.BuildId) devsecopsacrtest.azurecr.io/devsecopsacrtest:$(Build.BuildId) docker push devsecopsacrtest.azurecr.io/devsecopsacrtest:$(Build.BuildId) displayName: "Creating & Pushing Docker Image To Azure ACR" # - job: PushToDockerHub # dependsOn: PushToAzureACR # condition: always() # succeededOrFailed() or always() or failed() # timeoutInMinutes: 5 # steps: # - checkout: none # - task: Docker@2 # inputs: # containerRegistry: "devops-dockerhub-connection" # command: "login" # displayName: "Login To Docker Hub" # - task: Bash@3 # inputs: # targetType: "inline" # script: | # docker tag kiran2361993/myapp:$(Build.BuildId) kiran2361993/devsecopsado:$(Build.BuildId) # docker push kiran2361993/devsecopsado:$(Build.BuildId) # displayName: "Pushing Docker Image To Docker Hub" - stage: DeployDockerImageToAzureACI condition: and(succeeded(), eq(variables.isDev, true)) pool: name: ProdAgentPool demands: - JDK -equals 17 jobs: - job: DeployAzureACI timeoutInMinutes: 10 steps: - checkout: none - task: AzureCLI@2 inputs: azureSubscription: "saikiransecops-subscription" scriptType: "bash" scriptLocation: "inlineScript" inlineScript: "az container create -g Prod-ADO-1 --name devsecopsado$(Build.BuildId) --image devsecopsacrtest.azurecr.io/devsecopsacrtest:$(Build.BuildId) --cpu 2 --memory 4 --ports 8080 --dns-name-label devsecopsado$(Build.BuildId) --registry-username devsecopsacrtest --registry-password $(acrpassword) --location eastus --os-type Linux" #inlineScript: az group list displayName: "Deploy Docker Image to Azure Container Instances" continueOnError: true - stage: "DeployingToStagingEnvironment" dependsOn: BuildingJavaCodeWithMavenCopyToJFrog condition: and(succeeded(), eq(variables.isDev, true)) pool: name: ProdAgentPool displayName: "Deploying To AWS Staging Environment" jobs: - deployment: "DeployJARtoStagingServer" environment: name: STAGING resourceType: VirtualMachine strategy: runOnce: 
deploy: steps: - script: | PROC=$(ps -ef | grep -i jar | grep -v grep | awk '{print $2}') if [ -n "$PROC" ]; then echo "Stopping process with PID: $PROC" sudo kill -9 $PROC || echo "Failed to stop process." else echo "No JAR process found. Nothing to stop." fi exit 0 # Force success status displayName: "Stop Existing JAR File" - script: | sudo java -jar /home/ubuntu/azagent/_work/1/ROOT$(Build.BuildId).jar/ROOT$(Build.BuildId).jar & echo "Application started successfully." exit 0 # Force success status displayName: "Running The Jar File" - stage: ZAPOWASPTestingStagingEnvironment condition: and(succeeded(), eq(variables.isDev, true)) jobs: - job: ZapTestingStaging timeoutInMinutes: 20 steps: - checkout: none # Pull the OWASP ZAP image and run the baseline scan - script: | docker pull ghcr.io/zaproxy/zaproxy:stable docker run -u 0 -v $(Pipeline.Workspace)/owaspzap:/zap/wrk/:rw ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t http://staging.cloudvishwakarma.in:8080/ -J report.json -r report.html -I -i displayName: "DAST Staging Environment" continueOnError: true # Publish the ZAP test results - task: PublishTestResults@2 displayName: "Publish Test Results For ZAP Testing" inputs: testResultsFormat: "NUnit" testResultsFiles: "$(Pipeline.Workspace)/owaspzap/report.html" - stage: "DeployingToProdEnvironment" dependsOn: BuildingJavaCodeWithMavenCopyToJFrog condition: and(succeeded('BuildingJavaCodeWithMavenCopyToJFrog'), eq(variables.isProd, true)) pool: name: ProdAgentPool displayName: "Deploying To AWS Prod Environment" jobs: - deployment: "DeployJARtoProdServer" environment: name: PROD resourceType: VirtualMachine strategy: runOnce: deploy: steps: - script: | PROC=$(ps -ef | grep -i jar | grep -v grep | awk '{print $2}') if [ -n "$PROC" ]; then echo "Stopping process with PID: $PROC" sudo kill -9 $PROC || echo "Failed to stop process." else echo "No JAR process found. Nothing to stop." fi displayName: "Stop Existing JAR File" continueOnError: true - script: | sudo java -jar /home/ubuntu/azagent/_work/1/ROOT$(Build.BuildId).jar/ROOT$(Build.BuildId).jar > /dev/null 2>&1 & echo "Application started successfully." 
displayName: "Running The Jar File" continueOnError: true ================================================ FILE: Day 30 AzureDevOps-Part-3/pom.xml ================================================ 4.0.0 org.springframework.boot spring-boot-starter-parent 3.4.0 org.springframework.samples ado-spring-boot-app-dev 3.4.0-SNAPSHOT ado-spring-boot-app-dev 17 UTF-8 UTF-8 2024-11-28T14:37:52Z 1.0.1 5.3.3 4.7.0 10.20.1 0.8.12 0.2.29 1.0.0 3.6.0 0.0.11 0.0.43 central libs-release http://jfrog.cloudvishwakarma.in:8082/artifactory/libs-release-local snapshots libs-snapshot http://jfrog.cloudvishwakarma.in:8082/artifactory/libs-snapshot-local org.springframework.boot spring-boot-starter-actuator org.springframework.boot spring-boot-starter-cache org.springframework.boot spring-boot-starter-data-jpa org.springframework.boot spring-boot-starter-web org.springframework.boot spring-boot-starter-validation org.springframework.boot spring-boot-starter-thymeleaf org.springframework.boot spring-boot-starter-test test io.projectreactor reactor-core com.h2database h2 runtime com.mysql mysql-connector-j runtime org.postgresql postgresql runtime javax.cache cache-api com.github.ben-manes.caffeine caffeine org.webjars webjars-locator-lite ${webjars-locator.version} org.webjars.npm bootstrap ${webjars-bootstrap.version} org.webjars.npm font-awesome ${webjars-font-awesome.version} org.springframework.boot spring-boot-devtools test org.springframework.boot spring-boot-testcontainers test org.springframework.boot spring-boot-docker-compose test org.testcontainers junit-jupiter test org.testcontainers mysql test jakarta.xml.bind jakarta.xml.bind-api org.apache.maven.plugins maven-enforcer-plugin enforce-java enforce This build requires at least Java ${java.version}, update your JVM, and run the build again ${java.version} io.spring.javaformat spring-javaformat-maven-plugin ${spring-format.version} validate validate org.apache.maven.plugins maven-checkstyle-plugin ${maven-checkstyle.version} com.puppycrawl.tools checkstyle ${checkstyle.version} io.spring.nohttp nohttp-checkstyle ${nohttp-checkstyle.version} org.graalvm.buildtools native-maven-plugin org.springframework.boot spring-boot-maven-plugin build-info ${project.build.sourceEncoding} ${project.reporting.outputEncoding} ${java.version} ${java.version} org.jacoco jacoco-maven-plugin ${jacoco.version} prepare-agent report report prepare-package io.github.git-commit-id git-commit-id-maven-plugin false false org.cyclonedx cyclonedx-maven-plugin org.codehaus.mojo build-helper-maven-plugin 3.2.0 org.codehaus.mojo versions-maven-plugin 2.8.1 Apache License, Version 2.0 https://www.apache.org/licenses/LICENSE-2.0 true spring-snapshots Spring Snapshots https://repo.spring.io/snapshot false spring-milestones Spring Milestones https://repo.spring.io/milestone true spring-snapshots Spring Snapshots https://repo.spring.io/snapshot false spring-milestones Spring Milestones https://repo.spring.io/milestone css org.apache.maven.plugins maven-dependency-plugin unpack unpack generate-resources org.webjars.npm bootstrap ${webjars-bootstrap.version} ${project.build.directory}/webjars com.gitlab.haynes libsass-maven-plugin ${libsass.version} ${basedir}/src/main/scss/ ${basedir}/src/main/resources/static/resources/css/ ${project.build.directory}/webjars/META-INF/resources/webjars/bootstrap/${webjars-bootstrap.version}/scss/ compile generate-resources m2e m2e.version org.eclipse.m2e lifecycle-mapping ${lifecycle-mapping} org.apache.maven.plugins maven-checkstyle-plugin [1,) check 
org.springframework.boot spring-boot-maven-plugin [1,) build-info io.spring.javaformat spring-javaformat-maven-plugin [0,) validate ================================================ FILE: Day 31 AzureDevOps-Part-4/.gitignore ================================================ access.auto.tfvars backend.json packer-vars.json LaptopKey.pem ================================================ FILE: Day 31 AzureDevOps-Part-4/1-main.tf ================================================ provider "aws" { access_key = "${var.aws_access_key}" secret_key = "${var.aws_secret_key}" region = "${var.aws_region}" } terraform { backend "s3" { bucket = "sais3bucket236" key = "sais3bucket236.tfstate" region = "us-east-1" } } resource "aws_vpc" "default" { cidr_block = "${var.vpc_cidr}" enable_dns_hostnames = true tags = { Name = "${var.vpc_name}" } } resource "aws_internet_gateway" "default" { vpc_id = "${aws_vpc.default.id}" tags = { Name = "${var.IGW_name}" } } resource "aws_subnet" "subnet1-public" { vpc_id = "${aws_vpc.default.id}" cidr_block = "${var.public_subnet1_cidr}" availability_zone = "us-east-1a" tags = { Name = "${var.public_subnet1_name}" } } resource "aws_subnet" "subnet2-public" { vpc_id = "${aws_vpc.default.id}" cidr_block = "${var.public_subnet2_cidr}" availability_zone = "us-east-1b" tags = { Name = "${var.public_subnet2_name}" } } resource "aws_subnet" "subnet3-public" { vpc_id = "${aws_vpc.default.id}" cidr_block = "${var.public_subnet3_cidr}" availability_zone = "us-east-1c" tags = { Name = "${var.public_subnet3_name}" } } resource "aws_route_table" "terraform-public" { vpc_id = "${aws_vpc.default.id}" route { cidr_block = "0.0.0.0/0" gateway_id = "${aws_internet_gateway.default.id}" } tags = { Name = "${var.Main_Routing_Table}" } } resource "aws_route_table_association" "terraform-public" { subnet_id = "${aws_subnet.subnet1-public.id}" route_table_id = "${aws_route_table.terraform-public.id}" } resource "aws_security_group" "allow_all" { name = "allow_all" description = "Allow all inbound traffic" vpc_id = "${aws_vpc.default.id}" ingress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } } ================================================ FILE: Day 31 AzureDevOps-Part-4/2-ec2.tf ================================================ data "aws_ami" "my_ami" { most_recent = true name_regex = "^Saikiran" owners = ["211125710812"] } resource "aws_instance" "web-1" { count = 3 #ami = var.imagename #ami = "ami-0d857ff0f5fc4e03b" ami = "${data.aws_ami.my_ami.id}" availability_zone = "us-east-1a" instance_type = "t2.small" key_name = "SecOps-Key" subnet_id = "${aws_subnet.subnet1-public.id}" vpc_security_group_ids = ["${aws_security_group.allow_all.id}"] associate_public_ip_address = true tags = { Name = "Web-Server-0${count.index+1}" Env = "Prod" Owner = "saikiran" CostCenter = "ABCD" } } ================================================ FILE: Day 31 AzureDevOps-Part-4/3-alb.tf ================================================ resource "aws_lb" "alb" { name = "app-nlb" internal = false load_balancer_type = "application" security_groups = ["${aws_security_group.allow_all.id}"] subnets = [aws_subnet.subnet1-public.id,aws_subnet.subnet2-public.id,aws_subnet.subnet3-public.id] enable_deletion_protection = false tags = { Environment = "Production" } } resource "aws_lb_target_group" "albtest" { name = "app-tg" port = 80 protocol = "HTTP" vpc_id = aws_vpc.default.id } resource "aws_lb_target_group" "albtest-flask" 
{ name = "app-tg-flask" port = 5000 protocol = "HTTP" vpc_id = aws_vpc.default.id } resource "aws_lb_target_group_attachment" "albtest" { count = 3 target_group_arn = aws_lb_target_group.albtest.arn target_id = "${element(aws_instance.web-1.*.id, count.index)}" port = 8000 } resource "aws_lb_target_group_attachment" "albflask" { count = 3 target_group_arn = aws_lb_target_group.albtest-flask.arn target_id = "${element(aws_instance.web-1.*.id, count.index)}" port = 5000 } ================================================ FILE: Day 31 AzureDevOps-Part-4/4-alb-listener.tf ================================================ resource "aws_lb_listener" "alb-https" { load_balancer_arn = aws_lb.alb.arn port = "443" protocol = "HTTPS" ssl_policy = "ELBSecurityPolicy-FS-1-2-Res-2020-10" certificate_arn = "arn:aws:acm:us-east-1:211125710812:certificate/13300e95-ddf9-40d0-b807-977f157d59d2" default_action { type = "forward" target_group_arn = aws_lb_target_group.albtest.arn } } resource "aws_lb_listener" "alb-https-redirect" { load_balancer_arn = aws_lb.alb.arn port = "80" protocol = "HTTP" default_action { type = "redirect" redirect { port = "443" protocol = "HTTPS" status_code = "HTTP_301" } } } resource "aws_lb_listener" "alb-flask" { load_balancer_arn = aws_lb.alb.arn port = "5000" protocol = "HTTPS" ssl_policy = "ELBSecurityPolicy-FS-1-2-Res-2020-10" certificate_arn = "arn:aws:acm:us-east-1:211125710812:certificate/13300e95-ddf9-40d0-b807-977f157d59d2" default_action { type = "forward" target_group_arn = aws_lb_target_group.albtest-flask.arn } } ================================================ FILE: Day 31 AzureDevOps-Part-4/5-route53.tf ================================================ data "aws_route53_zone" "selected" { name = "cloudvishwakarma.in" } resource "aws_route53_record" "nlb" { zone_id = data.aws_route53_zone.selected.zone_id name = "myapp.${data.aws_route53_zone.selected.name}" type = "A" alias { name = aws_lb.alb.dns_name zone_id = aws_lb.alb.zone_id evaluate_target_health = false } } ================================================ FILE: Day 31 AzureDevOps-Part-4/README.md ================================================ # Managing Infrastructure Pipelines - Session Notes ![Azure Devops](https://github.com/user-attachments/assets/a9b32e1f-1bea-48b9-ac03-5e75daa04d4a) From today’s session, we discussed how to effectively manage Infrastructure Pipelines, focusing on real-world scenarios like sharing agents across organizations, password version control, and handling secure files. ### Key Concepts: **Example Scenario:** - We have an Agent in one organization and need to share the same Agent with another organization. - Managing passwords in version control securely. - Handling sensitive security files. --- ### Start Agent - Example 01: - Create a Project in the PROD organization and demonstrate how to use the existing Agent Pool. **Task:** 1. Open the VSCode file and explain the "AZURE-PIPELINE" code. Add the `.pem` file. 2. Edit `packer-vars.json`. - The file wasn’t present earlier because it’s a secure file. If you check `.gitignore`, you’ll notice sensitive files are ignored. - Create the necessary secure files and add values later. 3. Update the following files: - `route53` - Certificate name under `prod-auto.tfvars` - Bucket name in `main.tf` and `prod-auto.tfvars` - VPC, IGW, and Subnet names. **CIDR Ranges:** - VPC CIDR: `10.37.0.0/16` - Public Subnet CIDRs: - `10.37.1.0/24` - `10.37.2.0/24` - `10.37.3.0/24` - Private Subnet CIDR: `10.37.20.0/24` - Remove the private subnet name. 
- Remove the AMI as it’s being taken from the Datasource. 4. Go to the Terraform code and highlight where the access key and secret key are specified. Now, these keys need to be referenced as variables: - Push the code first. - Navigate to Pipeline > Code > Edit > Variables. - Add `aws_access_key` and `aws_secret_key` as pipeline variables, copying each value from IAM. 5. Go to the previous ADO Project > Service Connections > Azure Connections: - Options > Security > + Search and confirm. - This demonstrates that not only agents but also service connections can be shared across projects. 6. Configure the project: - **Library** - Create a Variable Group: `AWS_ACCESS_GROUP` - Add the access key and secret key. 7. Return to the Terraform code: - Copy-paste your `.pem` file. - Add access key and secret key in `access.auto.tfvars`. - Update `packer-vars.json`. - Apply the changes in `backend.json`. 8. Upload all four files as secure files under the pipeline. 9. Push the code to the repository. - Initially, everything was set to 'NO' except for `destroy` which was set to 'YES'. - Modify the code to set `destroy` to 'NO' and other parameters to 'YES'. - Run `git status` and push the changes to master. 10. Once done, change `Terraform Destroy` back to 'YES' and others to 'NO'. Push the changes. 11. Enable release pipelines: - Go to Org Settings > Pipelines > Settings > Disable the creation of classic release pipelines. - In Pipelines, you will see the Release option. **Purpose:** The reason for this hands-on demonstration is to prepare you for real-time environments. When you encounter these processes in a production setting, they should not feel overwhelming. These are simply release pipelines that you now understand. --- ### Next Session Preview: - Understanding pipeline licensing. - Exploring different types of Azure Boards and how agile delivery works. - Hosted vs. Self-hosted Pipelines. - Various types of integrations. Stay tuned for more insights in the next session! ================================================ FILE: Day 31 AzureDevOps-Part-4/azure-pipelines.yml ================================================ trigger: branches: include: - master exclude: - releases/old* - feature/*-working # resources: # pipelines: # - pipeline: running-secondary-pipeline # source: variable-group-testing # project: variable-group-testing # trigger: # branches: # include: # - main #For using single agent for all stages use below code. pool: name: LinuxAgentPool demands: - Terraform -equals Yes variables: - group: AWS_ACCESS_GROUP - name: PACKERBUILD value: "NO" - name: TERRAFORM_APPLY value: "NO" - name: ANSIBLEJOB value: "NO" - name: TERRAFORM_DESTROY value: "YES" #- DESTROY: 'NO' #- Without Variable Group. # PACKERBUILD: 'YES' - Without Variable Group. # We can pass variables between stages by exporting them as outputs. Reference below #https://www.reddit.com/r/azuredevops/comments/qlroi7/pass_variables_between_stages/ stages: - stage: "Packer_Validate_Build" displayName: "Packer Validate & Build" condition: eq(variables.PACKERBUILD, 'YES') jobs: - job: "Download_Secure_Files" displayName: "Download_Secure_Files" steps: - task: DownloadSecureFile@1 inputs: secureFile: "packer-vars.json" - task: CopyFiles@2 inputs: SourceFolder: "$(Agent.TempDirectory)" Contents: "**" TargetFolder: "/home/adminsai/myagent/_work/2/s" - script: pwd && ls -al displayName: "Files_Check" # Step to install the Amazon plugin - script: | echo "Installing Packer Amazon plugin..."
packer plugins install github.com/hashicorp/amazon echo "Verifying installed plugins..." packer plugins installed displayName: "Install Packer Amazon Plugin" - script: packer validate -var-file packer-vars.json packer.json displayName: "Packer Validate" - script: packer build -var-file packer-vars.json packer.json displayName: "Packer Build" - stage: "Download_Secure_Files_and_Terraform_Validate" displayName: "Terraform Validate & Download Secure Files" condition: and(in(dependencies.Packer_Validate_Build.result, 'Succeeded', 'Skipped'), eq(variables.TERRAFORM_APPLY, 'YES')) jobs: - job: "Download_Secure_Files" displayName: "Download_Secure_Files" steps: - task: DownloadSecureFile@1 inputs: secureFile: "backend.json" - task: DownloadSecureFile@1 inputs: secureFile: "access.auto.tfvars" - task: CopyFiles@2 inputs: SourceFolder: "$(Agent.TempDirectory)" Contents: "**" TargetFolder: "/home/adminsai/myagent/_work/2/s" - script: pwd && ls -al && echo $COMMIT_MESG displayName: "Files_Check" - script: terraform init -backend-config=backend.json displayName: "Terraform_Initialize" - script: terraform validate displayName: "Terraform_Validate" - stage: "Download_Secure_Files_and_Terraform_Plan_and_Apply" displayName: "Terraform Plan & Apply & Download Secure Files" condition: and(in(dependencies.Packer_Validate_Build.result, 'Succeeded', 'Skipped'), eq(variables.TERRAFORM_DESTROY, 'NO'), eq(variables.TERRAFORM_APPLY, 'YES')) jobs: - job: "Download_Secure_Files_And_Terraform_Apply" displayName: "Download_Secure_Files_And_Terraform_Apply" steps: - task: DownloadSecureFile@1 inputs: secureFile: "backend.json" - task: DownloadSecureFile@1 inputs: secureFile: "access.auto.tfvars" - task: CopyFiles@2 inputs: SourceFolder: "$(Agent.TempDirectory)" Contents: "**" TargetFolder: "/home/adminsai/myagent/_work/2/s" - script: pwd && ls -al displayName: "Files_Check" - script: terraform init -backend-config=backend.json displayName: "Terraform_Initialize" - script: terraform plan displayName: "Terraform_Plan" - script: terraform apply -var="aws_access_key=$(aws-access-key)" -var="aws_secret_key=$(aws-secret-key)" --auto-approve displayName: "Terraform_Apply" - script: pwd && ls -al && cat invfile displayName: "Files_Check" #Make sure ansible is installed on the ADO Agent and disable host_key_checking. 
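#For example (paths assumed): export ANSIBLE_HOST_KEY_CHECKING=False in the agent's environment,
#or set host_key_checking = False under [defaults] in /etc/ansible/ansible.cfg.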
- stage: "Run_Ansible_Setup" displayName: "Run Ansible Setup Module" condition: and(in(dependencies.Download_Secure_Files_and_Terraform_Plan_and_Apply.result, 'Succeeded', 'Skipped'), eq(variables.TERRAFORM_DESTROY, 'NO'), eq(variables.ANSIBLEJOB, 'YES')) jobs: - job: "Download_Secure_Files" displayName: "Download_Secure_Files" timeoutInMinutes: 5 steps: - checkout: none - task: DownloadSecureFile@1 inputs: secureFile: "SecOps-Key.pem" - task: CopyFiles@2 inputs: SourceFolder: "$(Agent.TempDirectory)" Contents: "**" TargetFolder: "/home/adminsai/myagent/_work/2/s" - script: pwd && ls -al && chmod 400 SecOps-Key.pem displayName: "Files_Check" - script: ansible -i invfile all -m ping -u ubuntu displayName: "Ansible_Setup" timeoutInMinutes: 1 - script: ansible-playbook -i invfile docker-swarm.yml -u ubuntu --syntax-check displayName: "Ansible_Docker_Swarm_Syntax_Check" timeoutInMinutes: 1 - script: ansible-playbook -i invfile docker-swarm.yml -u ubuntu --check displayName: "Ansible_Docker_Swarm_Dry_Run" timeoutInMinutes: 2 - script: ansible-playbook -i invfile docker-swarm.yml -u ubuntu -vv displayName: "Ansible_Docker_Swarm_Apply" timeoutInMinutes: 5 - stage: "Download_Secure_Files_and_Terraform_Destroy_Variable" displayName: "Terraform Destroy & Download Secure Files" condition: and(eq(variables.TERRAFORM_DESTROY, 'YES'), eq(variables.TERRAFORM_APPLY, 'NO'), eq(variables.ANSIBLEJOB, 'NO')) jobs: - job: "Terraform_Destroy" displayName: "Terraform_Destroy" timeoutInMinutes: 5 steps: - task: DownloadSecureFile@1 inputs: secureFile: "backend.json" - task: DownloadSecureFile@1 inputs: secureFile: "access.auto.tfvars" - task: CopyFiles@2 inputs: SourceFolder: "$(Agent.TempDirectory)" Contents: "**" TargetFolder: "/home/adminsai/myagent/_work/2/s" - script: pwd && ls -al displayName: "Files_Check" - script: terraform init -backend-config=backend.json displayName: "Terraform_Initialize" - script: terraform destroy -var="aws_access_key=$(aws-access-key)" -var="aws_secret_key=$(aws-secret-key)" --auto-approve displayName: "Terraform_Destroy" ================================================ FILE: Day 31 AzureDevOps-Part-4/details.tpl ================================================ [docker_servers] ${master01} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem ${master02} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem ${master03} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem [docker_master] ${master01} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem [docker_managers] ${master02} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem ${master03} ansible_ssh_private_key_file=/home/adminsai/myagent/_work/2/s/SecOps-Key.pem [docker_workers] ================================================ FILE: Day 31 AzureDevOps-Part-4/docker-swarm.yml ================================================ --- - name: Install Docker and Configure Docker Swarm hosts: docker_servers become: yes become_user: root tasks: - name: Install Docker on all docker_servers shell: curl https://get.docker.com | bash - name: Check Docker Version shell: docker version | grep -w Version | head -1 register: version - debug: var: version tags: - install - name: Enable Docker Swarm hosts: docker_master become: yes become_user: root tasks: - name: Enable Docker Swarm on Master docker_servers shell: docker swarm init ignore_errors: yes - name: Get Docker Worker Token shell: docker swarm join-token -q worker 
register: token - set_fact: swarm_token: "{{ token.stdout }}" - debug: var: token.stdout no_log: true - name: Get Docker Manager Token shell: docker swarm join-token -q manager register: managertoken - set_fact: swarmmanager_token: "{{ managertoken.stdout }}" - debug: var: swarmmanager_token.stdout no_log: true - name: Get Docker Master Private IP shell: curl http://169.254.169.254/latest/meta-data/local-ipv4/ register: private_ip - set_fact: swarm_ip: "{{ private_ip.stdout }}" - debug: var: private_ip.stdout - name: add variables to dummy host 1 add_host: name: "docker_master_node_token" shared_variable: "{{ swarm_token }}" - name: add variables to dummy host 3 add_host: name: "docker_master_node_ip" shared_variable: "{{ swarm_ip }}" - name: add variables to dummy host 4 add_host: name: "docker_master_managernode_token" shared_variable: "{{ swarmmanager_token }}" tags: - swarm - name: Add Workers to Swarm hosts: docker_workers become: yes become_user: root vars: private_ip: "{{ hostvars['docker_master_node_ip']['shared_variable'] }}" token: "{{ hostvars['docker_master_node_token']['shared_variable'] }}" tasks: - debug: var: token no_log: true - debug: var: private_ip - name: Add Workers to Swarm shell: docker swarm join --token "{{ token }}" "{{ private_ip }}":2377 ignore_errors: yes tags: - workers - name: Add Managers to Swarm hosts: docker_managers become: yes become_user: root vars: private_ip: "{{ hostvars['docker_master_node_ip']['shared_variable'] }}" token: "{{ hostvars['docker_master_managernode_token']['shared_variable'] }}" tasks: - debug: var: token no_log: true - debug: var: private_ip - name: Add Managers to Swarm shell: docker swarm join --token "{{ token }}" "{{ private_ip }}":2377 ignore_errors: yes tags: - managers - name: Deploy Test Application hosts: docker_master become: yes become_user: root vars: private_ip: "{{ hostvars['docker_master_node_ip']['shared_variable'] }}" token: "{{ hostvars['docker_master_managernode_token']['shared_variable'] }}" tasks: - debug: var: token no_log: true - debug: var: private_ip - name: Delete Docker Service nginx001 If Exists shell: docker service rm nginx001 ignore_errors: yes - name: Delete Docker Service flask If Exists shell: docker service rm flask ignore_errors: yes - name: Deploy Sample Application shell: docker service create --name nginx001 -p 8000:80 --replicas 3 kiran2361993/kubegame:v2 ignore_errors: yes - name: Deploy Sample Flask Application shell: docker service create --name flask -p 5000:5000 --replicas 3 kiran2361993/mydb:v1 ignore_errors: yes - name: Validate Deployment Nginx shell: sleep 10 && curl http://"{{ private_ip }}":8000 register: html ignore_errors: yes - name: Validate Deployment Flask shell: sleep 10 && curl http://"{{ private_ip }}":5000 register: html ignore_errors: yes - debug: var: html.stdout tags: - managers ================================================ FILE: Day 31 AzureDevOps-Part-4/docker.service ================================================ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify # the default is not to use systemd for cgroups because the delegate issues still # exist and systemd currently does not support the cgroup feature set required # for containers run by docker #ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
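# The modified ExecStart below additionally listens on tcp://0.0.0.0:2375 so remote clients
# (for example CI agents) can reach the daemon. Port 2375 is unauthenticated and unencrypted;
# outside a lab, restrict it with security groups or move to TLS on 2376.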
ExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2375 -H fd:// --containerd=/run/containerd/containerd.sock #sudo systemctl daemon-reload #sudo systemctl restart docker ExecReload=/bin/kill -s HUP $MAINPID TimeoutSec=0 RestartSec=2 Restart=always # Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. # Both the old, and new location are accepted by systemd 229 and up, so using the old location # to make them work for either version of systemd. StartLimitBurst=3 # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make # this option work for either version of systemd. StartLimitInterval=60s # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Comment TasksMax if your systemd version does not support it. # Only systemd 226 and above support this option. TasksMax=infinity # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target ================================================ FILE: Day 31 AzureDevOps-Part-4/localfile.tf ================================================ resource "local_file" "foo" { content = templatefile("details.tpl", { master01 = aws_instance.web-1.0.public_ip master02 = aws_instance.web-1.1.public_ip master03 = aws_instance.web-1.2.public_ip #worker01 = aws_instance.worker-1.public_ip #worker02 = aws_instance.worker-2.public_ip #worker03 = aws_instance.worker-3.public_ip # worker04 = aws_instance.worker-4.public_ip # worker05 = aws_instance.worker-5.public_ip } ) filename = "invfile" } ================================================ FILE: Day 31 AzureDevOps-Part-4/packer.json ================================================ { "_comment": "Create an AWS AMI with AMZ Linux 2018 with Java and Tomcat", "variables": { "aws_access_key": "", "aws_secret_key": "", "region": "", "source_ami": "", "instance_type": "", "vpc_id": "", "subnet_id": "" }, "_comment1": "packer build -var \"aws_secret_key=foo\" template.json", "_comment2": "packer build -var-file packer-vars.json template.json", "builders": [ { "access_key": "{{user `aws_access_key`}}", "secret_key": "{{user `aws_secret_key`}}", "type": "amazon-ebs", "region": "{{user `region`}}", "source_ami": "{{user `source_ami`}}", "instance_type": "{{user `instance_type`}}", "ssh_username": "ubuntu", "ami_name": "Saikiran-Pinapathruni-Build-{{isotime | clean_resource_name}}", "vpc_id": "{{user `vpc_id`}}", "subnet_id": "{{user `subnet_id`}}", "tags": { "Name": "Saikiran-Pinapathruni-Build-{{isotime | clean_resource_name}}" } } ], "provisioners": [ { "type": "shell", "inline": [ "sleep 30", "sudo apt update -y", "sudo apt install nginx -y", "sudo apt install git -y", "sudo git clone https://github.com/saikiranpi/webhooktesting.git", "sudo rm -rf /var/www/html/index.nginx-debian.html", "sudo cp webhooktesting/index.html /var/www/html/index.nginx-debian.html", "sudo cp webhooktesting/style.css /var/www/html/style.css", "sudo cp webhooktesting/scorekeeper.js /var/www/html/scorekeeper.js", "sudo service nginx start", "sudo systemctl enable nginx", "curl https://get.docker.com | bash" ] }, { "type": "file", "source": "docker.service", "destination":
"/tmp/docker.service" }, { "type": "shell", "inline": [ "sudo cp /tmp/docker.service /lib/systemd/system/docker.service", "sudo usermod -a -G docker ubuntu", "sudo systemctl daemon-reload", "sudo service docker restart" ] } ] } ================================================ FILE: Day 31 AzureDevOps-Part-4/prod.auto.tfvars ================================================ aws_region = "us-east-1" vpc_cidr = "10.1.0.0/16" public_subnet1_cidr = "10.1.1.0/24" public_subnet2_cidr = "10.1.2.0/24" public_subnet3_cidr = "10.1.3.0/24" private_subnet_cidr = "10.1.20.0/24" vpc_name = "Staging-aws" IGW_name = "Staging-aws-igw" public_subnet1_name = "Staging_Public_Subnet1" public_subnet2_name = "Staging_Public_Subnet2" public_subnet3_name = "Staging_Public_Subnet3" Main_Routing_Table = "Staging_Main_table" key_name = "SecOps-Key" environment = "dev" ================================================ FILE: Day 31 AzureDevOps-Part-4/variables.tf ================================================ variable "aws_access_key" {} variable "aws_secret_key" {} variable "aws_region" {} variable "vpc_cidr" {} variable "vpc_name" {} variable "IGW_name" {} variable "key_name" {} variable "public_subnet1_cidr" {} variable "public_subnet2_cidr" {} variable "public_subnet3_cidr" {} variable "private_subnet_cidr" {} variable "public_subnet1_name" {} variable "public_subnet2_name" {} variable "public_subnet3_name" {} variable Main_Routing_Table {} variable "azs" { description = "Run the EC2 Instances in these Availability Zones" default = ["us-east-1a", "us-east-1b", "us-east-1c"] } variable "environment" { default = "dev" } ================================================ FILE: Day 32 AzureDevOps-Part-5/README.md ================================================ # Azure DevOps Project Management Repository This repository demonstrates project management practices and workflows in Azure DevOps (ADO), with a focus on Azure Boards, Sprints, Repos, Branching Strategies, and Artifacts. ## Repository Structure ``` ado-project-mgmt/ ├── azure-boards/ │ ├── sprint-planning.md │ ├── backlog-management.md │ ├── agile-process.md ├── azure-repos/ │ ├── branching-strategies.md │ ├── hotfix-branch.md │ ├── repo-setup.md ├── artifacts/ │ ├── package-management.md ├── gitlab-integration/ │ ├── gitlab-overview.md │ ├── repo-setup.md ├── README.md ``` ## Overview This repository serves as a reference for understanding and implementing project management concepts in Azure DevOps. Each section provides detailed explanations, examples, and best practices. --- ### Azure Boards #### Files: 1. **sprint-planning.md** - Explains sprint planning, creating sprints in Azure Boards, and managing sprint tasks. - Includes screenshots or console views of Azure Boards. 2. **backlog-management.md** - Discusses managing backlogs, sprint grooming sessions, and handling client requests. - Contains sample tasks and effort estimation examples. 3. **agile-process.md** - Outlines the Agile process flow, including Heads-up calls and backlog refinement. --- ### Azure Repos #### Files: 1. **branching-strategies.md** - Details common branching strategies: - **Master/Main**: Production-ready code. - **UAT/DEV/QA**: Environment-specific branches. - **Feature branches**: For new features. - **Hotfix branches**: For production issue fixes. 2. **hotfix-branch.md** - Describes the process of creating and merging a hotfix branch. 3. **repo-setup.md** - Guides setting up Azure Repos and integrating with GitLab. --- ### Artifacts #### Files: 1. 
**package-management.md** - Describes how to use Azure Artifacts for managing and sharing packages. --- ### GitLab Integration #### Files: 1. **gitlab-overview.md** - Provides an overview of GitLab and its integration with Azure DevOps. 2. **repo-setup.md** - Details setting up repositories and managing workflows in GitLab. --- ### README.md The main README file provides: - A quick introduction to the repository. - Links to detailed documentation for each feature. - Best practices for using Azure Boards, Repos, and Artifacts. --- ### Contribution Feel free to fork the repository, submit issues, or create pull requests for enhancements. --- ### License This repository is licensed under the MIT License. ================================================ FILE: Day 33 Jenkins-Part-1/Jenkinsfile ================================================ // Declarative Pipeline def VERSION = '1.0.0' pipeline { agent none // tools { // maven 'apache-maven-3.6.3' // } environment { PROJECT = "WELCOME TO Jenkins Class" AZ_SUB_ID = "9ce91e05-4b9e-4a42-95c1-4385c54920c6" AZ_TEN_ID = "2b387c91-acd6-4c88-a6aa-c92a96cab8b1" } stages { stage("Dev Tools Verification") { when { branch 'development' } agent { label 'DEV' } steps { sh "mvn --version" sh "java -version" sh "terraform version" sh "packer version" sh "trivy --version" sh "trivy --version" } } //-----------------------------PRODUCTION--------------- stage("PROD Tools Verification") { when { branch 'production' } agent { label 'PROD' } steps { sh "mvn --version" sh "java -version" sh "terraform version" sh "packer version" } } } } // //Declarative Pipeline // def VERSION='1.0.0' // pipeline { // agent none // // tools { // // maven 'apache-maven-3.6.3' // // } // environment { // PROJECT = "WELCOME TO DEVOPS jenkins" // AZ_SUB_ID = "9ce91e05-4b9e-4a42-95c1-4385c54920c6" // AZ_TEN_ID = "2b387c91-acd6-4c88-a6aa-c92a96cab8b1" // BATCH = "B36" // } // stages { // stage("Dev Tools Verification") { // when { // branch 'development' // } // agent { label 'DEV' } // steps { // sh "mvn --version" // sh "java -version" // sh "terraform version" // sh "packer version" // sh "trivy --version" // } // } // // stage('Dev Sonarqube SAST') { // // when { // // branch 'development' // // } // // agent { label 'DEV' } // // steps { // // withSonarQubeEnv('SonarQube-Dev'){ // // sh "mvn clean verify sonar:sonar \ // // -Dsonar.projectKey=spring-boot-app-dev \ // // -Dsonar.projectName=spring-boot-app-dev \ // // -Dsonar.host.url=http://sonarqube.cloudvishwakarma.in:9000" // // } // // } // // } // // stage("Dev Quality gate") { // // when { // // branch 'development' // // } // // steps { // // waitForQualityGate abortPipeline: true // // } // // } // // stage('Dev mvn clean') { // // when { // // branch 'development' // // } // // agent { label 'DEV' } // // steps { // // sh "mvn clean" // // // exit 1 // // } // // } // // stage('Dev mvn test') { // // when { // // branch 'development' // // } // // agent { label 'DEV' } // // steps { // // sh "mvn test" // // } // // } // // stage('Dev mvn package & install') { // // when { // // branch 'development' // // } // // agent { label 'DEV' } // // steps { // // sh "mvn versions:set -DnewVersion=Dev-1.0.${BUILD_NUMBER}" // // sh "mvn package install" // // sh "rm -rf /home/ubuntu/.m2/settings.xml" // // sh "cp dev-settings.xml /home/ubuntu/.m2/settings.xml" // // } // // } // // stage('Dev mvn package') { // // when { // // branch 'development' // // } // // agent { label 'DEV' } // // steps { // // sh "mvn versions:set 
-DnewVersion=Dev-1.0.${BUILD_NUMBER}" // // sh "mvn clean package" // // } // // } // // stage('Dev docker build') { // // when { // // branch 'development' // // } // // agent { label 'DEV' } // // steps { // // sh "sudo docker build -t kiran2361993/jenkinsimage:$BUILD_NUMBER ." // // } // // } // // stage('Dev Trivy Scan') { // // when { // // branch 'development' // // } // // agent { label 'DEV' } // // steps { // // sh 'curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/html.tpl > html.tpl' // // sh 'mkdir -p reports && rm -rf reports/dev_trivy_report.html' // // sh """sudo trivy image kiran2361993/jenkinsimage:$BUILD_NUMBER --security-checks vuln --exit-code 0 --severity CRITICAL --timeout 15m --format template --template \"@html.tpl\" --output reports/dev_trivy_report.html """ // // sh 'aws s3 cp reports/dev_trivy_report.html s3://sais3bucket236/dev_trivy_report.html' // // } // // } // // stage('Publish Trivy Report') { // // when { // // branch 'development' // // } // // agent { label 'MASTER' } // // steps { // // sh 'aws s3 cp s3://sais3bucket236/dev_trivy_report.html dev_trivy_report.html' // // // Publishing Dev trivy HTML findings Report // // publishHTML (target : [ // // allowMissing: true, // // alwaysLinkToLastBuild: true, // // keepAll: true, // // reportDir: '.', // // reportFiles: 'dev_trivy_report.html', // // reportName: 'Dev Trivy Scan', // // reportTitles: 'Dev Trivy Scan' // // ]) // // } // // } // // stage('Dev Deploy Docker Image') { // // when { // // branch 'development' // // } // // agent { label 'DEV' } // // steps { // // sh "sudo docker stop springbootapp || sudo docker ps" // // sh "sudo docker run --rm -dit --name springbootapp -p 8080:8080 kiran2361993/jenkinsimage:$BUILD_NUMBER" // // } // // } // // stage('Dev Validate Deployment') { // // when { // // branch 'development' // // } // // options { // // timeout(time: 3, unit: 'MINUTES') // // } // // agent { label 'DEV' } // // steps { // // sh "sleep 30 && curl http://dev.awsb49.xyz:8080 || exit 1" // // } // // } // // stage ('Dev DAST') { // // when { // // branch 'development' // // } // // options { // // timeout(time: 5, unit: 'MINUTES') // // } // // agent { label 'DEV' } // // steps { // // sh 'sudo docker run -t owasp/zap2docker-stable zap-baseline.py -t http://dev.awsb49.xyz:8080 || true' // // } // // } // //-----------------------------PRODUCTION--------------- // stage("PROD Tools Verification") { // when { // branch 'production' // } // agent { label 'PROD' } // steps { // sh "mvn --version" // sh "java -version" // sh "terraform version" // sh "packer version" // } // } // // stage('PROD Sonarqube SAST') { // // when { // // branch 'production' // // } // // agent { label 'PROD' } // // steps { // // withSonarQubeEnv('SonarQube-PROD'){ // // sh "mvn clean verify sonar:sonar \ // // -Dsonar.projectKey=spring-boot-app-prod \ // // -Dsonar.projectName=spring-boot-app-prod \ // // -Dsonar.host.url=http://sonarqube.cloudvishwakarma.in:9000" // // } // // } // // } // // stage("PROD Quality gate") { // // when { // // branch 'production' // // } // // steps { // // waitForQualityGate abortPipeline: true // // } // // } // // stage('PROD mvn clean') { // // when { // // branch 'production' // // } // // agent { label 'PROD' } // // steps { // // sh "mvn clean" // // // exit 1 // // } // // } // // stage('PROD mvn test') { // // when { // // branch 'production' // // } // // agent { label 'PROD' } // // steps { // // sh "mvn test" // // } // // } // // stage('PROD mvn 
package & install') { // // when { // // branch 'production' // // } // // agent { label 'PROD' } // // steps { // // sh "mvn versions:set -DnewVersion=Prod-${BUILD_NUMBER}" // // sh "mvn package install" // // sh "rm -rf /home/ubuntu/.m2/settings.xml" // // sh "cp dev-settings.xml /home/ubuntu/.m2/settings.xml" // // } // // } // // stage('PROD mvn package & deploy') { // // when { // // branch 'production' // // } // // agent { label 'PROD' } // // steps { // // sh "mvn versions:set -DnewVersion=Prod-${BUILD_NUMBER}" // // sh "mvn package deploy" // // } // // } // // stage('PROD docker build') { // // when { // // branch 'production' // // } // // agent { label 'PROD' } // // steps { // // sh "sudo docker build -t kiran2361993/jenkinsimageprod:$BUILD_NUMBER ." // // } // // } // // stage('Prod Trivy Scan') { // // when { // // branch 'production' // // } // // agent { label 'PROD' } // // steps { // // sh 'curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/html.tpl > html.tpl' // // sh 'mkdir -p reports && rm -rf reports/prod_trivy_report.html' // // sh """sudo trivy image kiran2361993/jenkinsimageprod:$BUILD_NUMBER --security-checks vuln --exit-code 0 --severity CRITICAL --timeout 15m --format template --template \"@html.tpl\" --output reports/prod_trivy_report.html """ // // sh 'aws s3 cp reports/prod_trivy_report.html s3://sais3bucket236/prod_trivy_report.html' // // } // // } // // stage('Publish Prod Trivy Report') { // // when { // // branch 'production' // // } // // agent { label 'MASTER' } // // steps { // // sh 'aws s3 cp s3://sais3bucket236/prod_trivy_report.html prod_trivy_report.html' // // // Publish prod trivy HTML findings Report // // publishHTML (target : [ // // allowMissing: true, // // alwaysLinkToLastBuild: true, // // keepAll: true, // // reportDir: '.', // // reportFiles: 'prod_trivy_report.html', // // reportName: 'Prod Trivy Scan', // // reportTitles: 'Prod Trivy Scan' // // ]) // // } // // } // // stage('PROD Deploy Docker Image') { // // when { // // branch 'production' // // } // // agent { label 'PROD' } // // steps { // // sh "sudo docker stop springbootapp || sudo docker ps" // // sh "sudo docker run --rm -dit --name springbootapp -p 8080:8080 kiran2361993/jenkinsimageprod:$BUILD_NUMBER" // // } // // } // // stage('PROD Validate Deployment') { // // when { // // branch 'production' // // } // // options { // // timeout(time: 3, unit: 'MINUTES') // // } // // agent { label 'PROD' } // // steps { // // sh "sleep 30 && curl http://prod.awsb49.xyz:8080 || exit 1" // // } // // } // // stage ('PROD DAST') { // // when { // // branch 'production' // // } // // options { // // timeout(time: 5, unit: 'MINUTES') // // } // // agent { label 'PROD' } // // steps { // // sh 'sudo docker run -t owasp/zap2docker-stable zap-baseline.py -t http://prodslave.awsb49.xyz:8080 || true' // // } // // } // // } // // post { // // success { // // slackSend(color: 'good', message: "Pipeline Successfull: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}") // // } // // failure { // // slackSend(color: 'danger', message: "Pipeline Failed: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}") // // } // // aborted { // // slackSend(color: 'warning', message: "Pipeline Aborted: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}") // // } // // always { // // echo "I always run." 
// //         }
// //     }
// }


================================================
FILE: Day 33 Jenkins-Part-1/README.md
================================================

# Day 33 Jenkins-Part-1

# Jenkins Setup and Configuration Guide

This guide provides a step-by-step process to set up and configure Jenkins with slave nodes, GitHub integration, SonarQube, and Slack notifications. By following this guide, you will establish a fully functional Jenkins environment capable of managing development and production pipelines.

---

## 1. Deploy Jenkins Master Instance

1. Launch an EC2 instance with the following specifications:
   - **Instance Type:** t2.medium
   - **OS:** Ubuntu
2. Install Jenkins on the instance and start the service.
3. Install VSCode and run the necessary commands to configure Jenkins.

---

## 2. Configure Jenkins Slaves

### Step 1: Add Global Credentials
- Navigate to **Manage Jenkins > Credentials > System > Global Credentials**.
- Add new credentials:
  - **ID:** slave-access
  - **Description:** slave-access
  - **Username:** ubuntu
  - **Private key:** Use your `.pem` file.

### Step 2: Deploy Slave Instances
- Launch two t2.medium EC2 instances from the same AMI used for the master instance.
- Name the instances:
  - `Jenkins-Slave-Dev`
  - `Jenkins-Slave-Prod`

### Step 3: Add Nodes to Jenkins
1. Navigate to **Manage Jenkins > Nodes > Add Node**.
2. Configure **Dev-Slave**:
   - **Permanent Agent:** Yes
   - **No. of Executors:** 2
   - **Remote Root Directory:** `/home/ubuntu`
   - **Labels:** DEV
   - **Usage:** Only build jobs with label.
   - **Launch Method:** Launch Agents via SSH
   - **Host:** Dev slave private IP or DNS
   - **Credentials:** ubuntu (slave-access)
   - **Host Key Verification Strategy:** Non-verifying strategy
   - **Port:** 22
3. Repeat the same steps for **Prod-Slave**, copying settings from **Dev-Slave** but updating the names and IP/DNS accordingly.

---

## 3. Configure GitHub Access

1. Switch to the Jenkins user:
   ```bash
   su - jenkins
   ssh-keygen
   ```
2. Add the private key to Jenkins:
   - Navigate to **Manage Jenkins > Credentials > System > Global Credentials**.
   - Add SSH Username with Private Key:
     - **ID:** GitHubAccess
     - **Username:** jenkins
     - **Private Key:** Paste the generated private key.
3. Add the public key to your GitHub repository under **Deploy Keys**.

---

## 4. Configure SonarQube

1. Generate a token from SonarQube:
   - Navigate to **SonarQube > My Account > Security**.
   - Generate a token and copy it.
2. Add the token to Jenkins:
   - Navigate to **Manage Jenkins > Credentials > System > Global Credentials**.
   - Add Secret Text:
     - **ID:** sonarqube-token
     - **Scope:** Global
     - **Secret:** Paste the token.
3. Configure SonarQube in Jenkins:
   - Navigate to **Manage Jenkins > System > Configure System**.
   - Add SonarQube Server:
     - **Name:** As per your script
     - **URL:** Your SonarQube URL (remove the trailing slash)
     - **Credentials:** Select the token you just created.
4. Create a webhook in SonarQube:
   - Navigate to **Administration > Webhooks > Create**.
   - **Name:** Jenkins-Webhook
   - **URL:** `http://<jenkins-server>:8080/sonarqube-webhook/`
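If you prefer the CLI, the same webhook can also be created through SonarQube's Web API. A minimal sketch, not part of the original guide: the admin token and both hostnames are placeholders, and the `api/webhooks/create` endpoint is as documented for recent SonarQube versions.

```bash
# Hypothetical token and hosts -- substitute your own values.
curl -s -u "<sonar-admin-token>:" -X POST \
  "http://<sonarqube-host>:9000/api/webhooks/create" \
  -d "name=Jenkins-Webhook" \
  -d "url=http://<jenkins-server>:8080/sonarqube-webhook/"
```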
---

## 5. Configure GitHub Webhooks

1. Push your development code to a private GitHub repository.
2. Navigate to **Repository Settings > Webhooks > Add Webhook**.
   - **Content Type:** `application/json`
   - **URL:** Your Jenkins webhook URL, including the pipeline's trigger token.
   - Add the webhook and authenticate it.

---

## 6. Create a Multibranch Pipeline

1. Create a new item in Jenkins:
   - **Name:** Your pipeline name
   - **Type:** Multibranch Pipeline
2. Configure the pipeline:
   - **Branch Source:**
     - **Type:** Git
     - **Credentials:** Jenkins (GitHubAccess)
     - **Repository URL:** Your GitHub repository URL
   - **Build Configuration:**
     - **Script Path:** `Jenkinsfile`
   - **Scan by Webhook:** Use the same token as the GitHub webhook.
3. Add the public SSH key generated earlier to **GitHub Deploy Keys**.

---

## 7. Configure Slack Notifications

1. Create a Slack channel and add the Jenkins app:
   - **Channel:** Your desired Slack channel.
   - **Token:** Copy the integration token.
2. Add the Slack token to Jenkins:
   - Navigate to **Manage Jenkins > Credentials > System > Global Credentials**.
   - Add Secret Text:
     - **ID:** slack-token
     - **Secret:** Paste the token.
3. Configure Slack in Jenkins:
   - Navigate to **Manage Jenkins > System**.
   - Add Slack configuration:
     - **Workspace:** Your Slack workspace name
     - **Credentials:** Select slack-token
     - **Channel:** Your Slack channel name

---

## 8. Additional Steps

- Update the `settings.xml` file with the correct JFrog URL.
- Assign an IAM role with admin access to Jenkins for pushing reports.
- Configure labels for nodes:
  - **Manage Jenkins > Nodes and Clouds > Built-in Node**:
    - **Labels:** MASTER

---

## 9. Test the Setup

1. Create a new branch (`development`) in GitHub.
2. Push a commit to the branch (a minimal sequence is sketched below).
3. Check if the pipeline triggers and runs successfully in Blue Ocean.
4. Create a `prod` branch and run the job on the **Prod-Slave** node.
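To exercise steps 1 and 2 from a workstation, one minimal sequence (the commit is an empty placeholder, used only to fire the webhook):

```bash
git checkout -b development
git commit --allow-empty -m "Trigger Jenkins multibranch scan"
git push -u origin development
```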
---

## 10. Stopping Instances

- Stop all instances when not in use, but do not terminate them, so that configurations are preserved.

---


================================================
FILE: Day 34 Jenkins-Part-2/0-jenkins_install.sh
================================================

sudo apt update && apt install -y unzip jq net-tools
apt install openjdk-17-jdk -y
apt install maven -y && curl https://get.docker.com | bash
useradd -G docker adminsai
usermod -aG docker adminsai

# AWS CLI install
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# # Azure CLI Ubuntu install
# curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# terraform.io and packer.io: copy the release link and install in /usr/local/bin
cd /usr/local/bin
wget https://releases.hashicorp.com/terraform/1.10.3/terraform_1.10.3_linux_amd64.zip
unzip terraform_1.10.3_linux_amd64.zip

# packer.io
wget https://releases.hashicorp.com/packer/1.11.2/packer_1.11.2_linux_amd64.zip
unzip packer_1.11.2_linux_amd64.zip

# docs.ansible.com: select Ubuntu and install accordingly
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
cd /etc/ansible
cp ansible.cfg ansible.cfg_backup
ansible-config init --disabled > ansible.cfg
nano ansible.cfg   # Ctrl+W, search for "host_key_checking" and set: host_key_checking = False

# Create one ansible user.
sudo useradd -m -s /bin/bash ansibleadmin
sudo mkdir -p /home/ansibleadmin/.ssh
sudo chown -R ansibleadmin:ansibleadmin /home/ansibleadmin/.ssh
sudo chmod 700 /home/ansibleadmin/.ssh
sudo touch /home/ansibleadmin/.ssh/authorized_keys
sudo chown ansibleadmin:ansibleadmin /home/ansibleadmin/.ssh/authorized_keys
sudo chmod 600 /home/ansibleadmin/.ssh/authorized_keys
sudo usermod -aG sudo ansibleadmin
echo 'ansibleadmin ALL=(ALL) NOPASSWD: ALL' | sudo tee -a /etc/sudoers
echo 'ssh-rsa key here' | sudo tee /home/ansibleadmin/.ssh/authorized_keys
usermod -aG root ansibleadmin
usermod -aG docker ansibleadmin

# Install Trivy
# https://github.com/aquasecurity/trivy/releases/download/v0.41.0/trivy_0.41.0_Linux-64bit.deb
cd /usr/local/bin
wget https://github.com/aquasecurity/trivy/releases/download/v0.41.0/trivy_0.41.0_Linux-64bit.deb
dpkg -i trivy_0.41.0_Linux-64bit.deb
file $(which trivy)   # sanity-check the installed binary

#################################
# 1. Reboot the system so the configuration takes effect. Once it is back up, take an AMI image and wait until the image has been created. Then install Jenkins.
# 2. Create DNS records for Jenkins, JFrog, and SonarQube, and turn on the Sonar/JFrog instance.
#################################

# Jenkins installation

# Add Jenkins GPG key
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc >/dev/null

# Add Jenkins repository to sources list
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list >/dev/null

# Update package list
sudo apt-get update

# (Optional) Check available Jenkins versions
sudo apt-cache madison jenkins | grep -i 2.426.2

# Install the specific Jenkins version
sudo apt-get install jenkins=2.426.2 -y

######################################################################################################################
# Log in and install all necessary plugins:
# - AWS Steps Plugin
# - Docker Plugin
# - SonarQube Scanner version 2.15 (configure it in Jenkins Configure System)
# - Blue Ocean
# - Multibranch Scan Webhook Trigger
# - Slack Notification
# - Ansible
# Once done, reboot the Jenkins server, then update the SSL certificate following the steps below.
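# (Optional sanity check -- not part of the original notes: confirm the baked-in
# tooling responds before taking the AMI and moving on to the SSL steps.)
java -version && mvn --version
terraform version && packer version
ansible --version && trivy --version
docker --version && aws --version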
#######################################################################################################################

# SSL Certificate
snap install --classic certbot
certbot certonly --manual --preferred-challenges=dns --key-type rsa \
  --email pinapathruni.saikiran@gmail.com --server https://acme-v02.api.letsencrypt.org/directory \
  --agree-tos -d "*.cloudvishwakarma.in"

# Get into /etc/letsencrypt/live/cloudvishwakarma.in/ and then run the commands below, because they need to pick up the certs.
openssl pkcs12 -inkey privkey.pem -in cert.pem -export -out certificate.p12
# password: India@123

# Now convert it into a JKS keystore.
keytool -importkeystore -srckeystore certificate.p12 -srcstoretype pkcs12 \
  -destkeystore jenkinsserver.jks -deststoretype JKS
# password: India@123

sudo cp jenkinsserver.jks /var/lib/jenkins/
sudo chown jenkins:jenkins /var/lib/jenkins/jenkinsserver.jks

# Edit the Jenkins service unit and set the following in its [Service] section:
nano /lib/systemd/system/jenkins.service
#   Environment="JENKINS_PORT=8080"
#   Environment="JENKINS_HTTPS_PORT=8443"
#   Environment="JENKINS_HTTPS_KEYSTORE=/var/lib/jenkins/jenkinsserver.jks"
#   Environment="JENKINS_HTTPS_KEYSTORE_PASSWORD=India@123"
#   AmbientCapabilities=CAP_NET_BIND_SERVICE

echo 'JENKINS_ARGS="$JENKINS_ARGS --httpsPort=8443 --httpPort=-1 --httpsPrivateKey=/etc/letsencrypt/live/cloudvishwakarma.in/privkey.pem --httpsCertificate=/etc/letsencrypt/live/cloudvishwakarma.in/fullchain.pem"' >>/etc/default/jenkins

sudo usermod -aG docker jenkins
sudo usermod -aG root jenkins
sudo systemctl daemon-reload && sudo systemctl restart jenkins && sudo systemctl status jenkins
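# (Optional verification -- not part of the original notes: list the keystore entry
# and confirm Jenkins now answers over HTTPS; the store password is the one set above.)
keytool -list -keystore /var/lib/jenkins/jenkinsserver.jks -storepass 'India@123'
curl -kI https://localhost:8443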
================================================
FILE: Day 34 Jenkins-Part-2/README.md
================================================

# Day 34 Jenkins-Part-2

![diagram-export-1-29-2025-8_53_17-PM](https://github.com/user-attachments/assets/123cd71f-a1ff-4263-a77f-13d00818363e)

# Jenkins RBAC and Backup & Restore

## Jenkins Role-Based Access Control (RBAC)

### Overview
Jenkins Role-Based Access Control (RBAC) allows administrators to define specific permissions for users and groups. This ensures proper access management and enhances security within Jenkins.

### Steps to Configure RBAC
1. **Login to Your Jenkins Server**
   - Open Jenkins on port `8443`.
2. **Navigate to Manage Jenkins**
   - You will not see any roles initially, since a plugin needs to be installed first.
3. **Install the Required Plugin**
   - Go to **Manage Jenkins > Plugins**.
   - Install the **Role-Based Authorization Strategy** plugin.
   - Wait for the installation to complete.
4. **Configure Security Settings**
   - Navigate to **Manage Jenkins > Security**.
   - Set **Security Realm** to: *Jenkins' own user database*.
   - Set **Authorization** to: *Role-Based Strategy*.
   - Click **Save**.
5. **Assign Roles**
   - Go to **Manage Jenkins > Manage and Assign Roles**.
   - Click on **Assign Roles** and configure project-level roles as needed.
   - Save the changes.
6. **Create User Accounts**
   - Navigate to **Manage Jenkins > Users**.
   - Create a user named `saikiran` with a password and fill in the required details.
   - Similarly, create three additional users.
7. **Assign Users to Roles**
   - Go to **Manage Jenkins > Manage and Assign Roles > Assign Roles**.
   - Add users and assign them appropriate roles.
   - Scroll down to **Item Roles** and configure access levels.
   - Click **Save**.
8. **Create a Project**
   - Create a new project named `java-project`.
   - Scroll down, select **Execute Shell**, and enter the necessary script.
   - Click **Save**.
9. **Create Additional Projects**
   - Create three more projects (e.g., Python, Java, etc.).
   - The code remains the same.
10. **Configure the Built-in Node**
    - Navigate to **Manage Jenkins > Nodes > Built-in Node**.
    - Remove existing labels and save.
11. **Test Role-Based Access Control**
    - Open Jenkins in a **private browser window**.
    - Log in with different users and observe the access differences.

---

## Jenkins Backup and Restore

### Overview
Regular backups ensure that Jenkins configurations and jobs can be restored in case of failure. Jenkins provides multiple backup options, including local and cloud storage.

### Steps to Backup Jenkins Data
1. **Navigate to the Jenkins Home Directory**
   ```sh
   cd /var/lib/jenkins
   du -h /var/lib/jenkins
   ```
   - You can directly back up these files if required.
2. **Use the ThinBackup Plugin for Backup**
   - Navigate to **Manage Jenkins > ThinBackup**.
   - Click on **Backup Now**.
3. **Create a Backup Directory on the Server**
   - Open **Putty** and run:
   ```sh
   mkdir /Jenkins-backup
   chown jenkins:jenkins /Jenkins-backup
   ```
4. **Configure Automatic Backup Schedule**
   - Go to **Manage Jenkins > ThinBackup**.
   - Set the backup schedule to run at 9 PM from Monday to Friday:
   ```
   0 21 * * 1-5
   ```
   - Set the maximum retention period to **30 days**.
   - Restart Jenkins:
   ```sh
   sudo systemctl restart jenkins
   ```
   - Enable backups and click **Save**.
   - Click **Backup Now** and verify the backup in **Putty**.
5. **Store Backups in the Cloud**
   - Since Jenkins could be completely deleted, it is advisable to store backups in **Amazon S3** or **Azure Blob Storage** for added security and redundancy (a sketch follows below).

---

By following these steps, you can effectively manage user roles in Jenkins and implement a reliable backup strategy to protect your Jenkins environment.
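As a sketch of that cloud copy (the bucket name `my-jenkins-backups` is a placeholder; this assumes the AWS CLI is installed and the instance role can write to the bucket, as set up earlier in the course):

```sh
# Archive JENKINS_HOME plus the ThinBackup target directory, then push to S3.
sudo tar czf /tmp/jenkins-backup-$(date +%F).tar.gz /var/lib/jenkins /Jenkins-backup
aws s3 cp /tmp/jenkins-backup-$(date +%F).tar.gz s3://my-jenkins-backups/
```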
================================================
FILE: Day 35 Jenkins-Part-3/Jenkinsfile
================================================

pipeline {
    agent none
    environment {
        PROJECT = "WELCOME TO Jenkins-Terraform Modules Pipeline"
        TERRAFORM_MODULE_REPO = "https://github.com/saikiranpi/Terraform_Modules.git"
    }
    stages {
        stage('For Parallel Stages') {
            parallel {
                stage('Deploy To Development') {
                    agent { label 'DEV' }
                    environment {
                        DEV_AWS_ACCOUNT = "053490018989"
                        DEVDEFAULTAMI = "ami-045d7ad26da8606ed"
                        TERRAFORM_APPLY = "NO"    // Set to YES to trigger apply
                        TERRAFORM_DESTROY = "YES" // Set to YES if you want to destroy
                    }
                    when { branch 'development' }
                    stages {
                        stage('Clone Terraform Modules') {
                            steps {
                                sh 'pwd'
                                sh 'rm -rf terraform-modules'
                                sh 'ls -al'
                                sh "git clone ${TERRAFORM_MODULE_REPO} terraform-modules"
                                sh 'ls -al terraform-modules/development'
                                sh 'find terraform-modules/development -name "*.tf"'
                            }
                        }
                        stage('Terraform Init & Plan') {
                            when { expression { "${env.TERRAFORM_APPLY}" == 'YES' } }
                            steps {
                                dir('terraform-modules/development') { // Navigate to development directory
                                    sh 'terraform init'
                                    sh 'terraform validate'
                                    sh 'terraform plan -var-file=terraform.tfvars'
                                }
                            }
                        }
                        stage('Terraform Apply') {
                            when { expression { "${env.TERRAFORM_APPLY}" == 'YES' } }
                            steps {
                                dir('terraform-modules/development') {
                                    sh 'terraform apply -var-file=terraform.tfvars --auto-approve'
                                }
                            }
                        }
                        stage('Terraform Destroy') {
                            when { expression { "${env.TERRAFORM_DESTROY}" == 'YES' } }
                            steps {
                                dir('terraform-modules/development') {
                                    sh 'terraform init'
                                    sh 'terraform validate'
                                    sh 'terraform destroy -var-file=terraform.tfvars --auto-approve'
                                }
                            }
                        }
                    }
                }
                stage('Deploy To Production') {
                    agent { label 'PROD' }
                    environment {
                        PROD_AWS_ACCOUNT = "009412611595"
                        PRODEFAULTAMI = "ami-0f45852828028bd50"
                        TERRAFORM_APPLY = "YES"  // Set to YES to trigger apply
                        TERRAFORM_DESTROY = "NO" // Set to YES if you want to destroy
                    }
                    when { branch 'production' }
                    stages {
                        stage('Clone Terraform Modules') {
                            steps {
                                sh 'pwd'
                                sh 'ls -al'
                                sh "git clone ${TERRAFORM_MODULE_REPO} terraform-modules"
                                sh 'ls -al terraform-modules/production'
                                sh 'find terraform-modules/production -name "*.tf"'
                            }
                        }
                        stage('Terraform Init & Plan') {
                            when { expression { "${env.TERRAFORM_APPLY}" == 'YES' } }
                            steps {
                                dir('terraform-modules/production') { // Navigate to production directory
                                    sh 'terraform init'
                                    sh 'terraform validate'
                                    sh 'terraform plan -var-file=terraform.tfvars'
                                }
                            }
                        }
                        stage('Terraform Apply') {
                            when { expression { "${env.TERRAFORM_APPLY}" == 'YES' } }
                            steps {
                                dir('terraform-modules/production') {
                                    sh 'terraform apply -var-file=terraform.tfvars --auto-approve'
                                }
                            }
                        }
                        stage('Terraform Destroy') {
                            when { expression { "${env.TERRAFORM_DESTROY}" == 'YES' } }
                            steps {
                                dir('terraform-modules/production') {
                                    sh 'terraform destroy -var-file=terraform.tfvars --auto-approve'
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    post {
        success {
            slackSend(color: 'good', message: "Pipeline Successful: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}")
        }
        failure {
            slackSend(color: 'danger', message: "Pipeline Failed: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}")
        }
        aborted {
            slackSend(color: 'warning', message: "Pipeline Aborted: ${env.JOB_NAME} ${env.BUILD_NUMBER} ${env.BUILD_URL}")
        }
        always {
            echo "I always run."
        }
    }
}


================================================
FILE: Day 35 Jenkins-Part-3/README.md
================================================

Jenkins Pipeline

# Jenkins Pipeline Setup with Multi-Branch Deployment for Infra Handling

This repository details the process of setting up a Jenkins multi-branch pipeline for automated deployments using GitHub webhooks and IAM roles on AWS instances.

## **Steps to Follow**

1. **Instance Setup & DNS Configuration**
   - Turn on the AWS instances and configure DNS records accordingly.
2. **Code Explanation**
   - Review and understand the provided codebase for deployment automation.
3. **IAM Role Assignment**
   - Assign appropriate IAM roles to all three instances to manage permissions.
4. **DNS Update**
   - Update the DNS names after an instance restart, as the previous values might have changed.
5. **GitHub Repository Setup**
   - Create a **private GitHub repository**.
   - Push the code to a **development branch** for better version control.
6. **Webhook Configuration**
   - Go to **Repo Settings > Webhooks**.
   - Copy the webhook URL from the previous Spring Boot repository and apply it here.
7. **Deploy Key Setup**
   - Remove the deploy key from the Spring Boot app.
   - Create a new deploy key under the **infra-pipeline** section.
   - Switch to the Jenkins user with `su - jenkins`, then run `cat ~/.ssh/id_rsa.pub` and paste the output under GitHub deploy keys.
8. **Jenkins Pipeline Configuration**
   - Create a new pipeline in Jenkins:
     - Select **New Item** and choose **Multibranch Pipeline**.
     - Under **Branch Sources > Git**, paste the repository SSH URL (`git@...`).
     - Use **Jenkins GitHub access credentials** for authentication.
     - Enable **Webhook-triggered builds** by entering the token from the webhook settings.
     - Save the configuration.


================================================
FILE: Day 36 Jenkins-Part-4/README.md
================================================

# Day 36 Jenkins-Part-4


================================================
FILE: README.md
================================================

## MASTERING DEVSECOPS