Wednesday, January 18, 2023

Create a VPC, IAM Users, and an S3 Bucket with Terraform

 





Understanding AWS VPCs

The AWS cloud offers dozens of services spanning compute, storage, networking, and more. These services can communicate with one another much like the services in an on-prem datacenter, but by default they are independent and not necessarily isolated from each other. The AWS VPC changes that.


An AWS VPC (Virtual Private Cloud) is a single, isolated network within which you launch AWS resources. Technically, an AWS VPC gives you almost the same control as owning a datacenter, but with the built-in benefits of scalability, fault tolerance, virtually unlimited storage, etc.







Building the Terraform Configuration for an AWS VPC

Enough talk, let’s get down to building!


1. To start, create a folder to store your Terraform configuration files in. This tutorial will create a folder called terraform-vpc in your home directory.

2. Open your favorite code editor, copy/paste the following configuration, and save the file as vpc.tf inside the ~/terraform-vpc directory. Information about each resource is inline.

The vpc.tf file contains all the resources that will be provisioned.

Terraform uses several types of configuration files. Each is written either in HCL (plain text, with a .tf extension) or in JSON (with a .tf.json extension).

The Terraform configuration below:


Creates a VPC

Creates an Internet Gateway and attaches it to the VPC so that resources inside the VPC can reach, and be reached from, the outside world

Creates three public and three private subnets
Subnets are networks within networks. They help network traffic flow more efficiently and provide smaller, more manageable ‘chunks’ of IP addresses.

Creates a route table for the public subnets and associates it with each of them (the route table for the private subnets is created later, in nat.tf)

# Create AWS VPC
resource "aws_vpc" "YOUR_DESIRED_NAME" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "YOUR_DESIRED_NAME"
  }
}

# Public Subnets in Custom VPC
resource "aws_subnet" "YOUR_DESIRED_NAME-public-1" {
  vpc_id                  = aws_vpc.YOUR_DESIRED_NAME.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "us-east-1a"

  tags = {
    Name = "YOUR_DESIRED_NAME-public-1"
  }
}

resource "aws_subnet" "YOUR_DESIRED_NAME-public-2" {
  vpc_id                  = aws_vpc.YOUR_DESIRED_NAME.id
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "us-east-1b"

  tags = {
    Name = "YOUR_DESIRED_NAME-public-2"
  }
}

resource "aws_subnet" "YOUR_DESIRED_NAME-public-3" {
  vpc_id                  = aws_vpc.YOUR_DESIRED_NAME.id
  cidr_block              = "10.0.3.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "us-east-1c"

  tags = {
    Name = "YOUR_DESIRED_NAME-public-3"
  }
}

# Private Subnets in Custom VPC
resource "aws_subnet" "YOUR_DESIRED_NAME-private-1" {
  vpc_id                  = aws_vpc.YOUR_DESIRED_NAME.id
  cidr_block              = "10.0.4.0/24"
  map_public_ip_on_launch = false
  availability_zone       = "us-east-1a"

  tags = {
    Name = "YOUR_DESIRED_NAME-private-1"
  }
}

resource "aws_subnet" "YOUR_DESIRED_NAME-private-2" {
  vpc_id                  = aws_vpc.YOUR_DESIRED_NAME.id
  cidr_block              = "10.0.5.0/24"
  map_public_ip_on_launch = false
  availability_zone       = "us-east-1b"

  tags = {
    Name = "YOUR_DESIRED_NAME-private-2"
  }
}

resource "aws_subnet" "YOUR_DESIRED_NAME-private-3" {
  vpc_id                  = aws_vpc.YOUR_DESIRED_NAME.id
  cidr_block              = "10.0.6.0/24"
  map_public_ip_on_launch = false
  availability_zone       = "us-east-1c"

  tags = {
    Name = "YOUR_DESIRED_NAME-private-3"
  }
}

# Custom Internet Gateway
resource "aws_internet_gateway" "YOUR_DESIRED_NAME-gw" {
  vpc_id = aws_vpc.YOUR_DESIRED_NAME.id

  tags = {
    Name = "YOUR_DESIRED_NAME-gw"
  }
}

# Route Table for the public subnets
resource "aws_route_table" "YOUR_DESIRED_NAME-public" {
  vpc_id = aws_vpc.YOUR_DESIRED_NAME.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.YOUR_DESIRED_NAME-gw.id
  }

  tags = {
    Name = "YOUR_DESIRED_NAME-public"
  }
}

resource "aws_route_table_association" "YOUR_DESIRED_NAME-public-1-a" {
  subnet_id      = aws_subnet.YOUR_DESIRED_NAME-public-1.id
  route_table_id = aws_route_table.YOUR_DESIRED_NAME-public.id
}

resource "aws_route_table_association" "YOUR_DESIRED_NAME-public-2-a" {
  subnet_id      = aws_subnet.YOUR_DESIRED_NAME-public-2.id
  route_table_id = aws_route_table.YOUR_DESIRED_NAME-public.id
}

resource "aws_route_table_association" "YOUR_DESIRED_NAME-public-3-a" {
  subnet_id      = aws_subnet.YOUR_DESIRED_NAME-public-3.id
  route_table_id = aws_route_table.YOUR_DESIRED_NAME-public.id
}
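
Before moving on, you can optionally tidy and sanity-check the file with Terraform’s built-in helpers (terraform validate needs the provider plugins, so run it after terraform init, covered below):

terraform fmt      # normalizes spacing and alignment in your .tf files
terraform validate # checks the configuration for syntax and internal consistency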





3. Create one more file inside the ~/terraform-vpc directory, paste in the following code, and name it provider.tf to define the AWS provider. The tutorial will create its resources in the us-east-1 region, matching the availability zones used in vpc.tf.


The providers file defines providers such as AWS, Oracle, or Azure so that Terraform can connect to the correct cloud services.

provider "aws" {
  region     = var.AWS_REGION
}
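
Optionally, you can also pin the AWS provider version here, as the IAM example later in this post does. A minimal sketch (the ~> 3.0 constraint is an assumption; adjust it to the version you actually test against):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0" # assumed constraint; pin to the version you use
    }
  }
}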


4. Create one more file inside the ~/terraform-vpc directory, paste in the following code, and name it nat.tf. It defines a NAT Gateway (with an Elastic IP) and the route table for the private subnets, so instances in those subnets can reach the internet without being reachable from it.


# Define an Elastic IP for the NAT Gateway
resource "aws_eip" "YOUR_DESIRED_NAME-nat" {
  vpc = true # removed in AWS provider v5+; use domain = "vpc" there instead
}

resource "aws_nat_gateway" "YOUR_DESIRED_NAME-nat-gw" {
  allocation_id = aws_eip.YOUR_DESIRED_NAME-nat.id
  subnet_id     = aws_subnet.YOUR_DESIRED_NAME-public-1.id
  depends_on    = [aws_internet_gateway.YOUR_DESIRED_NAME-gw]

  tags = {
    Name = "YOUR_DESIRED_NAME"
  }
}

resource "aws_route_table" "YOUR_DESIRED_NAME-private" {
  vpc_id = aws_vpc.YOUR_DESIRED_NAME.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.YOUR_DESIRED_NAME-nat-gw.id
  }

  tags = {
    Name = "YOUR_DESIRED_NAME-private"
  }
}

# Route associations for the private subnets
resource "aws_route_table_association" "YOUR_DESIRED_NAME-private-1-a" {
  subnet_id      = aws_subnet.YOUR_DESIRED_NAME-private-1.id
  route_table_id = aws_route_table.YOUR_DESIRED_NAME-private.id
}

resource "aws_route_table_association" "YOUR_DESIRED_NAME-private-2-a" {
  subnet_id      = aws_subnet.YOUR_DESIRED_NAME-private-2.id
  route_table_id = aws_route_table.YOUR_DESIRED_NAME-private.id
}

resource "aws_route_table_association" "YOUR_DESIRED_NAME-private-3-a" {
  subnet_id      = aws_subnet.YOUR_DESIRED_NAME-private-3.id
  route_table_id = aws_route_table.YOUR_DESIRED_NAME-private.id
}

Then create another file called variables.tf and copy the code below into it (provider.tf references var.AWS_REGION, so this variable must exist):

variable "AWS_REGION" {
  default = "us-east-1"
}
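
The default can be overridden at plan/apply time without editing the file, for example:

terraform plan -var="AWS_REGION=us-east-2"

Note that the availability zones in vpc.tf are hardcoded to us-east-1, so if you change the region you must update those as well.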

Running Terraform to Create the AWS VPC

Now that you have the Terraform configuration and variables files ready to go, it’s time to initialize Terraform and create the VPC! To provision a Terraform configuration, Terraform typically uses a three-stage approach: terraform init → terraform plan → terraform apply. Let’s walk through each stage now.

1. Open a terminal in VS Code (or whichever editor you use).

2. Run the terraform init command in the same directory. The terraform init command downloads and initializes the plugins and providers required to work with the resources.

terraform init

If all goes well, you should see the message "Terraform has been successfully initialized" in the output.

3. Now, run the terraform plan command. This step is optional yet recommended: it confirms your configuration’s syntax is correct and gives you an overview of which resources will be provisioned in your infrastructure.

terraform plan



If successful, you should see a message like Plan: X to add, Y to change, Z to destroy in the output to indicate the command succeeded. You will also see every AWS resource Terraform intends to create.

4. Next, tell Terraform to actually provision the AWS VPC and resources using terraform apply. When you invoke terraform apply, Terraform reads vpc.tf and the other configuration files to compile a single configuration, then sends that configuration to AWS as instructions to build the VPC and other components.


terraform apply

Review the plan one last time and type yes when prompted; Terraform will then create the resources.
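
For reference, the full sequence, plus an optional cleanup step for when you are done experimenting:

terraform init     # download the required providers and plugins
terraform plan     # preview the resources to be created
terraform apply    # create the resources (confirm with "yes")
terraform destroy  # optional: tear everything down afterwards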




Steps to create an S3 bucket using Terraform

In this section, we will first discuss the S3 bucket and then the Terraform configuration files that define it.
We will structure the bucket configuration as a small reusable module.

1. Create S3 bucket module
Create a module that holds a basic S3 bucket configuration. For that, create one folder named “S3” containing two files: bucket.tf and var.tf.

2. Define bucket
Open bucket.tf and define bucket in that.

bucket.tf

resource "aws_s3_bucket" "demos3" {
    bucket = "${var.bucket_name}" 
    acl = "${var.acl_value}"   
}

Explanation

We have a block with the key name “resource” and the resource type “aws_s3_bucket”, which is what we want to create.
The resource type is a fixed value and depends on the provider: here AWS is our provider and S3 is our resource.
“demos3” is the resource name, which the user provides.
bucket and acl are arguments of this resource; we can pass different arguments according to our needs, with their corresponding values.
We can either provide a value directly or use the var.tf file to declare the value of an argument.
3. Define variables
In var.tf, we will define the variables used by bucket.tf.

var.tf


variable "bucket_name" {}

variable "acl_value" {
    default = "private"
}


Explanation

As mentioned above, var.tf is used to declare variables and their values.
We can either provide a default value to be used when none is supplied, or leave the default out and be asked for a value during execution.
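
To actually create a bucket from this module, reference it from a root configuration. A minimal sketch, assuming a hypothetical main.tf next to the S3 folder and a made-up bucket name (S3 bucket names must be globally unique):

module "s3" {
  source      = "./S3"
  bucket_name = "my-demo-bucket-20230118" # hypothetical name; must be globally unique
}

Running terraform init and terraform apply in the root folder will then provision the bucket.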




The next config uses Terraform to provision IAM users

Use the config below.

Create a folder named IAM_Users, and inside it create a file named main.tf:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.42.0"
    }
  }
}

provider "aws" {
  region = var.region
}

resource "aws_iam_group" "developers" {
  name = "developers"
}

resource "aws_iam_user" "new_user" {
  name = "new_user"
}

resource "aws_iam_user_group_membership" "new_user" {
  user   = "${aws_iam_user.new_user.name}"
  groups = ["developers"]
}

resource "aws_iam_access_key" "my_access_key" {
  user = aws_iam_user.new_user.name
  pgp_key = var.pgp_key
}


resource "aws_iam_user_login_profile" "new_user" {
  user    = "${aws_iam_user.new_user.name}"
  pgp_key = "keybase:apprenticecto"
}

output "password" {
  value = "${aws_iam_user_login_profile.new_user.encrypted_password}"
  sensitive = false
}

output "secret" {
  value = aws_iam_access_key.my_access_key.encrypted_secret
  sensitive = true
}  
 

Create another file called variables.tf

variable "region" {
  default = "us-east-1"
}

variable "pgp_key" {
 description = "Either a base-64 encoded PGP public key, or a keybase username in the form keybase:username. Used to encrypt the password and the access key on output to the console."
 default     = ""
}


Run the necessary Terraform commands: terraform init, terraform plan, and terraform apply.
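
Because the password and secret are PGP-encrypted, terraform output returns them as base64-encoded ciphertext. A sketch of retrieving and decrypting the console password, assuming you used a keybase: value for pgp_key and have the keybase CLI installed (terraform output -raw requires Terraform 0.15 or newer):

terraform output -raw password | base64 --decode | keybase pgp decrypt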



Monday, January 16, 2023

Dockerizing a Previously Built WebApp, Pushing It to Docker Hub, and Trying It Out on an EC2 Instance



Requirements

  • A Maven project to be dockerized for deployment. In this tutorial, we will be using the projects we have in class
  • GitHub repository
  • Docker Hub repository
  • A Linux machine where your container images will be tested


The GitHub workflow

Triggering GitHub Actions for our project is as simple as having the correct configuration files in the correct place. Create two new YAML configuration files under the /.github/workflows folder in the root of your project. The first file will be used for the master branch and will run some tests to make sure every push is OK. The second one will be applied to release branches only and will not only test the new version but also create a Docker image for it and trigger a redeploy.

The master workflow

name: Master - Testing

on:
  push:
    branches:
      - 'main'

jobs:
  artifact:
    name: Test master branch - GitHub Packages
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v1
      - name: Set up JDK 11
        uses: actions/setup-java@v1
        with:
          java-version: 11.0.4
      - name: Maven Package
        run: mvn -B clean package -DskipTests
      - name: Maven Verify
        run: mvn -B clean verify

This simple file is easily readable: it triggers the packaging and verifying phases of the Maven build lifecycle in a Java 11 environment. I strongly suggest taking a look at the official documentation on workflows to understand how these configuration files work. Now let’s take a look at the one we just created, step by step:

name: Master - Testing

on:
  push:
    branches:
      - 'main'

This workflow will be triggered only when the main branch gets pushed. The name you give your workflow will be visible when checking your actions history on the GitHub page, as well as in email notifications about failed runs, so always choose a meaningful one.

jobs:
  artifact:
    name: Test master branch - GitHub Packages
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v1
      - name: Set up JDK 11
        uses: actions/setup-java@v1
        with:
          java-version: 11.0.4
      - name: Maven Package
        run: mvn -B clean package -DskipTests
      - name: Maven Verify
        run: mvn -B clean verify

We have only one job in this workflow. We define the type of machine the job will run on; you can find a list of the possible values here.

A job always consists of steps. Here, we set up a Java environment with the desired version, and once it is ready we can run whatever Maven commands we want: in this case, we package our app and then verify it.

Now that we have the master workflow ready, let’s turn our attention to the star of the show. Our release workflow should build and publish a Docker image to our Docker Hub repository.


Preparing the server

For this lab, we will use a simple AWS EC2 instance running Ubuntu 18.04.

Install docker

Since our application will be turned into a container image, our server will need to have docker installed to always pull and run the latest version. You can find detailed information on how to install the latest version of docker here, but this simple list of commands will do the trick in most cases:

sudo apt-get update
sudo apt-get remove docker docker-engine docker.io
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker

You can then check if the installation was successful:

docker --version

Create Dockerfile

In order to turn our application into a container image, we need a configuration file describing the steps necessary to build it. Create the following file, named Dockerfile, in the root directory of your project.

# Use an official Maven image as the build stage
FROM maven:3.8.3-jdk-11 AS build

# Set the working directory to /app
WORKDIR /app

# Copy the pom.xml and the webapp source into the container
COPY pom.xml .
COPY webapp/ ./src/

# Build the application
RUN mvn clean package

# Use an official Nginx image as the parent image for the final stage
FROM nginx:1.21.3-alpine

# Copy the webapp content from the build context into Nginx's html directory
COPY webapp /usr/share/nginx/html

# Expose port 80 for Nginx
EXPOSE 80

# Start the Nginx web server in the foreground
CMD ["nginx", "-g", "daemon off;"]


The first stage creates a workspace containing the pom file and the source code of your Maven project and packages it; the final stage then serves the webapp content with Nginx, which listens on port 80 as the container’s entry point.
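
Before wiring this into CI, you can test the image locally. A quick sketch, using a hypothetical image name:

docker build -t my-webapp:test .         # build the image from the Dockerfile
docker run -d -p 8080:80 my-webapp:test  # then browse to http://localhost:8080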



…and back to the workflow

Let’s get back to business. Our release workflow needs to do these things automatically, and here’s how it is done.

Create your secrets

Store the following variables as secrets in GitHub so that the workflow is able to access your docker account.

  • DOCKER_USER - your username
  • DOCKER_TOKEN - a Docker Hub access token (or your password)
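
You can add these in your repository’s Settings → Secrets, or from the command line with the GitHub CLI. A sketch, assuming gh is installed and authenticated:

gh secret set DOCKER_USER --body "your-docker-username"
gh secret set DOCKER_TOKEN --body "your-docker-hub-access-token"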




Create your workflow file

Just like we did for the master branch, create another YAML file in /.github/workflows for the build-and-publish workflow.



name: Docker Workflow

on:
  push:
    branches: [main]

jobs:
  build-and-publish:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_TOKEN }}

      - name: Build and tag Docker image
        run: |
          IMAGE_NAME=$(echo "${{ github.repository }}" | tr '[:upper:]' '[:lower:]')
          IMAGE_TAG="${IMAGE_NAME}:latest"
          docker build -t "${IMAGE_TAG}" .

      - name: Push Docker image
        run: |
          IMAGE_NAME=$(echo "${{ github.repository }}" | tr '[:upper:]' '[:lower:]')
          IMAGE_TAG="${IMAGE_NAME}:latest"
          docker push "${IMAGE_TAG}"


Copy this into your GitHub workflow file and name it whatever you want to name the build. On every push to main, it will check out the code, log in to Docker Hub, build a Docker image from the Dockerfile (which runs the Maven build inside the image), and push it to Docker Hub.

After this is done, go over to your EC2 instance and run this command to pull the image from Docker Hub onto your instance:

docker pull <username>/<reponame>:latest


This will pull the image from Docker Hub onto your EC2 instance, and then you can run:

docker run -d -p 8080:80 <username>/<reponame>:latest


The -d option causes Docker to detach the container and run it in the background. The -p argument establishes a port mapping, which defines that port 80 of the container (as specified in the Dockerfile) should be exposed on port 8080 of our host machine.

To check the details of our running container, type in the following command:

docker ps

Output:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12d6293d6e19 devopsclass/appname:latest "/docker-entrypoint.…" 16 seconds ago Up 14 seconds 0.0.0.0:8080->80/tcp keen_khayyam

As per the above output, we see that the container is up and running. If we now head to http://<ip-address>:8080/ we can see the web application has been successfully dockerized.
