Netflix Replica DevSecOps Project: Comprehensive CI/CD Pipeline Guide

Using Jenkins on AWS Cloud (EC2 instances), with an optional Azure Kubernetes Service deployment

Tools and Technologies for Netflix Replica DevSecOps Project

Our Netflix Replica project utilizes a comprehensive set of tools and technologies to ensure efficient development, robust security, and seamless deployment. Here's an overview of our tech stack:

  • Infrastructure and Cloud Services
  • Container Orchestration and Management
  • Continuous Integration and Delivery
  • Security and Code Quality
  • Monitoring and Observability
  • External Services

This carefully selected stack enables us to build a scalable, secure, and high-performance Netflix Replica while maintaining best practices in DevSecOps.

Project Overview

This project showcases a comprehensive CI/CD pipeline for a Netflix clone. It outlines the steps for setting up the development environment, configuring essential tools, and implementing robust security and monitoring solutions.

NOTE: For advanced users familiar with Terraform, I recommend exploring the "aws_terraform_creation" directory. This section not only streamlines the process but also offers valuable insights into module separation techniques.

For those familiar with Terraform, you can streamline the process of initializing the CI/CD and Monitoring Server using the provided configuration in the "terraform_provision_aws" folder within the git repository. Here's a quick guide:

  1. Navigate to the "terraform_provision_aws" directory
  2. Run terraform init to initialize the Terraform working directory
  3. Edit "terraform.tfvars" to set the values for your environment
  4. Apply the Terraform configuration to provision your infrastructure

Using this Terraform setup allows you to skip Steps 1 through 6, as well as the Grafana, Prometheus, and node_exporter installation steps. This approach streamlines the infrastructure provisioning process.
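
Here's what that workflow looks like end to end (a minimal sketch; the variable names inside terraform.tfvars depend on the repository's configuration):

cd terraform_provision_aws
terraform init
# review the planned changes against your edited terraform.tfvars
terraform plan -var-file="terraform.tfvars"
terraform apply -var-file="terraform.tfvars"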

Steps 1-6: VM Provisioning with UserData

If you don't have a VM yet, launch a new one on AWS with the user data script below; it covers Steps 1 to 6 (including the Grafana, Prometheus, and node_exporter installations), so you can skip those steps.

NOTE: Some steps may require additional configuration.

#!/bin/bash
sudo apt update -y
sudo mkdir -p /etc/apt/keyrings
wget -qO - https://packages.adoptium.net/artifactory/api/gpg/key/public | sudo tee /etc/apt/keyrings/adoptium.asc
echo "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | sudo tee /etc/apt/sources.list.d/adoptium.list
sudo apt update -y
sudo apt install temurin-17-jdk -y
/usr/bin/java --version
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update -y
sudo apt install jenkins -y
#install docker
sudo apt update -y
sudo apt install docker.io docker-compose -y
sudo usermod -aG docker $(whoami)
# note: group membership only takes effect in a new login shell; the chmod below
# opens the Docker socket so Jenkins can use it (fine for a lab, not for production)
sudo chmod 777 /var/run/docker.sock
# Create a temporary directory
temp_dir="/tmp/docker_compose"
mkdir -p "$temp_dir"
cat > "$temp_dir/docker-compose.yml" <<EOF
version: "3"
services:
  sonarqube:
    image: sonarqube:latest
    ports:
      - "9000:9000"
    environment:
      - SONAR_JDBC_URL=jdbc:postgresql://db:5432/sonar
      - SONAR_JDBC_USERNAME=sonar
      - SONAR_JDBC_PASSWORD=sonar
    volumes:
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_logs:/opt/sonarqube/logs
    depends_on:
      - db

  db:
    image: postgres:12
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data

volumes:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_logs:
  postgresql:
  postgresql_data:
EOF
# SonarQube's embedded Elasticsearch needs a higher mmap limit
sudo sysctl -w vm.max_map_count=262144
# Run Docker Compose
cd "$temp_dir"
docker-compose up -d
cd
# install trivy
sudo apt install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt update -y
sudo apt install trivy -y
#install Grafana on Jenkins Server
sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt update -y
sudo apt install grafana -y
sudo systemctl start grafana-server
sudo systemctl enable grafana-server
#install Kubectl on Jenkins Server
sudo apt update -y
sudo apt install curl -y
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
#Create a system user for Prometheus
sudo useradd --system --no-create-home --shell /bin/false prometheus
#Create directories for Prometheus
sudo mkdir -p /data /etc/prometheus
#Download and install Prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.53.2/prometheus-2.53.2.linux-amd64.tar.gz
tar -xvf prometheus-2.53.2.linux-amd64.tar.gz
sudo mv prometheus-2.53.2.linux-amd64/prometheus /usr/local/bin/
sudo mv prometheus-2.53.2.linux-amd64/promtool /usr/local/bin/
sudo mv prometheus-2.53.2.linux-amd64/consoles/ /etc/prometheus/
sudo mv prometheus-2.53.2.linux-amd64/console_libraries/ /etc/prometheus/
sudo mv prometheus-2.53.2.linux-amd64/prometheus.yml /etc/prometheus/prometheus.yml
#Configure Prometheus
cat <<'EOL' | sudo tee /etc/prometheus/prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:7718"]

  - job_name: node_export
    static_configs:
      - targets: ["localhost:9101"]

  - job_name: 'Jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['localhost:8080']
EOL
sudo chown -R prometheus:prometheus /etc/prometheus/ /data/
rm -rf prometheus-2.53.2.linux-amd64
rm -f prometheus-2.53.2.linux-amd64.tar.gz
#Create a Prometheus service file
cat <<'EOL' | sudo tee /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:7718 \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target
EOL
sudo systemctl daemon-reload
sudo systemctl start prometheus
sudo systemctl enable prometheus
#Install and Configure Node Exporter
sudo useradd --system --no-create-home --shell /bin/false node_exporter
#Download and install Node Exporter
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.2/node_exporter-1.8.2.linux-amd64.tar.gz
tar xvf node_exporter-1.8.2.linux-amd64.tar.gz
sudo mv node_exporter-1.8.2.linux-amd64/node_exporter /usr/local/bin/
rm -rf node_exporter*
#Create a Node Exporter service file
cat <<'EOL' | sudo tee /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter --web.listen-address=:9101

[Install]
WantedBy=multi-user.target
EOL
#Start and enable Node Exporter service
sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl enable node_exporter
#Check Prometheus configuration and reload
promtool check config /etc/prometheus/prometheus.yml
curl -X POST http://localhost:7718/-/reload

Step 7: Access The Jenkins And SonarQube Servers

Jenkins Server

To access the Jenkins server, open http://<jenkins-server-ip>:8080 in your browser, then retrieve the initial admin password on the server:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Copy the password and paste it into the Jenkins setup wizard.

SonarQube Server

To access the SonarQube server, open http://<server-ip>:9000 in your browser and log in with the default credentials (admin / admin).

Remember to change the default password after your first login for security reasons.

With both servers now accessible, you can proceed to configure your Jenkins pipeline to integrate with SonarQube for code quality analysis.

Step 8: Install Suggested Plugins and Create First Admin User

Install Suggested Plugins

After unlocking Jenkins, you'll be prompted to customize Jenkins. For a quick start:

Create First Admin User

Once the plugins are installed, you'll be prompted to create your first admin user:

After these steps, Jenkins will be ready for use with your admin account. You can now start configuring your CI/CD pipeline and integrating with other tools like SonarQube.

Step 9: Create SonarQube Token and Configure Jenkins

Create SonarQube Token

Create Jenkins Credentials for SonarQube

Configure SonarQube Server in Jenkins

Create Webhook in SonarQube for Jenkins

With these configurations in place, Jenkins will be able to communicate with SonarQube for code analysis, and SonarQube will be able to send analysis results back to Jenkins, enabling the Quality Gate stage in your pipeline.
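
For reference, the webhook typically points back to Jenkins like this (the host is a placeholder; the /sonarqube-webhook/ path is the one the Jenkins SonarQube Scanner plugin listens on):

# SonarQube > Administration > Configuration > Webhooks
Name: jenkins
URL:  http://<jenkins-server-ip>:8080/sonarqube-webhook/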

Step 10: Create TMDB API Key

To use The Movie Database (TMDB) API in your Netflix Replica, you need to create an API key. Follow these steps:

  1. Go to the TMDB website (https://www.themoviedb.org/) and create an account if you don't have one.
  2. Once logged in, go to your account settings by clicking on your avatar in the top right corner.
  3. In the left sidebar, click on "API".
  4. Click on "Create" under "Request an API Key".
  5. Choose "Developer" as the type of API key.
  6. Fill out the form with your application details.
  7. Agree to the terms of use and click "Submit".
  8. You will now see your API key. Make sure to keep this key secure and never share it publicly.

Once you have your TMDB API key, you can use it in your application. In the Jenkins pipeline, you can see that the API key is being passed as a build argument to Docker:

docker build --build-arg TMDB_V3_API_KEY=8bbb70a96fc606461acd71f172703286 -t henops/netflix .

Replace the example API key with your actual TMDB API key. For security reasons, it's recommended to store this key as a Jenkins credential and reference it in your pipeline, rather than hardcoding it.
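
As a sketch of that recommendation, assuming a Jenkins "Secret text" credential created with the hypothetical ID 'tmdb-api-key':

withCredentials([string(credentialsId: 'tmdb-api-key', variable: 'TMDB_KEY')]) {
    // single-quoted so the secret is expanded by the shell, not interpolated into the Groovy script
    sh 'docker build --build-arg TMDB_V3_API_KEY=$TMDB_KEY -t netflix .'
}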

Steps 11 - 13: Create Monitoring For Jenkins Server

The user data script above also covers Steps 11 to 13.

Step 14: Configure Jenkins for Prometheus

Install the Prometheus plugin in Jenkins:
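
Once the plugin is installed, Jenkins exposes metrics at the /prometheus path, which is exactly what the 'Jenkins' scrape job in our prometheus.yml targets. A quick sanity check from the server:

curl -s http://localhost:8080/prometheus/ | head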

Step 15: Configure Grafana

1. Add Prometheus as a data source in Grafana

2. Import dashboards

Import pre-built dashboards for Node Exporter and Jenkins:
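
Commonly used community dashboard IDs for this stack (imported via Dashboards > Import, with Prometheus selected as the data source):

1860  -> Node Exporter Full
9964  -> Jenkins: Performance and Health Overview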

With these steps completed, you now have Prometheus collecting metrics from your system and Jenkins, with Grafana providing visualization of these metrics through pre-built dashboards.

Generate App Password for Gmail (Optional)

To use Gmail with Jenkins, you'll need to generate an App Password:

  1. Go to your Google Account settings (https://myaccount.google.com/)
  2. Navigate to "Security"
  3. Under "Signing in to Google," select "2-Step Verification"
  4. Scroll to the bottom and select "App passwords" (if you can't find the option, go directly to https://myaccount.google.com/apppasswords)
  5. Choose "Mail" and "Other (Custom name)" from the dropdowns
  6. Enter a name for the app (e.g., "Jenkins Email")
  7. Click "Generate"
  8. Copy the 16-character password that appears

Use this App Password instead of your regular Gmail password when configuring the email settings in Jenkins.

Step 16: Email Integration with Jenkins

To set up email integration with Jenkins, follow these steps:

1. Install Email Extension Plugin

2. Configure Jenkins System Settings

3. Configure Extended E-mail Notification
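
Typical values for Gmail in "Manage Jenkins" > "System", for both E-mail Notification and Extended E-mail Notification (the credential is the App Password generated earlier, not your regular Gmail password):

SMTP server: smtp.gmail.com
SMTP port:   465 (SSL) or 587 (TLS)
Credentials: your Gmail address + the App Password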

4. Update Jenkins Pipeline

Update your Jenkinsfile to include email notifications. Here's an example of how to modify the post section:

post {
    always {
        emailext attachLog: true,
            subject: "${currentBuild.result}: Build #${env.BUILD_NUMBER} for ${env.JOB_NAME}",
            body: """
                <h2>Build Notification</h2>
                <p><strong>Project:</strong> ${env.JOB_NAME}</p>
                <p><strong>Build Number:</strong> ${env.BUILD_NUMBER}</p>
                <p><strong>Status:</strong> ${currentBuild.result}</p>
                <p><strong>Build URL:</strong> <a href="${env.BUILD_URL}">${env.BUILD_URL}</a></p>
                <p>For more details, please check the attached logs.</p>
            """,
            to: '[email protected]',
            attachmentsPattern: 'trivyfs.txt,trivyimage.txt,trivyimage2.txt'
    }
}

This configuration will send an email after each build, including build status, job name, build number, and a link to the console output. Adjust the recipient email and other details as needed for your project.

5. Test the Email Integration

Run a build of your pipeline to test if the email notifications are working correctly. You should receive an email with the build status and details after the pipeline completes.

By following these steps, you'll have successfully integrated email notifications into your Jenkins pipeline, keeping your team informed about build statuses and results.

Step 17: Install Required Jenkins Plugins

To enhance our Jenkins pipeline, we need to install several important plugins. Follow these steps to install the required plugins:

  1. Navigate to "Manage Jenkins" > "Manage Plugins"
  1. Click on the "Available" tab
  1. Use the search box to find and select the following plugins:
    • JDK Tool Plugin (jdk - 17, adoptium.net)
    • NodeJS Plugin (NodeJS - 16)
    • OWASP Dependency-Check Plugin
  1. Wait for the installation to complete
  1. Check the box that says "Restart Jenkins when installation is complete and no jobs are running"

After Jenkins restarts, log back in to configure the newly installed plugins.

Configure Installed Plugins

Once Jenkins has restarted, you'll need to configure the installed plugins:

  1. Go to "Manage Jenkins" > "Tools"
  1. Scroll down to find sections for JDK, NodeJS, and SonarQube Scanner
  1. For each tool, click "Add {Tool Name}" and provide the necessary information:
    • JDK: Provide a name and select the installation method (usually "Install automatically")
    • NodeJS: Give it a name and choose the version you want to use
  1. For OWASP Dependency-Check, go to "Manage Jenkins" > "Tools" and add a new OWASP Dependency-Check installation
  1. Click "Save" at the bottom of the page

With these plugins installed and configured, your Jenkins setup is now ready to support a more comprehensive CI/CD pipeline for the Netflix Replica project.

Step 18: Docker Image Build and Push

  1. First, install the Docker tooling: go to Dashboard → Manage Plugins → Available plugins, then search for and install these plugins:

Docker / Docker Commons / Docker Pipeline / Docker API / docker-build-step

  2. With the Docker plugins installed, navigate to Jenkins Tools to configure Docker. Enter "docker" under Name, select "latest" under Docker version, then click Apply and Save.
  3. Next, add the DockerHub credentials to Jenkins. Navigate to 'Manage Jenkins' > 'Manage Credentials' > 'Stores scoped to Jenkins' > 'Global'.

NOTE: You can store credentials for whichever registry hosts your Docker images.

  4. Enter your DockerHub username and password. For both the ID and Description fields, use a memorable name like "docker". Click "Create" when finished. (In my case I use ECR, so I input my AWS credentials.)
  5. You've now successfully added your DockerHub credentials to Jenkins' Global credentials (unrestricted).
  6. The pipeline shown below builds with DockerHub.

Next, let's incorporate the following pipeline script into our Jenkins pipeline:

stage("Docker Build & Push"){
            steps{
                script{
                   withDockerRegistry(credentialsId: 'docker', toolName: 'docker'){
                       sh "docker build --build-arg TMDB_V3_API_KEY=yourAPItokenfromTMDB -t netflix ."
                       sh "docker tag netflix techrepos/netflix:latest "
                       sh "docker push techrepos/netflix:latest "
                    }
                }
            }
        }
        stage("TRIVY"){
            steps{
                sh "trivy image techrepos/netflix:latest > trivyimage.txt"
            }
        }

Step 19: Set Up Kubernetes Cluster

To create a Kubernetes cluster with master and worker nodes, you have two options:

Option 1: Use Terraform (Recommended)

If you're comfortable with Terraform, use the provided configuration in the "k8s_creation" folder:

  1. Go to the "k8s_creation" directory
  1. Execute terraform init
  1. Adjust values in the Terraform files as needed
  1. Run terraform apply to create your infrastructure.
  1. This method automates node_exporter configuration.

Option 2: Manual EC2 Setup

If you prefer manual setup, launch EC2 instances for master and worker nodes using this user data script:

#!/bin/bash
# Update the package index
sudo apt update -y
sudo apt install -y snapd
# Install essential tools
sudo apt install -y curl wget git
# Install Docker
sudo apt install -y docker.io
sudo usermod -aG docker $(whoami)
# note: group membership only takes effect in a new login shell; the chmod below
# opens the Docker socket for this lab setup
sudo chmod 777 /var/run/docker.sock
# Enable and start Docker service
sudo systemctl enable docker
sudo systemctl start docker
# Set up Kubernetes repository
sudo apt install -y apt-transport-https ca-certificates gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install Kubernetes components
sudo apt update -y
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
#Install and Configure Node Exporter
sudo useradd --system --no-create-home --shell /bin/false node_exporter
#Download and install Node Exporter
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.2/node_exporter-1.8.2.linux-amd64.tar.gz
tar xvf node_exporter-1.8.2.linux-amd64.tar.gz
sudo mv node_exporter-1.8.2.linux-amd64/node_exporter /usr/local/bin/
rm -rf node_exporter*
#Create a Node Exporter service file
cat <<'EOL' | sudo tee /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter --web.listen-address=:9101

[Install]
WantedBy=multi-user.target
EOL
#Start and enable Node Exporter service
sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl enable node_exporter

To create the master and worker nodes:

  1. Launch an EC2 instance for the master node:
    • Use an Ubuntu AMI
    • Choose an instance type (e.g., t2.medium or larger)
    • In the "User data" section under "Advanced details", paste the script above
    • Make sure to include "Master" in the instance name
  2. Launch EC2 instance(s) for worker node(s):
    • Use the same process as the master node
    • Ensure the worker nodes are in the same VPC and security group as the master
    • Do not include "Master" in the worker node names

Step 20: After the master node is initialized, SSH into it and run:

Because these demo instances have low CPU and memory, I use the --ignore-preflight-errors=all flag; depending on the instance type you choose, you may be able to drop it.

You can SKIP this step if you used Terraform; the Terraform configuration has already applied it.

sudo hostnamectl set-hostname K8s-Master
sudo kubeadm init --ignore-preflight-errors=all
sleep 60
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/calico.yaml
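
As a quick sanity check, verify that the control plane and the Calico pods come up (run on the master):

kubectl get nodes -o wide
kubectl get pods -n kube-system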

After the cluster is created (via Terraform or kubeadm), print out the join command on the Master Node and copy it:

kubeadm token create --print-join-command

Note: In some cases, you may need to restart the Master Node for the nodes to become fully operational. Ensure no processes are running in the background. Wait at least 5 minutes before restarting. The exact time may vary depending on your VM specifications and network speed.

Step 21: Worker Node Configuration and Get Config file for CD

Once the worker nodes have joined, you'll have a functional Kubernetes cluster with a master node and worker node(s) ready for deploying your Netflix Replica application.

Run the 'kubeadm join' command on each worker node, then follow these steps to obtain the Kubernetes configuration file:

sudo hostnamectl set-hostname K8s-Worker # use K8s-Worker-01, K8s-Worker-02, etc. if you have multiple workers
# Copy the join command from the master node and run it here
sudo kubeadm join 10.0.1.91:6443 --token w4j7yg.9mr663kzo0rb4efh \
  --discovery-token-ca-cert-hash sha256:69b3cda3cb9d019ef71457f05054aee74eeacce58e36ea39622e88f014ca8db0 --ignore-preflight-errors=all
  1. On the master node, change to the '~/.kube' directory:
    cd ~/.kube
  2. Open and view the contents of the config file:
    cat config
  3. Copy the entire contents of the config file
  4. Create a new file named 'secret-file.txt' on your local machine and paste the copied content into it
  5. Store this 'secret-file.txt' in a secure location, as it will be used to set up Kubernetes credentials in Jenkins

This configuration file contains sensitive information that allows access to your Kubernetes cluster, so ensure it's kept secure and not shared publicly.
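
As an alternative to manual copy-paste, you can pull the kubeconfig straight to your workstation (the key path and hostname below are placeholders for your environment):

scp -i <your-key.pem> ubuntu@<master-node-ip>:~/.kube/config ./secret-file.txt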

Step 22: Installing Kubernetes Plugins and Configuring Credentials

Let's install four essential Kubernetes plugins on Jenkins:

Kubernetes Client API, Kubernetes Credentials, Kubernetes, and Kubernetes CLI

After installation, we'll configure the Kubernetes credentials in Jenkins:

  1. Click "Add Credentials" in the Jenkins dashboard
  1. Select "Global Credentials"
  1. Under "File," choose the 'secret-file.txt' you created earlier
  1. Set both the ID and Description to a memorable name (e.g., "k8s")
  1. Click "Create" to save the new credential

With these steps completed, your Kubernetes credential is now set up in Jenkins.

Step 23: Install Node_exporter on Master and Worker Nodes (Skip if Using Terraform)

Step 24: Configure Prometheus to Scrape Node_exporter Metrics

Now that Node_exporter is installed on all nodes, we need to configure Prometheus to scrape these metrics. Follow these steps:

  1. SSH into the node where Prometheus is installed
  2. Edit the Prometheus configuration file:
    sudo vim /etc/prometheus/prometheus.yml
  3. Add the following job to the scrape_configs section:
      - job_name: 'node_exporter'
        static_configs:
          - targets: ['master-node-ip:9101', 'worker-node-ip:9101']

Replace master-node-ip and worker-node-ip with the actual IP addresses of your Kubernetes nodes (add one target per worker if you have several).

  4. Save the file and exit the editor
  5. Restart Prometheus to apply the changes:
    sudo systemctl restart prometheus
  6. Verify that Prometheus is scraping the new targets:
    • Open the Prometheus web interface (http://<server-ip>:7718 in this setup; the default port would otherwise be 9090. We installed Prometheus only on the CI/CD and Monitoring Server.)
    • Go to Status > Targets to see if the new node_exporter targets are up

With these steps completed, Prometheus will now collect metrics from all your Kubernetes nodes, allowing you to monitor the health and performance of your cluster effectively.
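
Because our Prometheus service file enables the lifecycle API (--web.enable-lifecycle), you can also validate and reload the configuration without a full restart:

promtool check config /etc/prometheus/prometheus.yml
curl -X POST http://localhost:7718/-/reload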

Step 25: Set Up the Jenkins Pipeline

To set up the pipeline for Jenkins, follow these steps:

  1. Log in to your Jenkins dashboard
  2. Click on "New Item" to create a new pipeline job
  3. Enter a name for your pipeline (e.g., "Netflix-Replica-Pipeline") and select "Pipeline" as the job type
  4. Click "OK" to create the job
  5. In the job configuration page, scroll down to the "Pipeline" section
  6. Select "Pipeline script" from the "Definition" dropdown
  7. Copy and paste the complete pipeline script provided into the script text area
  8. Adjust any environment-specific variables or credentials as needed
  9. Click "Save" to save your pipeline configuration
  10. Click "Build Now" to run your pipeline for the first time

Remember to ensure that all necessary plugins (e.g., Docker, Kubernetes, SonarQube) are installed in Jenkins and properly configured before running the pipeline. Also, make sure that the required credentials (GitHub, Docker Hub, Kubernetes) are set up in Jenkins' credential store.

Possible Errors

NOTE: My first build failed because I hadn't granted permissions on the workspace folder; Jenkins agents need access to the workspace in order to run.

Permission Error

If you encounter the same error, SSH into the Jenkins server and run:

sudo chown -R jenkins:jenkins /var/lib/jenkins/workspace
sudo chmod -R 755 /var/lib/jenkins/workspace
ls -ld /var/lib/jenkins/workspace/
sudo systemctl restart jenkins

Variable Errors

Define variable names in the pipeline carefully: it's easy to make mistakes when copying names and values between stages. Stay vigilant about these errors to ensure smooth pipeline execution.

Kubernetes Deployment Error

Deployment errors can happen with the bare-metal (kubeadm) Kubernetes setup. If you prefer Azure Kubernetes Service, use the Terraform configuration in the "terraform_provision_aks" folder.

NOTE: Ensure you modify the values specified in the Terraform configuration to match your requirements.

As soon as you get the cluster config, create 'secret-file.txt' from it and add it to the Jenkins credentials as shown in Step 22 (see the sketch below).
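
A minimal sketch for retrieving that config with the Azure CLI (the resource group and cluster names are placeholders; use the values from your Terraform variables):

az aks get-credentials --resource-group <your-resource-group> --name <your-aks-cluster> --file secret-file.txt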

NOTE: When using Terraform deployment on Azure Kubernetes Service (AKS), we don't need to install Prometheus and Node Exporter manually. Since Azure manages the Kubernetes infrastructure, we can skip the manual kubeadm setup and LoadBalancer configuration.

Step 26: Complete CICD Pipeline for Jenkins

The Complete CICD Pipeline for Jenkins is a comprehensive set of stages that automate the process of building, testing, and deploying the Netflix replica application.

pipeline {
    agent any
    
    tools {
        jdk 'jdk17'
        nodejs 'node16'
    }
    
    environment {
        
        SCANNER_HOME= tool 'sonar-scanner'
    }

    stages {
        stage('Clean Workspace') {
            steps {
                cleanWs()
            }
        }
        
        stage('Checkout from GITHUB') {
            steps {
                git branch: 'main', url: 'https://github.com/TechThiha/Netflix.git', credentialsId: 'github'
            }
        }
        
        stage('SonarQube Scan Analysis') {
            steps {
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=Netflix \
                    -Dsonar.projectKey=Netflix '''
                }
           }
        }
        
        stage('Quality Gate') {
            steps {
                waitForQualityGate abortPipeline: false, credentialsId: 'Sonar-token'
            }
        }
        
        stage('Install NodeJs Dependencies') {
            steps {
                sh 'npm install'
            }
        }
        
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: """
                  -o "./"
                  -s "./"
                  -f "ALL"
                  --prettyPrint
                """, odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: 'dependency-check-report.xml'
            }
        }
        
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . > trivyfs.txt"
            }
        }
        
        stage("Docker Build & Push"){
            steps{
                script{
                    withCredentials([usernamePassword(credentialsId: 'ecr-credentials', usernameVariable: 'AWS_ACCESS_KEY_ID', passwordVariable: 'AWS_SECRET_ACCESS_KEY')]) {
                       sh "aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws/s3e1v8a7"
                       sh "docker build --build-arg TMDB_V3_API_KEY=8bbb70a96fc606461acd71f172703286 -t netflix ."
                       sh "docker tag netflix:latest public.ecr.aws/s3e1v8a7/henops/netflix:latest"
                       sh "docker push public.ecr.aws/s3e1v8a7/henops/netflix:latest"
                    }
                }
            }
        }
        
        stage("TRIVY Docker Image Scan"){
            steps{
                sh "trivy image techrepos/netflix:latest > trivyimage.txt" 
            }
        }
        
        stage("TRIVY ECR Image Scan"){
            steps{
                sh "trivy image public.ecr.aws/s3e1v8a7/henops/netflix:latest > trivyimage2.txt" 
            }
        }
        
        stage('Deploy to container'){
            steps{
                sh 'docker rm -f netflix || true'
                sh 'docker pull public.ecr.aws/s3e1v8a7/henops/netflix:latest'
                sh 'docker run -d --name netflix -p 8079:80 public.ecr.aws/s3e1v8a7/henops/netflix:latest'
            }
        }
        
        stage('Deploy to kubernetes'){
            steps{
                script{
                    dir('Kubernetes') {
                        withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'k8s', namespace: '', restrictKubeConfigAccess: false, serverUrl: '') {
                                sh 'kubectl apply -f deployment.yml'
                                sh 'kubectl apply -f service.yml'
                        }   
                    }
                }
            }
        }
    }
    
    post {
        always {
        emailext attachLog: true,
            subject: "${currentBuild.result}: Build #${env.BUILD_NUMBER} for ${env.JOB_NAME}",
            body: """
                <h2>Build Notification</h2>
                <p><strong>Project:</strong> ${env.JOB_NAME}</p>
                <p><strong>Build Number:</strong> ${env.BUILD_NUMBER}</p>
                <p><strong>Status:</strong> ${currentBuild.result}</p>
                <p><strong>Build URL:</strong> <a href="${env.BUILD_URL}">${env.BUILD_URL}</a></p>
                <p>For more details, please check the attached logs.</p>
            """,
            to: '[email protected]',
            attachmentsPattern: 'trivyfs.txt,trivyimage.txt,trivyimage2.txt'
        }
    }
}
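
For reference, the 'Deploy to kubernetes' stage expects deployment.yml and service.yml in the repository's Kubernetes/ folder. A minimal sketch of what such manifests could look like (the resource names, replica count, and LoadBalancer service type are assumptions, inferred from the External-IP note below):

# Kubernetes/deployment.yml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netflix
spec:
  replicas: 2
  selector:
    matchLabels:
      app: netflix
  template:
    metadata:
      labels:
        app: netflix
    spec:
      containers:
        - name: netflix
          image: public.ecr.aws/s3e1v8a7/henops/netflix:latest
          ports:
            - containerPort: 80

# Kubernetes/service.yml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: netflix-service
spec:
  type: LoadBalancer
  selector:
    app: netflix
  ports:
    - port: 80
      targetPort: 80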

After the deployment, check the services (kubectl get svc) to find the External-IP. Copy and paste this IP address into your browser to access the application.

Once the application loads, you can click the Play button to watch a movie. As I didn't develop this code myself, I won't be making any modifications or fixes to it. However, if you encounter any issues while following these implementation steps, please don't hesitate to reach out to me for assistance.

This project was created by HenOps but Special Thanks to Kwasi Twum-Ampofo (KTA) and Jason for generously sharing the code. Your contributions to this project are greatly appreciated! It showcases various DevOps practices including automated testing, security scanning, containerization, and deployment to Kubernetes. The pipeline ensures code quality, identifies vulnerabilities, and streamlines the deployment process, making it an excellent example of modern software development and operations practices.