Deploying a Kubernetes Cluster with KinD in a Docker Container running in a Ubuntu VM

A Step-by-Step Guide

Introduction (Kubernetes in Docker - KinD)

In this guide, we will walk through the process of deploying a Kubernetes cluster using KinD within Docker containers running in an Ubuntu Virtual Machine. KinD (Kubernetes in Docker) is an extremely lightweight tool for running local Kubernetes clusters using Docker container “nodes”. KinD was primarily designed for testing Kubernetes itself, but it can also be used for local development or as part of your DevOps processes.

Deployment Steps for Running Kubernetes Locally

Prerequisites

I am running an Ubuntu 22.04 LTS Virtual Machine with bridged networking in VMware Workstation 17.

Installing and Configuring Docker

Ensure you have Docker installed and that your user can run Docker commands without sudo.

# Install Docker

curl -fsSL https://get.docker.com -o get-docker.sh

sudo sh get-docker.sh

# Allow your user (here, azureuser) to run Docker without sudo and grant it passwordless sudo

sudo usermod -aG docker azureuser

echo "azureuser ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/azureuser

sudo reboot

docker ps

Installing and Configuring Git

Ensure you have Git installed

# Install Git

sudo apt-get install git -y

Installing and Configuring KinD

# Install KinD for AMD64 / x86_64

[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64

chmod +x ./kind

sudo mv ./kind /usr/local/bin/kind

Installing Kubectl to manage the Kubernetes Cluster

Kubectl is a command-line utility for communicating with and managing a Kubernetes cluster.

# Install Kubectl on Ubuntu AMD64 / x86_64 (DEBIAN based linux distributions) 

sudo apt-get update

# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list   # helps tools such as command-not-found to work correctly

# Installing Kubectl

sudo apt-get update

sudo apt-get install -y kubectl

kubectl version --client

Create our First Kubernetes Cluster inside a Docker Container

kind create cluster --image kindest/node:v1.26.15

Connecting to the Kubernetes Cluster

Once the cluster has been provisioned, KinD automatically updates your kubeconfig file, allowing you to access your Kubernetes cluster. You can see below that you have a single-node Kubernetes cluster up and running inside a Docker container.

kubectl get nodes
docker ps

Creating a namespace for running our group of resources

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. A namespace is a way to group related resources together.

# Creating a Namespace for our Cluster

kubectl create namespace ipl2024ns
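The same namespace can also be created declaratively. A minimal manifest equivalent to the command above would look like this (the file name namespace.yml is just a suggestion):

```yaml
# namespace.yml - declarative equivalent of `kubectl create namespace ipl2024ns`
apiVersion: v1
kind: Namespace
metadata:
  name: ipl2024ns
```

You would apply it with kubectl apply -f namespace.yml.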

We are now ready to run our containers and microservices applications. To deploy our applications, we are going to use the Kubernetes Deployment as the building block for running containers. A Kubernetes Deployment is a declarative construct, in the form of a manifest configuration file, that defines how we want Kubernetes to run our containers as Pods. A Pod is a sandbox in which we can run one or more container processes.
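As an illustration, a minimal Deployment and Service manifest might look like the sketch below. The names and image here are hypothetical; the actual manifests live in the repository used later in this guide.

```yaml
# A minimal, hypothetical Deployment + Service sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kkr-deployment        # hypothetical name
  labels:
    app: kkr
spec:
  replicas: 2                 # Kubernetes keeps two Pods of this app running
  selector:
    matchLabels:
      app: kkr
  template:
    metadata:
      labels:
        app: kkr
    spec:
      containers:
      - name: kkr
        image: example/kkr-app:latest   # hypothetical image name
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: kkr-service           # hypothetical name
spec:
  selector:
    app: kkr                  # routes traffic to the Pods labelled app=kkr
  ports:
  - port: 80
    targetPort: 8080
```

The Deployment describes the desired Pods, and the Service gives them a stable address inside the cluster.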

Deploying your Ingress Controller

We must FIRST deploy an Ingress controller before deploying the ingress resource configuration.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml

Let's check for the Ingress service and the pods backing it

kubectl get pods --namespace=ingress-nginx

kubectl get service ingress-nginx-controller --namespace=ingress-nginx

Since KinD doesn't ship with a load balancer implementation, the EXTERNAL-IP shows as pending. To access the NGINX ingress service we must use PORT FORWARDING as below

kubectl -n ingress-nginx --address 0.0.0.0 port-forward svc/ingress-nginx-controller 8080:80

Browsing to localhost:8080 on the Docker host (the Ubuntu VM) will show you the NGINX 404 Not Found message as shown below. This is expected: the NGINX web server is up and running, but we have not set up any routing rules for our services yet.

Deploy your multiple GoLang Applications to the Kubernetes cluster

Let us now deploy our THREE GoLang web applications and route traffic between them using the NGINX ingress. We will use the GitHub repository cloned below.

To deploy the applications, we use the respective manifest files to create all the objects required to run the GoLang web applications. A Kubernetes manifest file defines a cluster's desired state, such as which container images to run. The manifests here define the Deployments and Services for each application.

git clone https://github.com/mfkhan267/ingress2024.git

cd ingress2024/k8s 

kubectl apply -f kkr.yml --namespace ipl2024ns

kubectl apply -f srh.yml --namespace ipl2024ns

kubectl apply -f csk.yml --namespace ipl2024ns

Check for the new pods running your THREE GoLang web applications

kubectl get pods --namespace ipl2024ns

Deploy your Ingress Resource Configuration

The Ingress resource configuration is what allows the NGINX controller to route traffic to your microservices applications running in the Kubernetes cluster.

kubectl apply -f ingress.yml --namespace ipl2024ns
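As a rough sketch of what ingress.yml accomplishes (the exact rules and service names below are assumptions; the real configuration is in the repository), an Ingress routing /kkr, /srh and /csk, with the KKR application as the default backend, might look like this:

```yaml
# Hypothetical sketch of the routing rules in ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ipl2024-ingress
  namespace: ipl2024ns
spec:
  ingressClassName: nginx
  defaultBackend:             # unmatched routes fall through to KKR
    service:
      name: kkr-service       # hypothetical service name
      port:
        number: 80
  rules:
  - http:
      paths:
      - path: /kkr
        pathType: Prefix
        backend:
          service:
            name: kkr-service
            port:
              number: 80
      - path: /srh
        pathType: Prefix
        backend:
          service:
            name: srh-service  # hypothetical service name
            port:
              number: 80
      - path: /csk
        pathType: Prefix
        backend:
          service:
            name: csk-service  # hypothetical service name
            port:
              number: 80
```

Each path rule maps a URL prefix to the Service fronting one of the three applications.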

Testing your Web Applications that should now be running as part of your Kubernetes Deployment

Since KinD doesn't ship with a load balancer implementation, the EXTERNAL-IP shows as pending. To access the NGINX ingress service we must use PORT FORWARDING as below

kubectl -n ingress-nginx --address 0.0.0.0 port-forward svc/ingress-nginx-controller 8080:80

To access the KKR Team Application, browse to localhost:8080/kkr

When you open localhost:8080/kkr, you should see the screen below:

To access the SRH Team Application, browse to localhost:8080/srh

When you open localhost:8080/srh, you should see the screen below:

To access the CSK Team Application, browse to localhost:8080/csk

When you open localhost:8080/csk, you should see the screen below:

For all other routes, such as localhost:8080/xyz or localhost:8080/123, the NGINX controller should route traffic to the KKR Application by default.

That's all folks. You should now be familiar with deploying a Kubernetes cluster with KinD in a Docker container running in a local Ubuntu VM, and with exposing your containerised web applications locally with the help of an NGINX-based ingress and port forwarding.

Note: Additional Commands for reference

kind get clusters

kind create cluster

kind create cluster --name abc

# Multi Node cluster

kind create cluster --name multi-node --config multi-node.yaml

kind create cluster --name multi-node --config multi-node.yaml --image kindest/node:v1.26.15

# To create a cluster with 1 control-plane node and 2 workers
# multi-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

kind delete cluster --name multi-node

Kindly share with the community. Until I see you next time. Cheers!