
Simple steps to set up a 2 Node Kubernetes Cluster using Kubeadm

A quick tutorial on setting up a 2 node Ubuntu bare-metal Kubernetes cluster using kubeadm on standalone local virtual machines, Google Cloud VM instances, AWS EC2 instances or any other cloud provider’s compute service.

Introduction

Kubernetes is the most popular container orchestration tool you will encounter when speaking to developers and infrastructure administrators alike. It can be installed in various ways, on different types of hardware and all major cloud infrastructure providers have at least one version of a managed Kubernetes service.

This post will show you how to set up a simple 2 node Kubernetes cluster using kubeadm on top of Ubuntu virtual machines running on Google Cloud. You can replicate the steps on any other cloud provider, or even locally, as long as the prerequisites are met.

The 2 nodes will be

  • a control plane node (earlier called master node) and
  • a worker node

Prerequisites

Two identical Ubuntu 20.04 (or newer) virtual machines with

  1. At least 2 vCPUs
  2. At least 4 GB RAM each
  3. At least 10 GB of disk space
  4. Ability to SSH to these machines as a non-root user
  5. Root access via a sudo user (more secure) or root credentials (less secure)

These prerequisites can be met by local VMs in Oracle VirtualBox/VMware running on your laptop, or by cloud compute instances on Google Cloud, Azure or AWS (insert any other provider here).

Setting up Kubernetes

Rename one of the machines to “control-plane” and the other to “worker-node” so you can easily distinguish between the two and run commands on the correct machine.

Preparing the control plane

  1. SSH to the “control-plane” machine and switch to root using sudo -i

  2. Disable swap space using the following command

    swapoff -a
  3. Comment out the file system table entry for swap space in /etc/fstab, so that swap stays off after the next reboot. Using your favorite editor (you will need to be root to save the file), add a hash to the beginning of the line that mentions swap, or use the one-liner below.
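
     A minimal sketch of that one-liner, assuming the swap entry contains the word swap surrounded by whitespace (the usual case):

    sed -i '/\sswap\s/ s/^/#/' /etc/fstab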

  4. Configure iptables to see bridged network traffic by editing the /etc/ufw/sysctl.conf file and adding the following to the end (two caveats with this approach are sketched just after the code)

    net/bridge/bridge-nf-call-iptables = 1
    net/bridge/bridge-nf-call-ip6tables = 1
    net/bridge/bridge-nf-call-arptables = 1
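
     Two caveats, sketched below: these keys only exist once the br_netfilter kernel module is loaded, and /etc/ufw/sysctl.conf is only applied when ufw is in use. Loading the module persistently (the file name k8s.conf is an arbitrary choice) and writing the equivalent dotted keys under /etc/sysctl.d/ covers both cases:

    modprobe br_netfilter
    echo "br_netfilter" > /etc/modules-load.d/k8s.conf
    cat <<EOF >/etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-arptables = 1
    EOF
    sysctl --system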
  5. (Optional) Install ebtables and ethtool. The cluster installation will complete without these tools; however, you may receive warnings during kubeadm’s preflight checks.

    apt update && apt install ebtables ethtool -y
  6. Reboot the machine

  7. Add the Kubernetes repository key to apt’s trusted keys (install curl with apt install curl if it is absent).

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  8. Add the Kubernetes repository to the machine

    sudo -i
    cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
    deb http://apt.kubernetes.io/ kubernetes-xenial main
    EOF
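
     Note that the apt.kubernetes.io repository has since been deprecated and frozen in favor of the community-owned pkgs.k8s.io. If the legacy repository is unreachable, an equivalent setup with the new repository looks like this sketch (pinned to v1.28 purely as an example; adjust the minor version in both URLs to the release you want):

    mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' >/etc/apt/sources.list.d/kubernetes.list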
  9. Install docker, kubeadm, kubelet and kubectl using

    sudo apt-get update
    sudo apt-get install docker.io kubelet kubeadm kubectl -y
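
(Optional) Pin the versions of the cluster components so that a routine apt upgrade does not move them out from under the cluster:

    sudo apt-mark hold kubelet kubeadm kubectl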

Preparing the worker node

  1. SSH to the “worker-node” machine and switch to root using sudo -i

  2. Repeat Steps 2 through 9 from the previous section

  3. In short, the “worker-node” must also have

    • Swap turned off and its entry commented out in /etc/fstab
    • iptables configured to see bridged traffic
    • ebtables, ethtool, docker, kubeadm, kubelet and kubectl installed using the apt package manager

Creating the Kubernetes cluster

  1. Obtain the external IP address of the control-plane machine, i.e. the address you will use to connect to it. Our setup is in Google Cloud, so we work with the public IP address reserved for the VM in Google Cloud, which can be fetched as shown below.
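
     A sketch of fetching it with the gcloud CLI, assuming the instance is named control-plane (replace <zone> with your instance’s zone):

    gcloud compute instances describe control-plane --zone=<zone> --format='get(networkInterfaces[0].accessConfigs[0].natIP)'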

  2. Initialize the cluster with the following command

    kubeadm init --apiserver-cert-extra-sans=<EXTERNAL-IP-ADDRESS-OF-CONTROL-PLANE>
  3. Make a note of the kubeadm join command. This will be used to connect the worker node to the cluster.

  4. Open another SSH session to the control-plane machine and set up kubectl access for your regular user; this is what you will use to manage workloads on the cluster. Do not switch to the root user for this.

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  5. To check that the cluster is set up properly, run

    kubectl get pods -A
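
Note that the nodes will report NotReady and the CoreDNS pods will stay Pending until a pod network add-on is installed; every other pod in the kube-system namespace should reach the Running state. Flannel is one common choice; as a sketch (Flannel’s default manifest assumes the cluster was initialized with --pod-network-cidr=10.244.0.0/16, so add that flag to the kubeadm init command above if you take this route):

    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml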

Connecting the worker node with the control plane

  1. SSH to the “worker-node” machine and switch to root using sudo -i

  2. Run the kubeadm join command that was printed when the control-plane was set up earlier

    kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

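If the join command was not saved, or the token (valid for 24 hours by default) has expired, a fresh join command can be printed on the control-plane machine with

    kubeadm token create --print-join-command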

Verifying that the setup is complete

Run the kubectl get nodes command on the control-plane machine (as the non-root user configured earlier) to verify that both nodes are in the Ready state and their version numbers are listed.
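
The output should look roughly like this (an illustrative sketch; node names, roles, ages and versions will reflect your own setup):

    NAME            STATUS   ROLES                  AGE     VERSION
    control-plane   Ready    control-plane,master   10m     v1.23.1
    worker-node     Ready    <none>                 2m30s   v1.23.1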

