Simple steps to set up a 2 Node Kubernetes Cluster using Kubeadm

Introduction
Kubernetes is the most popular container orchestration tool among developers and infrastructure administrators alike. It can be installed in various ways, on many types of hardware, and every major cloud provider offers at least one managed Kubernetes service.
This post shows how to set up a simple two-node Kubernetes cluster using kubeadm on Ubuntu virtual machines running on Google Cloud. You can replicate the steps on any other cloud provider, or even locally, as long as the pre-requisites are met.
The two nodes will be:
- a control plane node (previously called the master node), and
- a worker node
Pre-requisites
Two identical Ubuntu 20.04 (or later) virtual machines, each with:
- At least 2 vCPUs
- At least 4 GB RAM
- At least 10 GB of disk space
- The ability to SSH to the machine as a non-root user
- Root access via a sudo user (more secure) or root credentials (less secure)
The pre-requisites can be met with local VMs in Oracle VirtualBox/VMware running on your laptop, or with cloud compute instances on Google Cloud, Azure or AWS (insert any other provider here).
Setting up Kubernetes
Rename one of the machines to “control-plane” and the other to “worker-node” to easily distinguish between the two and run commands on the correct machine.
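One way to set these names on Ubuntu is with hostnamectl; the hostnames below are simply the labels used throughout this post:
sudo hostnamectl set-hostname control-plane   # run on the first machine
sudo hostnamectl set-hostname worker-node     # run on the second machine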
Preparing the control plane
1. SSH to the “control-plane” machine and switch to root using sudo -i
2. Disable swap space using the following command
swapoff -a
3. Comment out the file system table entry for swap space in /etc/fstab. Using your favorite editor, edit the /etc/fstab file and add a hash (#) to the beginning of the line where swap is mentioned. This ensures swap is not turned on again at the next reboot. You will need to be root to edit and save the file. (A one-liner alternative is shown after this list.)
4. Configure iptables to receive bridged network traffic by editing the /etc/ufw/sysctl.conf file and adding the following lines at the end
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-arptables = 1
5. (Optional) Install ebtables and ethtool. The cluster installation will complete without these tools; however, you may receive warnings during preflight checks.
apt update && apt install ebtables ethtool -y
6. Reboot the machine
7. Add the Kubernetes repository key to the apt key manager (install curl with apt install curl, if absent).
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
8. Add the Kubernetes repository to the machine (run this as root, which you already are after sudo -i)
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
9. Install docker, kubeadm, kubelet and kubectl using
sudo apt-get update
sudo apt-get install docker.io kubelet kubeadm kubectl -y
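For Step 3, if you prefer a single command over hand-editing, the sed one-liner below should work. This is a convenience sketch rather than part of the original steps; it assumes the swap entry in /etc/fstab contains the word "swap", and it writes a backup to /etc/fstab.bak so you can verify the change before rebooting.
sed -i.bak '/swap/ s/^/#/' /etc/fstab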
Preparing the worker node
1. SSH to the “worker-node” machine and switch to root using sudo -i
2. Repeat Steps 2 to 9 from the previous section
3. Basically, the “worker-node” must also have
- Swap turned off and commented out in /etc/fstab
- Iptables configured to receive and send bridged traffic
- Ebtables, ethtool, docker, kubeadm, kubelet and kubectl installed using apt package manager.
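Before moving on, a quick sanity check on the worker can confirm the preparation took effect. This is an optional check, not part of the original steps:
swapon --show                 # should print nothing if swap is off
kubeadm version               # confirms kubeadm is installed
systemctl is-active docker    # should report "active"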
Creating the Kubernetes cluster
1. Obtain the external IP address (the IP address you will connect to) of the control-plane machine. Our setup is in Google Cloud, so we work with the public IP address reserved for the VM in Google Cloud.
2. Initialize the cluster with the following command
kubeadm init --apiserver-cert-extra-sans=<EXTERNAL-IP-ADDRESS-OF-CONTROL-PLANE>
3. Make a note of the kubeadm join command printed at the end of the output. This will be used to connect the worker node to the cluster. (If you lose it, see the note after this list.)
4. Open another SSH session to the control-plane machine and prepare the system to add workloads. Do not switch to the root user for this.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
5. To check if the cluster is set up properly, run
kubectl get pods -A
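If you did not note down the join command from Step 3, it can be regenerated on the control plane at any time (run as root); this prints a fresh token together with the full join command:
kubeadm token create --print-join-command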
Connecting the worker node with the control plane
1. SSH to the “worker-node” machine and switch to the root user
2. Run the kubeadm join command that was printed when the control plane was set up earlier
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
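While the join runs, you can watch the new node register from the control plane (in the non-root session where kubectl was configured); the -w flag streams updates as the node appears and changes state:
kubectl get nodes -w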
Verifying if the setup is complete
Run the kubectl get nodes command to verify that both nodes are in the Ready state and that their version numbers are listed.
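Note that on a cluster bootstrapped this way, the nodes may report NotReady until a pod network add-on (CNI plugin) such as Flannel or Calico is installed on the control plane. Installing one falls outside the steps above; as a rough sketch, installing Flannel is typically a single kubectl apply of the manifest published in the Flannel documentation (the URL below is illustrative, so check the project's docs for the current one):
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml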