Kubernetes is the most popular container orchestration tool among developers and infrastructure administrators alike. It can be installed in various ways, on different types of hardware, and all major cloud infrastructure providers offer at least one managed Kubernetes service.
This post shows you how to set up a simple two-node Kubernetes cluster using kubeadm on Ubuntu virtual machines running on Google Cloud. You can replicate the steps on any other cloud provider, or even locally, as long as the prerequisites are met.
The two nodes will be two identical Ubuntu 20.04 (or later) virtual machines, each with at least 2 vCPUs and 2 GB of RAM (the minimum kubeadm requires).
These can be local VMs in Oracle VirtualBox/VMware running on your laptop, or cloud compute instances on Google Cloud, Azure or AWS (insert any other provider here).
Rename one of the machines as “control-plane” and the other as “worker-node” to easily distinguish between the two and run commands on the correct machine.
SSH to the “control-plane” machine and switch to root using sudo -i
Disable swap space (the kubelet does not run with swap enabled by default) using the following command
swapoff -a
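To confirm swap is actually off, either of these standard commands works (no output from the first means no active swap)
swapon --show
free -h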
Comment out the swap entry in /etc/fstab so that swap stays disabled across reboots
vi /etc/fstab
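If you prefer not to edit the file by hand, a one-liner like the following (a sketch; adjust the pattern if your fstab formats swap entries differently) comments out any line containing a swap mount
sed -i '/ swap / s/^/#/' /etc/fstab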
Allow iptables to see bridged traffic by editing the UFW sysctl configuration
vi /etc/ufw/sysctl.conf
and adding the following lines
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
Install ebtables and ethtool
apt-get install ebtables ethtool -y
Create /etc/modules-load.d/k8s.conf
and add the following lines
overlay
br_netfilter
Load the modules immediately
modprobe overlay
modprobe br_netfilter
Create /etc/sysctl.d/k8s.conf
and add the following lines
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
Reboot the machine
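After the reboot, you can verify that the module and sysctl settings took effect
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
The sysctl should report a value of 1.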
Add the Kubernetes repository key to apt's key manager (install curl with apt install curl, if absent).
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Add Kubernetes repo to the machine
sudo -i
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
Install kubeadm, kubelet and kubectl using
sudo apt-get update
sudo apt-get install kubelet kubeadm kubectl -y
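You can confirm what was installed
kubeadm version
kubectl version --client
It is also common (though optional) to pin these packages so an unattended upgrade does not move them ahead of the cluster
sudo apt-mark hold kubelet kubeadm kubectl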
Kubernetes deprecated Docker support via dockershim in v1.20 and removed it entirely in v1.24. We use the containerd runtime as our Container Runtime Interface (CRI) implementation.
Create /etc/modules-load.d/containerd.conf
and add the following lines
overlay
br_netfilter
Run sysctl --system to reload the sysctl configuration
Install containerd dependencies
apt install curl gnupg2 software-properties-common apt-transport-https ca-certificates -y
Add Docker's repository key and repository (the containerd.io package is distributed from Docker's repository)
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt update && apt install containerd.io -y
Generate the default containerd configuration and switch it to the systemd cgroup driver (SystemdCgroup), so that containerd and the kubelet use the same cgroup manager
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
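To double-check that the change landed
grep SystemdCgroup /etc/containerd/config.toml
should now print SystemdCgroup = true.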
Restart and enable the services
systemctl restart containerd
systemctl enable containerd
systemctl restart kubelet
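containerd should now report as active
systemctl is-active containerd
(It is normal at this stage for the kubelet to restart repeatedly; it settles down once the cluster is initialized with kubeadm in the next section.)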
SSH to the “worker-node” machine and switch to root using sudo -i
Repeat Steps 2 through 9 from the previous section.
Basically, the “worker-node” must also have swap disabled, the kernel modules and sysctl settings configured, and containerd, kubeadm, kubelet and kubectl installed.
Obtain the external IP address of the control-plane machine (the address you will connect to). Our setup is in Google Cloud, so we work with the public IP address reserved for the VM in Google Cloud. Skip this step if your cluster is internal.
Initialize the cluster with the following command. Omit the --apiserver-cert-extra-sans=<EXTERNAL-IP-ADDRESS-OF-CONTROL-PLANE> switch if your cluster is supposed to be an internal cluster (not exposed to the Internet). We pass the --pod-network-cidr switch as Cilium will be running in the cluster
kubeadm init --pod-network-cidr=10.1.1.0/24 --apiserver-advertise-address IP_ADDRESS_OF_NODE_FROM_IFCONFIG --apiserver-cert-extra-sans=EXTERNAL-IP-ADDRESS-OF-CONTROL-PLANE
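The IP_ADDRESS_OF_NODE_FROM_IFCONFIG placeholder is the VM's internal address. If ifconfig is absent on your image (on recent Ubuntu it ships in the optional net-tools package), either of these stock commands shows the same information
ip -4 addr show
hostname -I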
Take note of the kubeadm join command that was printed. This will be used to join other nodes to the cluster. If no join command was printed, run the following to obtain the command
kubeadm token create --print-join-command
Open another SSH session to the control-plane machine and prepare the system to add workloads. Do not switch to root user for this.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
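If you do need to run kubectl as root later (for instance while troubleshooting), kubeadm's admin kubeconfig can be used directly instead
export KUBECONFIG=/etc/kubernetes/admin.conf
This is the alternative that kubeadm init itself prints for the root user.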
Check the status of the node; it will not be in a Ready state yet, which is expected because the CNI has not been configured.
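For example
kubectl get nodes
The STATUS column should read NotReady for the control-plane node at this point.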
Download and install Cilium on the Control Plane node
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz
sudo cilium install
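You can watch Cilium come up with the CLI's built-in status check, which waits until all components report OK
cilium status --wait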
Run kubectl get nodes to see node status. The node should be in Ready status.
SSH to the “worker-node” and switch to root user
Run the kubeadm join command that was printed when the control-plane was set up earlier
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
If a new token has been generated, run the kubeadm join command on the node that needs to be joined to the cluster
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
You can obtain the token again with kubeadm token list. If the token has expired, use kubeadm token create
If you don’t have the value for --discovery-token-ca-cert-hash, generate a new value on the control-plane node
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
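Putting the pieces together, here is a sketch that rebuilds the full join command on the control-plane node (it assumes the API server listens on the default port 6443; substitute your control-plane address for the placeholder)
TOKEN=$(kubeadm token create)
HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "kubeadm join --token $TOKEN <control-plane-host>:6443 --discovery-token-ca-cert-hash sha256:$HASH"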
Run the kubectl get nodes command on the control-plane to verify that both nodes are in the Ready state and their version numbers are listed.
You can also run the following commands for additional verification
kubectl get pods -A
kubectl cluster-info