Install Kubernetes 1.29 using Vagrant in under 10 minutes

Akriotis Kyriakos · Published in ITNEXT · Feb 1, 2024

Step by step installation of a Kubernetes 1.29 Cluster, with 1 master and 3 worker nodes, on Ubuntu virtual machines using Vagrant

What is the goal?

After completing all the steps of this article, you will have an automated unattended script that creates on-premises Kubernetes 1.29 Clusters running on Ubuntu virtual machines.

What are we going to need?

  1. Vagrant https://www.vagrantup.com/docs/installation. Vagrant, developed by HashiCorp, is an open-source tool for creating and managing virtualized development environments. It allows users to easily configure and replicate development setups across different machines.
  2. VirtualBox https://www.virtualbox.org. VirtualBox is a free and open-source virtualization platform offered by Oracle.
  3. 4 Virtual Machines. One for the master node (3GB RAM, 1 vCPU) and three for the worker nodes (3GB RAM, 1 vCPU each). All of them will be provisioned automatically via Vagrant, which is in fact part of the scope of this article.
  4. An additional VirtualBox Host-Only Network. A Host-Only Network is a network configuration that allows communication between virtual machines and the host system but not with external networks. It provides a private network for isolated communication among virtual machines and the host. This can be useful for development and testing scenarios where you want to create a closed network environment.

Prerequisites

Vagrant, for its own management and provisioning tasks, automatically binds the default VirtualBox NAT network to every virtual machine (caution here: the default NAT network, not a named one!).

We are going to instruct Vagrant to bind an additional Host-Only Network that we first have to create in VirtualBox.

In my case, I chose one with the CIDR 192.168.57.0/24 — if you choose or create one in a different address space, you are going to need to adjust the Vagrantfile. Make sure you have the DHCP Server enabled.
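
If you prefer the command line over the VirtualBox GUI, a sketch along these lines creates such a network; the interface name (vboxnet0 here) is assigned by VirtualBox and may differ on your machine, and the DHCP range is just a reasonable guess that stays clear of the static node IPs used later:

VBoxManage hostonlyif create        # prints the name of the new interface, e.g. vboxnet0
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.57.1 --netmask 255.255.255.0
VBoxManage dhcpserver add --ifname vboxnet0 \
  --ip 192.168.57.2 --netmask 255.255.255.0 \
  --lowerip 192.168.57.10 --upperip 192.168.57.50 --enable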

Clone the repo containing the necessary files, and let’s get started:

Analyze the Vagrantfile

A Vagrantfile is a configuration file used by Vagrant; it is written in Ruby and defines the settings and configuration for every environment, specifying parameters such as the base OS box, network settings, hardware specs and many other customizations. It plays the role of the blueprint for creating and configuring reproducible and consistent virtualized development environments across different machines.

If you don’t have Vagrant already installed on your system, visit this link for instructions.
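
For reference, on a Debian/Ubuntu host the installation boils down to adding the HashiCorp apt repository and installing the package; the commands below follow HashiCorp's published instructions at the time of writing, so double-check the link above if they have changed:

wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install -y vagrant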

The first element of this Vagrantfile (lines 1–5) is the set of global configuration variables, which we are going to use later as environment variables in the scripts executed on the master and worker nodes. You can change them at will; nevertheless, make sure that master_node_ip belongs to the address space of the Host-Only network we created previously.

domain = "kubernetes.lab"
control_plane_endpoint = "k8s-master." + domain + ":6443"
pod_network_cidr = "10.244.0.0/16"
master_node_ip = "192.168.57.100"
version = "v1.29"

The variable version can take only one of the values v1.29 or v1.28, and the reason lies in the deprecation of the Google-hosted Kubernetes package repositories back in August 2023. You can read more details here.

If you need to install an older version (then this guide is not for you), you can scroll through my article list, where you will find guides for step-by-step Kubernetes installation on Ubuntu, CentOS 8, or even using CNIs like Cilium.

The next element (lines 46–50) is the provider configuration that will be applied to every base box regardless of its role: 3GB of memory, 1 vCPU, and binding the default NAT network we discussed in the previous paragraph to the first network adapter of every box.

config.vm.provider "virtualbox" do |vb|
  vb.memory = "3072"
  vb.cpus = "1"
  vb.customize ["modifyvm", :id, "--nic1", "nat"]
end

We are going to install Kubernetes with kubeadm, which requires a minimum of 2 CPUs on every box. Here we are using only 1, and later we are going to discuss how we can bypass this requirement with a small, harmless workaround. Non-production environments disclaimer here, don't make me state the obvious!

The last pieces of the Vagrantfile are the configuration and initialization of the base boxes themselves: one common provisioning step, one block for the master node and one for the worker nodes.

We will lay the groundwork for every box by running the same bootstrap script on all of them (line 9), which will install all the necessary prerequisites and apply all the configuration needed so kubeadm can later promote these nodes to either master or worker nodes.

  config.vm.provision :shell, path: "kubeadm/bootstrap.sh", env: { "VERSION" => version }

Pay special attention here, how we can easily pass our Vagrantfile global variables as environment variables in each individual script:

env: { "VERSION" => version }

This is a pattern we will follow through the Vagrantfile.
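
On the receiving end no extra plumbing is needed; inside the box the provisioning script reads these values as plain environment variables. A minimal illustration (the echo line is made up for demonstration, the real consumer is the ${VERSION} placeholder in the repository URLs of bootstrap.sh shown later):

#!/bin/bash
# VERSION was injected by the Vagrant shell provisioner via env: { "VERSION" => version }
echo "Bootstrapping this node for Kubernetes ${VERSION}"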

For the master node (lines 10–23), besides the trivial stuff like setting the OS of the base box, its hostname and its IP address in the Host-Only network, we additionally run two provisioning scripts. The first is an in-line one, which defines custom mappings between hostnames and IP addresses within the local network in /etc/hosts of each box.

config.vm.define "master" do |master|
  master.vm.box = "ubuntu/focal64"
  master.vm.hostname = "k8s-master.#{domain}"
  master.vm.network "private_network", ip: "#{master_node_ip}"
  master.vm.provision "shell", env: {"DOMAIN" => domain, "MASTER_NODE_IP" => master_node_ip}, inline: <<-SHELL
    echo "$MASTER_NODE_IP k8s-master.$DOMAIN k8s-master" >> /etc/hosts
  SHELL
  (1..3).each do |nodeIndex|
    master.vm.provision "shell", env: {"DOMAIN" => domain, "NODE_INDEX" => nodeIndex}, inline: <<-SHELL
      echo "192.168.57.10$NODE_INDEX k8s-worker-$NODE_INDEX.$DOMAIN k8s-worker-$NODE_INDEX" >> /etc/hosts
    SHELL
  end
  master.vm.provision "shell", path: "kubeadm/init-master.sh", env: {"K8S_CONTROL_PLANE_ENDPOINT" => control_plane_endpoint, "K8S_POD_NETWORK_CIDR" => pod_network_cidr, "MASTER_NODE_IP" => master_node_ip}
end

And an external one, kubeadm/init-master.sh, that will take all the necessary steps to transform this box into a Kubernetes master node.

Now for the worker nodes (lines 24–44), things follow the same pattern as with the master, but with a bit of a twist. On the worker nodes, two extra scripts run in sequence. First, the auto-generated script kubeadm/init-worker.sh is executed, which joins the box as a worker node to the cluster we just created with the kubeadm/init-master.sh script. The latter automatically re-creates the former every time we create the environment via the vagrant up command.

(1..3).each do |nodeIndex|
  config.vm.define "worker-#{nodeIndex}" do |worker|
    worker.vm.box = "ubuntu/focal64"
    worker.vm.hostname = "k8s-worker-#{nodeIndex}.#{domain}"
    worker.vm.network "private_network", ip: "192.168.57.10#{nodeIndex}"
    worker.vm.provision "shell", env: {"DOMAIN" => domain, "MASTER_NODE_IP" => master_node_ip}, inline: <<-SHELL
      echo "$MASTER_NODE_IP k8s-master.$DOMAIN k8s-master" >> /etc/hosts
    SHELL
    (1..3).each do |hostIndex|
      worker.vm.provision "shell", env: {"DOMAIN" => domain, "NODE_INDEX" => hostIndex}, inline: <<-SHELL
        echo "192.168.57.10$NODE_INDEX k8s-worker-$NODE_INDEX.$DOMAIN k8s-worker-$NODE_INDEX" >> /etc/hosts
      SHELL
    end
    worker.vm.provision "shell", path: "kubeadm/init-worker.sh"
    worker.vm.provision "shell", env: { "NODE_INDEX" => nodeIndex}, inline: <<-SHELL
      echo ">>> FIX KUBELET NODE IP"
      echo "Environment=\"KUBELET_EXTRA_ARGS=--node-ip=192.168.57.10$NODE_INDEX\"" | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      sudo systemctl daemon-reload
      sudo systemctl restart kubelet
    SHELL
  end
end

Analyze the bootstrap script

As we mentioned before, kubeadm/bootstrap.sh runs on every box, and performs the following steps:

0️⃣ Updates the package information from all configured sources:

sudo apt-get update

1️⃣ Configures IPv4 forwarding and lets iptables see bridged traffic:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

Reference here.
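
If you want to double-check that the modules were loaded and the sysctl settings took effect, this optional verification step mirrors the checks suggested in the official container runtime prerequisites:

lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward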

2️⃣ Installs a container runtime (here containerd) and configures a cgroup driver:

for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
sudo apt-get install ca-certificates curl gnupg -y

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

cat <<EOF | sudo tee -a /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
EOF

sudo sed -i 's/^disabled_plugins \=/\#disabled_plugins \=/g' /etc/containerd/config.toml

sudo mkdir -p /opt/cni/bin/
sudo wget -nv https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.4.0.tgz

systemctl enable containerd
systemctl restart containerd

Reference here.
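
As an optional sanity check, you can confirm that containerd is running and that the SystemdCgroup override was picked up; containerd config dump prints the merged, effective configuration:

sudo systemctl is-active containerd
sudo containerd config dump | grep SystemdCgroup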

3️⃣ Adds the version-specific, community-owned Kubernetes package repositories to the boxes and installs the necessary kube-related tools, namely kubeadm, kubelet and kubectl:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/${VERSION}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${VERSION}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Reference here.
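
A quick, optional sanity check that the tools landed in the expected version on each box:

kubeadm version -o short
kubectl version --client
kubelet --version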

4️⃣ Disables swap in every box:

sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
sudo swapoff -a

Reference here.

Analyze the kubeadm scripts

First, we are going to check kubeadm/init-master.sh which runs only on the master node and performs the following steps:

0️⃣ Enables the kubelet daemon:

sudo systemctl enable kubelet

1️⃣ Initializes the control-plane node:

kubeadm init \
--apiserver-advertise-address=$MASTER_NODE_IP \
--control-plane-endpoint $MASTER_NODE_IP \
--pod-network-cidr=$K8S_POD_NETWORK_CIDR \
--skip-phases=addon/kube-proxy \
--ignore-preflight-errors=NumCPU

If you recall, we mentioned earlier that we use a tiny workaround to bypass the minimum of 2 required CPUs. The --ignore-preflight-errors=NumCPU flag is the one!

Reference here.

2️⃣ Prepares kubeconfig files for various users:

sudo mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

mkdir -p /home/vagrant/.kube
sudo cp -f /etc/kubernetes/admin.conf /home/vagrant/.kube/config
sudo chown $(id -u):$(id -g) /home/vagrant/.kube/config

sudo chown -R vagrant /home/vagrant/.kube
sudo chgrp -R vagrant /home/vagrant/.kube

sudo cp -f /home/vagrant/.kube/config /vagrant/.kube/config.vagrant

If you want to perform these steps manually, you can omit all the Vagrant-related parts and keep only the first 3 lines.

3️⃣ Updates the kubelet node IP and installs a Pod network add-on (here it will be Canal):

echo "Environment=\"KUBELET_EXTRA_ARGS=--node-ip=$MASTER_NODE_IP\"" | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
envsubst < /vagrant/cni/canal/canal.yaml | kubectl apply -f -

This is an opinionated installation using Canal; of course, you can swap it for the network add-on of your preference.

Bear in mind that envsubst replaces, in the Canal manifests, the placeholder for K8S_POD_NETWORK_CIDR with the value we passed to this script as an environment variable from our Vagrantfile.
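
If you haven't used envsubst before, here is a tiny, self-contained illustration of what it does; the JSON fragment is made up for the example and is not the actual content of canal.yaml:

export K8S_POD_NETWORK_CIDR=10.244.0.0/16
echo '"Network": "${K8S_POD_NETWORK_CIDR}"' | envsubst
# prints: "Network": "10.244.0.0/16"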

Reference here.

4️⃣ Initializes the kube-proxy add-on:

kubeadm init phase addon kube-proxy \
--control-plane-endpoint $MASTER_NODE_IP \
--pod-network-cidr=$K8S_POD_NETWORK_CIDR

As you may have noticed, we skipped initializing the kube-proxy add-on earlier, and that was for a reason: there is a documented hiccup, and you can find the whole discussion in this GitHub issue.

5️⃣ Creates the workers' join command in kubeadm/init-worker.sh:

rm -f /vagrant/kubeadm/init-worker.sh
kubeadm token create --print-join-command >> /vagrant/kubeadm/init-worker.sh

Reference here.

Our second kubeadm script, the one that was just auto-generated in the last step, is kubeadm/init-worker.sh. It is a one-liner that will run on every soon-to-be worker node (after it gets bootstrapped). An example of its content (it will vary with every execution, as the tokens are created per installation and cluster):

kubeadm join 192.168.57.100:6443 --token ks3jah.lckxyk98oqpaxxxx --discovery-token-ca-cert-hash sha256:852407xxxxxxx

Take it for a spin

Inside the repo, you will find an additional bash script that builds the environment:

#!/bin/bash

rm -rf .kube/config.vagrant
rm -rf kubeadm/init-worker.sh

vagrant up master --provider=virtualbox
cp -f .kube/config.vagrant ~/.kube/config.vagrant

for i in {1..3}
do
  sleep 5
  vagrant up worker-$i &
done

It will clean your local folders of auto-generated artefacts from previous runs, provision the master, and then provision the 3 worker nodes in parallel. It will also export a kubeconfig to ~/.kube/config.vagrant on your local host, which you can use straight away to connect to your cluster as soon as the master node is up and running:

export KUBECONFIG=~/.kube/config.vagrant
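
Once the workers have joined (give them a few minutes), a quick check from your host should show one master and three workers; assuming the default names from the Vagrantfile, something along these lines:

kubectl get nodes -o wide
# expect k8s-master plus k8s-worker-1..3, all Ready once the Canal pods are up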

All the files discussed in this article can be found in this repository:

If you found this information useful, don’t forget to 👏 under this article and follow my account for more content on Kubernetes. Stay tuned…
