Kubernetes Production Cluster with Vagrant and VirtualBox on macOS

Francesco Vitullo · Published in ITNEXT · Sep 14, 2020

For development purposes, it would be great to have a production-like Kubernetes cluster running locally, and in this article I go through the basic setup needed to get a good one up and running.

Kubespray to the rescue

There are a few open-source solutions for deploying production-ready Kubernetes clusters, but I selected Kubespray.

It comes with Ansible playbooks (deploy, upgrade, utils, and extras) and plenty of tests.

The following should already be installed:

  • Vagrant
  • Virtualbox
  • Python + Pip
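
If any of these are missing, they can be installed with Homebrew (assuming Homebrew is available on your Mac; Vagrant and VirtualBox ship as casks):

brew install --cask vagrant virtualbox
brew install python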

Let’s clone the official repo:

git clone https://github.com/kubernetes-sigs/kubespray

Afterward, moving into the cloned repo, we can install the required dependencies:

cd kubespray && pip3 install -r requirements.txt
# OR
cd kubespray && pip install -r requirements.txt
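
Optionally, to keep Kubespray’s Python dependencies isolated from the system installation, you can install them inside a virtual environment first:

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt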

Once everything has been correctly installed, we can start Vagrant:

vagrant up

This command spawns the VMs, installs Kubernetes on them, and configures the networking. When invoked without customizations, it picks up a sample configuration located in inventory/sample, whose YAML files allow you to customize whatever is desired for the planned cluster. In addition, the Vagrantfile contains all the necessary specifications for the spawned VMs, so it allows us to configure the network, the VM image, its properties, and so on.
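
As a sketch of such a customization: at the time of writing, the Vagrantfile loads optional overrides from a vagrant/config.rb file inside the repo, so cluster size and VM resources can be tweaked without editing the Vagrantfile itself (the variable names below are illustrative; check the top of the Vagrantfile in your checkout for the ones it actually supports):

mkdir -p vagrant
cat > vagrant/config.rb <<'EOF'
$num_instances = 3   # number of VMs to spawn
$vm_memory = 2048    # RAM per VM, in MB
$vm_cpus = 2         # vCPUs per VM
EOF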

When the process completes and the cluster has been successfully created, we can use the kubeconfig that is automatically placed in the artifacts folder.

Let’s move into the .vagrant folder in order to access it:

cd .vagrant/provisioners/ansible/inventory/artifacts

There are 3 files:

admin.conf     kubectl    kubectl.sh

The one we need is “admin.conf”, which contains the configuration for our newly created local cluster; let’s point kubectl at it:

export KUBECONFIG=$(pwd)/admin.conf

So let’s check the connection to our cluster:

$ kubectl cluster-info
Kubernetes master is running at https://172.18.8.101:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Cool, we are connected to the cluster now.
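
As an additional sanity check, we can list the nodes; once provisioning has finished, all three VMs should show up as Ready:

kubectl get nodes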

By default, VirtualBox uses a NAT network type, which isolates the created VMs, so we need port forwarding in order to reach the desired VM (or VMs) from outside.

$ vagrant status
Current machine states:

k8s-1                     running (virtualbox)   # usually the master node
k8s-2                     running (virtualbox)
k8s-3                     running (virtualbox)

And let’s list the VirtualBox instances:

$ vboxmanage list vms
"kubespray_k8s-1_1599909087609_21866" {30ba6c6e-6a1d-4647-8f67-9f3f8276d660}
"kubespray_k8s-2_1599909125366_77977" {f618bfd4-fe82-4437-8e57-2bce2bb9e2b1}
"kubespray_k8s-3_1599909169592_4503" {86275e53-025f-4021-85a8-2ff23a0099cb}

So, what we would like to do is expose the k8s-1 machine with the ID “30ba6c6e-6a1d-4647-8f67-9f3f8276d660”. With the NAT network, we need to forward a port from the host to the guest, and it is possible to do so:

vboxmanage controlvm 30ba6c6e-6a1d-4647-8f67-9f3f8276d660 natpf1 "master-node,tcp,,6443,,6443"

This command creates a port-forwarding rule for the k8s-1 master node: “master-node” is an arbitrary name of your choosing, tcp is the protocol, the empty fields between the commas leave the host and guest IPs unspecified, and both host and guest are assigned the same port (6443, the Kubernetes API server port, in this case).

Once fired, the command exposes our cluster’s API server through port forwarding.
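
If you no longer need the forwarding, the same subcommand can delete the rule by name:

vboxmanage controlvm 30ba6c6e-6a1d-4647-8f67-9f3f8276d660 natpf1 delete "master-node"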

There are lots of things that can be configured with Kubespray, such as the network plugin, the ingress controller, the number of nodes, and so on. I recommend checking the official documentation at https://kubespray.io/

Hope it helped!

Cheers :)
