The Ultimate Guide to Building Your Personal K3S Cluster

Nima Mahmoudi · Published in ITNEXT · Nov 14, 2020


There are a lot of reasons why you might want a personal Kubernetes cluster. Personally, I like building my own tools and getting a better grasp of what happens under the hood. You might want to experiment with Kubernetes for your own reasons but be held back by the high cost of the managed Kubernetes services offered by AWS, GCP, Azure, DigitalOcean, and others.

K3s is a lightweight distribution of Kubernetes created at Rancher Labs. It is a complete Kubernetes distribution, but all processes are combined into a single binary, cross-compilation for ARM is added, a lot of extra bells and whistles you don’t normally use are dropped, and some extra user-space tools are bundled in.

In this tutorial, I will show you how to set up a complete k3s cluster that is fully compatible with upstream Kubernetes while being simple to deploy on virtual machines. After going through this tutorial, you will be able to deploy any Kubernetes application onto your cluster and take advantage of automatic storage provisioning and a software load balancer, features normally found only in managed Kubernetes services.

Setting Up Your Virtual Machine (VM)

First, you need to set up your VM. If you already have one, make sure any important data on it is backed up, just to be safe. If you want to create a new VM, there are a lot of options out there. I suggest comparing prices across the different cloud providers, but generally you want at least 4GB of RAM for a Kubernetes cluster to make sense (1GB can work for stateless deployments without the storage driver; with the storage driver, the minimum is 2GB of RAM).

If you don’t want to shop around for now, Vultr offers a $100 credit for trying it out; the nice things about them are that they have data centers all around the world and a great performance-to-cost ratio. If you live in Europe, or the latency to Europe is acceptable for you, you can try Hetzner; they are REALLY CHEAP (I haven’t used them for long, so I can’t speak to their performance or availability).

After you have chosen your cloud provider and VM size, it is time to create the server. This is usually straightforward: go to create a new instance, pick the data center closest to you, choose the VM size (I suggest anything with 4GB of RAM, but 2GB works for trying things out), choose “Ubuntu 18.04” or “Ubuntu 20.04” as the operating system, add your SSH key, and finally, click deploy.

After the virtual machine is up, SSH into it and change its hostname:

sudo hostnamectl set-hostname k3smaster
sudo reboot

Note that if you are planning to create a cluster with more than one VM, each VM should have a unique hostname. Make sure that you change the hostname for each VM.

If you are using a firewall for your VM, make sure you open the ports k3s needs to be reachable from outside the VM: TCP ports 6443 (Kubernetes API server) and 10250 (kubelet), and UDP port 8472 (inter-node traffic).
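
If your VM uses Ubuntu’s built-in ufw, a minimal sketch of opening these ports might look like the following (your cloud provider’s firewall or security groups would need equivalent rules):

# assumption: ufw on Ubuntu; adapt for your provider's firewall
sudo ufw allow 22/tcp     # keep SSH reachable before enabling the firewall
sudo ufw allow 6443/tcp   # Kubernetes API server
sudo ufw allow 10250/tcp  # kubelet
sudo ufw allow 8472/udp   # flannel VXLAN traffic between nodes
sudo ufw enable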

Installing Tools on The Client

To use the Kubernetes cluster, you need some additional tools, like kubectl, for communicating with it. These tools can be installed either on the master node or on your laptop, whichever is easier for you.

To install most of these tools, I will use Arkade, an awesome tool created by Alex Ellis (who is also the creator of OpenFaaS) that turns each installation into a one-liner.

First, we can install Docker and Docker Compose to help us with development:

# Docker
curl -sSL https://nimamahmoudi.github.io/cicd-cheatsheet/sh/install-docker.sh | bash
# Docker Compose
sudo apt-get update && sudo apt install -qy python3-pip && pip3 install docker-compose
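
You can quickly verify that both installed correctly:

# sanity check (optional)
docker --version
docker-compose --version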

Then, we can install Arkade, along with its bash-completion:

# Arkade
curl -SLfs https://dl.get-arkade.dev | sudo sh
# Add tools bin directory to PATH
echo "export PATH=\$HOME/.arkade/bin:\$PATH" >> ~/.bashrc
# Copy bash completion script
arkade completion bash > ~/arkade_bash_completion.sh
echo "source ~/arkade_bash_completion.sh" >> ~/.bashrc
# starting bash completion
source ~/.bashrc

Next, we need kubectl to communicate with the Kubernetes API server:

# Kubectl
arkade get kubectl
# Kubectl bash completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
# starting bash completion
source ~/.bashrc

Next, kustomize, helm, k3sup, and kompose:

# Kustomize
arkade get kustomize
# Helm
arkade get helm
# K3sup
arkade get k3sup
# Kompose for converting docker-compose
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
echo 'source <(kompose completion bash)' >>~/.bashrc
source ~/.bashrc

If you don’t know about any of these tools, check them out, but here is a quick summary of each. Kustomize helps us change values in Kubernetes manifests dynamically. Helm offers more elaborate templating for larger projects and installs external applications packaged as Helm charts. K3sup is another tool created by Alex Ellis, which we will use to install k3s on the VM. Kompose converts docker-compose files into Kubernetes manifests.
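
As a quick illustration of kompose, converting a compose file is a one-liner (docker-compose.yml here is a hypothetical file in the current directory):

# convert a docker-compose file into Kubernetes manifests
kompose convert -f docker-compose.yml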

K3s Installation

K3s comes with CoreDNS, the metrics server, and Traefik by default. In this tutorial, we will disable Traefik and use the NGINX ingress controller instead. We will also enable cluster mode (which uses dqlite instead of SQLite, allowing multiple masters).

To install k3s from a remote computer (e.g., your laptop), use the following command:

k3sup install \
--ip YOUR_VM_IP \
--cluster \
--user ubuntu \
--k3s-channel stable \
--local-path ~/.kube/config \
--merge --context k3s \
--k3s-extra-args '--no-deploy traefik --write-kubeconfig-mode 644'

Make sure you replace YOUR_VM_IP with the IP of the VM shown in your cloud console, and adjust --user if your image’s default SSH user is not ubuntu (on some providers it is root). For more options, including high-availability installation, check out the k3sup documentation.

Using k3sup, we bootstrapped k3s on the VM, waited for the installation to finish, fetched the kubeconfig file, merged it into the default location, and named our cluster context k3s. We can also copy the kubeconfig on the VM itself so kubectl works there too:

mkdir -p ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
echo "export KUBECONFIG=~/.kube/config" >> ~/.bashrc
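
Since the kubeconfig was merged under the context name k3s, you can switch to it and verify the connection from your laptop:

# list available contexts and switch to the new k3s context
kubectl config get-contexts
kubectl config use-context k3s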

If you want to run the installation on the VM itself instead of from your laptop, use the following command:

k3sup install \
--cluster \
--local \
--k3s-channel stable \
--local-path ~/.kube/config \
--merge --context k3s \
--k3s-extra-args '--no-deploy traefik --write-kubeconfig-mode 644'

K3s comes with kubectl baked in. You can invoke it on the VM with “k3s kubectl”, or you can set an alias (add this line to ~/.bashrc to make it permanent):

alias kubectl="k3s kubectl"

We can now check out our installation:

kubectl get nodes -o wide

You should see something like this:

NAME        STATUS   ROLES    AGE   VERSION
k3smaster   Ready    master   3m    v1.18.9+k3s1

Now that we have k3s running on one VM, we might want to join other VMs to the cluster:

k3sup join \
--server-ip YOUR_FIRST_VM_IP \
--ip CURRENT_VM_IP \
--user ubuntu \
--k3s-channel stable

This joins the VM as an agent (worker), so the server-only extra args like --no-deploy traefik are not needed; to add another server to the control plane instead, pass k3sup’s --server flag. For more information about k3sup options, check out their GitHub repo. After joining the cluster from all of your VMs, check that everything went well:

kubectl get nodes -o wide

Well done! You have now installed k3s and are ready to deploy stateless applications that don’t need persistent storage; a quick smoke test follows below. After that, we are going to add a couple of other tools that come in handy for more serious deployments on our cluster.
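
As a quick smoke test, here is a minimal stateless deployment; the name hello-k3s and the image are just examples:

# create a throwaway nginx deployment and expose it inside the cluster
kubectl create deployment hello-k3s --image=nginx
kubectl expose deployment hello-k3s --port=80
kubectl get pods -l app=hello-k3s
# clean up when you are done
kubectl delete service,deployment hello-k3s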


Installing Nginx Ingress Controller

Now that we have a running k3s cluster, we want to start deploying awesome applications on it, several of which might share the standard web ports 80 (HTTP) and 443 (HTTPS). For that, and for a wide range of other reasons, you want an ingress controller. The most widely used ingress controller right now is NGINX; read more about it on their website.

Since we installed the proper tools at the beginning, installing NGINX and cert-manager (for Let’s Encrypt HTTPS certificates) is rather easy:

# install nginx ingress
arkade install ingress-nginx --namespace default
# cert-manager for letsencrypt certificates
arkade install cert-manager

Now we are ready to deploy any application that makes use of the ingress controller; a sketch of what that looks like follows below. I will write tutorials for some of the most useful ones soon; you can follow me to be notified when that happens.
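
To give an idea of the shape of such a deployment, here is a minimal sketch of a Let’s Encrypt issuer plus an Ingress. The domain, email, and service name (my-app) are placeholders to replace with your own, and the Ingress apiVersion assumes Kubernetes 1.19+ (older clusters use networking.k8s.io/v1beta1):

# sketch only: replace example.com, you@example.com, and my-app with your own values
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: my-app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
EOF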

Installing Longhorn Storage Controller

Longhorn is another awesome open-source project from Rancher Labs that provides distributed block storage for your Kubernetes cluster. It also ships with a dashboard that helps you with settings, backups, and monitoring. We can easily install Longhorn using Helm:

# add the longhorn helm repository
helm repo add longhorn https://charts.longhorn.io
# update helm repos
helm repo update
# create the namespace
kubectl create namespace longhorn-system
# install longhorn
helm install longhorn longhorn/longhorn --namespace longhorn-system

Check that everything went smoothly:

# all longhorn pods should eventually be Running
kubectl -n longhorn-system get pod

You should see something like this:

NAME                                        READY   STATUS
longhorn-ui-6fb889895f-klgs7                1/1     Running
instance-manager-e-20feb18d                 1/1     Running
instance-manager-r-2d587ad2                 1/1     Running
engine-image-ei-ee18f965-vrjk8              1/1     Running
longhorn-manager-qg7fz                      1/1     Running
longhorn-driver-deployer-6756bb8fd6-wq757   1/1     Running
csi-provisioner-57d6dbf5f4-jn9rr            1/1     Running
csi-provisioner-57d6dbf5f4-5d67p            1/1     Running
longhorn-csi-plugin-z4z5c                   2/2     Running
csi-attacher-5b4745c5f7-z9zqj               1/1     Running
csi-attacher-5b4745c5f7-hdvxj               1/1     Running
csi-provisioner-57d6dbf5f4-zt8jc            1/1     Running
csi-resizer-75ff56bc48-clx4p                1/1     Running
csi-resizer-75ff56bc48-qcq22                1/1     Running
csi-attacher-5b4745c5f7-q276d               1/1     Running
csi-resizer-75ff56bc48-spmt2                1/1     Running

Now you should be able to open the Longhorn UI to check its configuration:

# check out the longhorn service
kubectl -n longhorn-system get svc
# forward the port to check the configuration
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8002:80

You should now be able to open the Longhorn dashboard at “http://localhost:8002”. Add any available disks you have on any VM in the cluster. If your k3s cluster only has one node, you will need to enable “Replica Node Level Soft Anti-Affinity” so that block replicas may be placed on the same node (by default, Longhorn keeps 3 copies of each data block on different VMs, so you won’t lose data if one VM is lost).
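
Once storage is configured, any workload can request a Longhorn-backed volume through the longhorn storage class. A minimal sketch (the claim name demo-pvc is a placeholder):

# sketch: request a 1Gi Longhorn-backed volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF
# the claim should show up as Bound
kubectl get pvc demo-pvc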

Conclusion

You just set up a very “professional-looking” Kubernetes cluster that you can use for learning, automating your personal tasks, or hosting your hobby projects in the cloud without managed services (which would end up costing much more than you probably want to spend). If this is too complicated for what you want to do, you could also take a look at serverless computing; its goal is very different, but it might be the right choice for many people.

In future articles, I will write about the installation procedure of some of the most used self-hosted applications. You can follow me to be notified when those articles come out.

About Me

I am a Ph.D. student at the University of Alberta, a visiting researcher at York University, and a part-time instructor at Seneca College. Day in, day out, I research serverless computing platforms, trying to find ways to improve their performance, reliability, energy consumption, etc., using analytical or data-driven methods (fancy words for “I either use mathematics or machine learning to model serverless computing platforms”).

In case you want to know more about me, check out my website.
