Automated HA Kubernetes deployment on Rock Pis

Michael Fornaro · ITNEXT · Mar 19, 2021

Learn how to create an HA Kubernetes cluster on Rock Pis.

Prerequisites

It’s recommended to have at least 4 Rock Pis. This guide uses three of them as master nodes and the remainder as worker nodes. You can add more Rock Pis to create additional master or worker nodes.

Install the following CLI tools to be able to follow the steps in this guide: Ansible, to run the playbooks, and kubectl, to interact with the cluster.
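For example, on a Debian-based workstation both could be installed like this; a sketch, where the pinned kubectl version matches the cluster deployed later, so adjust it and the architecture for your machine:

# Install Ansible from the distribution packages.
sudo apt-get update && sudo apt-get install -y ansible

# Install kubectl from the upstream release binaries.
curl -LO "https://dl.k8s.io/release/v1.20.2/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl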

Flash OS

Download the flash tool, Etcher, from Downloads. Choose the right version for your host operating system.

Find and download your preferred operating system from the available list, then use Etcher to flash your microSD, eMMC, or SSD with the OS you downloaded.

Instructions on flashing your device can be found in Getting Started.
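If you prefer the command line over Etcher, dd can write the image as well; a sketch, where the image filename and target device are placeholders you must verify first (writing to the wrong device will destroy its data):

# Identify the target device first (e.g. with lsblk), then write the image.
lsblk
sudo dd if=your-downloaded-image.img of=/dev/sdX bs=4M status=progress conv=fsync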

Prepare

Before we can bootstrap Kubernetes using Ansible, there are a few small changes we need to make. Currently this is a manual step, and it needs to be done on each device after it boots.

  1. SSH to the device(s)
  2. Install required packages:
sudo apt-get update
sudo apt-get install -y wget

Note: You may see some errors about signatures that couldn’t be verified. This is expected and is exactly what the next steps will fix.

3. Use either vim or nano to edit /etc/apt/sources.list.d/apt-radxa-com.list to look like the below (a scripted alternative is shown after this list):

deb http://apt.radxa.com/buster-stable/ buster main
deb http://apt.radxa.com/buster-testing/ buster main

4. Execute the following commands:

wget -O - apt.radxa.com/buster-testing/public.key | sudo apt-key add -
wget -O - apt.radxa.com/buster-stable/public.key | sudo apt-key add -
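As an alternative to editing the file in step 3 by hand, you can write it in one pass; a sketch, assuming the same file path as above:

sudo tee /etc/apt/sources.list.d/apt-radxa-com.list >/dev/null <<'EOF'
deb http://apt.radxa.com/buster-stable/ buster main
deb http://apt.radxa.com/buster-testing/ buster main
EOF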

Ensure these steps have been repeated on every device!
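To confirm the fix took on every device, you can re-run apt-get update over SSH; a rough sketch, assuming the default rock user and the device IPs used in the inventory below (sudo may still prompt for a password on each host):

# The signature warnings from earlier should now be gone.
# Adjust the IP list to match your devices.
for host in 192.168.1.111 192.168.1.112 192.168.1.113 192.168.1.114; do
  ssh -t rock@"$host" 'sudo apt-get update'
done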

Setup Kubernetes Cluster

Now that all our Rock Pi nodes are running, you need to update the values in the Ansible inventory file.

Below is an example of how I configured my master and worker nodes. The ansible_user and ansible_ssh_pass values in this example are the default Rock Pi SSH credentials.

[cluster:children]
masters
workers

[k8s:children]
masters
workers

[masters]
k8s-master-01 hostname=k8s-master-01 ansible_host=192.168.1.111 ansible_user=rock ansible_ssh_pass=rock
k8s-master-02 hostname=k8s-master-02 ansible_host=192.168.1.112 ansible_user=rock ansible_ssh_pass=rock
k8s-master-03 hostname=k8s-master-03 ansible_host=192.168.1.113 ansible_user=rock ansible_ssh_pass=rock

[workers]
k8s-worker-01 hostname=k8s-worker-01 ansible_host=192.168.1.114 ansible_user=rock ansible_ssh_pass=rock
k8s-worker-02 hostname=k8s-worker-02 ansible_host=192.168.1.116 ansible_user=rock ansible_ssh_pass=rock
k8s-worker-03 hostname=k8s-worker-03 ansible_host=192.168.1.117 ansible_user=rock ansible_ssh_pass=rock

When the inventory has been configured with all hosts, there is one last thing to set up. We need to assign a VIP (“Virtual IP”) that will be used to load-balance across the HA master nodes.

Open masters.yml and set the keepalived_vip value to an unassigned IP address. For my configuration I use 192.168.1.200.
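For reference, the variable itself is a single line; a sketch of how it might appear in masters.yml (the file’s surrounding structure is omitted here):

keepalived_vip: 192.168.1.200

Pick an address outside your router’s DHCP range so nothing else on the network claims it.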

Run the following command to verify SSH connectivity.

env ANSIBLE_CONFIG=ansible/ansible.cfg ansible all -m ping

A successful response should look something like the following:

k8s-master-01 | SUCCESS => {
...
"ping": "pong"
...
}

Note: If your output returns success for each ping, you can continue; otherwise there may be a misconfiguration in the inventory file or a network connectivity issue.
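If a host fails the ping, testing SSH directly usually narrows the problem down; for example, against the first master from the inventory above:

# Check raw SSH reachability and credentials for one host.
ssh rock@192.168.1.111 'echo ok'

# Or re-run the Ansible ping with verbose output for more detail.
env ANSIBLE_CONFIG=ansible/ansible.cfg ansible all -m ping -vvv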

Now that we’ve tested network connectivity, we can run the automation scripts that take care of deploying Kubernetes:

env ANSIBLE_CONFIG=ansible/ansible.cfg ansible-playbook \
  --extra-vars "ansible_become_pass=rock" \
  ansible/playbooks/all.yml

Once successfully completed you can use kubectl to interact with your Kubernetes cluster:

kubectl get nodes --kubeconfig ansible/playbooks/output/k8s-config.yaml
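To avoid passing --kubeconfig on every invocation, you can export it for the current shell session instead:

export KUBECONFIG="$PWD/ansible/playbooks/output/k8s-config.yaml"
kubectl get nodes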

The expected output should look something like the following:

NAME            STATUS   ROLES                  AGE     VERSION
k8s-master-01   Ready    control-plane,master   3h45m   v1.20.2
k8s-master-02   Ready    control-plane,master   3h44m   v1.20.2
k8s-master-03   Ready    control-plane,master   3h44m   v1.20.2
k8s-worker-01   Ready    <none>                 3h44m   v1.20.2

Congratulations! You now have a highly available Kubernetes cluster running on Rock Pis.
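As a quick smoke test, you could schedule something simple and confirm it lands on a worker; the deployment name and image here are just examples:

# Deploy a throwaway nginx and watch where the pod is scheduled.
kubectl create deployment hello --image=nginx
kubectl get pods -o wide

# Clean up afterwards.
kubectl delete deployment hello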
