Development and Debugging with Kubernetes
I’m working on an OSX laptop, and in this article I will show how to quickly create a dev/test Kubernetes cluster and how to use some handy tools for working with and debugging Kubernetes.
Create a development Kubernetes Cluster
In this case, I wanted to create a real Kubernetes cluster on my development machine, with minimal memory usage and fast startup times, using Docker-in-Docker instead of traditional local virtual machines.
It turns out that kubeadm-dind-cluster is made exactly for that.
Installing kubeadm-dind-cluster
Pre-requisites
I’m using docker-for-mac on the laptop, and I use Homebrew to install the Unix tools:
brew install jq
brew install md5sha1sum
We also need to create a /boot
directory (it does not exist on OSX) and bind mount it into Docker using Preferences -> File Sharing.
Download the bootstrap script:
wget https://cdn.rawgit.com/kubernetes-sigs/kubeadm-dind-cluster/master/fixed/dind-cluster-v1.11.sh
chmod +x dind-cluster-v1.11.sh
mv dind-cluster-v1.11.sh /usr/local/bin/dind-cluster.sh
Note: you can choose a different Kubernetes version (1.8, 1.9, 1.10, 1.11)
Start the cluster
# start the cluster
$ dind-cluster.sh up
By default it will create 1 master and 2 workers.
You can specify the number of nodes:
NUM_NODES=3 dind-cluster.sh up
You can also give access to an insecure registry:
DIND_INSECURE_REGISTRIES='["my-private-registry.com"]' dind-cluster.sh up
The output of the script gives the URL to connect to the Kubernetes Web UI, in my case: http://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy/#!/workload?namespace=default
TearDown the cluster
When you need your resources for something else, you can shut down the cluster
dind-cluster.sh down
If you won’t bring the cluster up again later, you can also clear all its resources:
dind-cluster.sh clean
Useful tools
Alias for Kubernetes
Working with Kubernetes, you are going to type a lot of kubectl commands, so I recommend making a simple alias:
alias k=kubectl
In order to keep kubectl bash completion working for the alias, you’ll also need to update the end of the completion script in /usr/local/etc/bash_completion.d/kubectl
on OSX, or in /etc/bash_completion.d/kubectl
on Linux:
if [[ $(type -t compopt) = "builtin" ]]; then
complete -o default -F __start_kubectl kubectl
complete -o default -F __start_kubectl k
else
complete -o default -o nospace -F __start_kubectl kubectl
complete -o default -o nospace -F __start_kubectl k
fi
Switch Kubernetes Context and Namespaces
Kubectl can manage several Kubernetes clusters through its configuration file .kube/config,
but it is not easy to switch between them or to change namespaces.
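For reference, this is roughly what a kubeconfig with two clusters looks like; the names and server addresses below are hypothetical, made up for illustration:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: dind
  cluster:
    server: https://127.0.0.1:8443
- name: prod
  cluster:
    server: https://prod.example.com:6443
contexts:
- name: dind
  context:
    cluster: dind
    user: dind-admin
    namespace: default
- name: prod
  context:
    cluster: prod
    user: prod-user
current-context: dind
```

Switching clusters means changing current-context, and switching namespaces means editing the namespace field of the active context, which is tedious to do by hand.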
There are some very useful tools that will make your life easier working with Kubernetes.
Kubectx / Kubens will install two tools, kctx
and kns,
that allow you to easily change Kubernetes context and namespace.
brew install kubectx --with-short-names
Now it is possible to quickly view and change Kubernetes contexts:
$ kctx dind
Switched to context "dind".
We can also easily change the Kubernetes namespace:
$ kubectl create namespace demo
namespace/demo created
$ kns demo
Context "dind" modified.
Active namespace is "demo".
Deploy applications with Helm
We are going to use Helm to deploy applications.
brew install kubernetes-helm
Under the hood, helm uses a server-side component called tiller,
which we must install on the cluster using:
$ helm init
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!
We can use helm to deploy an application on Kubernetes as if we were using apt
or yum
to deploy packages on Linux.
We can search for packages:
$ helm search wordpress
NAME CHART VERSION APP VERSION DESCRIPTION
stable/wordpress 3.0.0 4.9.8 Web publishing platform for building blogs and ...
And install:
$ helm install stable/wordpress
...
...
NOTES:
1. Get the WordPress URL:

   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
         Watch the status with: 'kubectl get svc --namespace demo -w hazy-hedgehog-wordpress'

   export SERVICE_IP=$(kubectl get svc --namespace demo hazy-hedgehog-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
   echo "WordPress URL: http://$SERVICE_IP/"
   echo "WordPress Admin URL: http://$SERVICE_IP/admin"

2. Login with the following credentials to see your blog

   echo Username: user
   echo Password: $(kubectl get secret --namespace demo hazy-hedgehog-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)
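The last NOTES line works because Kubernetes stores Secret data base64-encoded: kubectl returns the raw encoded value, and piping it to base64 --decode recovers the plaintext. A minimal sketch of just that decoding step, using a stand-in value instead of a real cluster Secret:

```shell
# Stand-in for the encoded value kubectl would return; a real value comes from:
#   kubectl get secret ... -o jsonpath="{.data.wordpress-password}"
encoded="cGFzc3dvcmQxMjM="

# Decode it the same way the helm NOTES do
password=$(printf '%s' "$encoded" | base64 --decode)
echo "$password"   # prints: password123
```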
See what is currently deployed:
$ helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
hazy-hedgehog 1 Sun Sep 30 00:24:19 2018 DEPLOYED wordpress-3.0.0 demo
And delete it when you don’t need it anymore:
$ helm delete hazy-hedgehog
release "hazy-hedgehog" deleted
Debug application within Kubernetes
The Hard Way
I’ve written a previous article, Debug a Go Application in Kubernetes from IDE, which explains how we duplicate a pipeline to compile our application in debug mode, create a special container with the dlv
Go debugger, expose the dlv service port, and connect to it from our local IDE using a kubectl port-forward command.
In the next section, we are going to see simpler alternatives.
Debugging an application using Telepresence
Telepresence substitutes a two-way network proxy for your normal pod running in the Kubernetes cluster. This pod proxies data from your Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed through the proxy to the remote Kubernetes cluster.
That will allow launching a local program to debug in our IDE that will be able to communicate as if it was in the Kubernetes cluster.
This uses a VPN-like tunnel between your computer and the Kubernetes cluster, so it can disrupt your other connections.
While telepresence is running, all your traffic is routed through the tunnel, so other applications you use may not work correctly.
Install on OSX
brew cask install osxfuse
brew install socat datawire/blackbird/telepresence
Test
In this example, we create a local Docker container with access to /var/run/secrets
(mounted by the Kubernetes API and retrieved by telepresence), which makes it able to call the Kubernetes API:
telepresence --mount=/tmp/known --docker-run --rm -it -v=/tmp/known/var/run/secrets:/var/run/secrets lachlanevenson/k8s-kubectl version --short
We ask telepresence to mount the remote pod filesystem in the local directory /tmp/known,
and we then mount that into our test Docker container.
Swap a Deployment with a local docker image
Telepresence allows swapping a pod deployment in the cluster with your local box.
Here we launch a test Cassandra Operator, deployed with Helm, which creates a deployment in the Kubernetes cluster:
helm install --name cassandra-operator ./helm/cassandra-operator/
We can see that it has created the cassandra-operator
deployment:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
cassandra-operator 1 1 1 1 4h
Now we will use telepresence to swap the operator deployment with a local Docker image of the Cassandra Operator:
$ telepresence --swap-deployment cassandra-operator --mount=/tmp/known --docker-run --rm -it -v=/tmp/known/var/run/secrets:/var/run/secrets sebmoule/cassandra-operator:0.1.3-master
INFO[0000] Go Version: go1.10.4
INFO[0000] Go OS/Arch: linux/amd64
INFO[0000] operator-sdk Version: 0.0.5+git
INFO[0000] cassandra-operator Version: 0.1.3
INFO[0000] cassandra-operator LogLevel: debug
INFO[0000] Watching db.orange.com/v1alpha1, CassandraCluster, tele, 10
DEBU[0000] starting cassandraclusters controller
--swap-deployment: replaces the deployment in the cluster with the telepresence proxy
--mount=/tmp/known: creates the TELEPRESENCE_ROOT directory, which is used to synchronize volumes with the Kubernetes pod
--docker-run -v: uses Docker mount options to expose the TELEPRESENCE_ROOT volume inside the local container
It is also possible to use another image for the telepresence proxy, for example:

export TELEPRESENCE_REGISTRY=myprivateregistry.com/datawire

For more information on telepresence, please check the docs.
Swap a Deployment to use our local IDE
Now we are going to set up the telepresence tunnels on our local box and retrieve the Pod environment variables from the Cassandra operator deployment, so that we can later inject them into our IDE:
telepresence --swap-deployment cassandra-operator --mount=/tmp/known --env-file cassandra-operator.env
This creates a file similar to:
$ cat cassandra-operator.env
KUBERNETES_PORT=tcp://172.18.0.1:443
KUBERNETES_PORT_443_TCP=tcp://172.18.0.1:443
KUBERNETES_PORT_443_TCP_ADDR=172.18.0.1
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=172.18.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
LOG_LEVEL=Debug
POD_NAME=operator-cassandr-7313751b30734a338851b579db9c2608-67bb84dhb6gg
TELEPRESENCE_CONTAINER=cassandra-operator
TELEPRESENCE_CONTAINER_NAMESPACE=cassandra-test
TELEPRESENCE_POD=operator-cassandr-7313751b30734a338851b579db9c2608-67bb84dhb6gg
TELEPRESENCE_ROOT=/tmp/known
WATCH_NAMESPACE=cassandra-test
Now we can import those environment variables into our IDE debugger.
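If your IDE run configuration can inherit the environment of the shell that launches it, one portable trick is to export every line of the generated env file before starting the IDE or the process. A small sketch of that step, recreating a tiny version of the file with a few illustrative values from the output above:

```shell
# Recreate a tiny version of the telepresence-generated env file
# (values copied from the example output above)
cat > /tmp/cassandra-operator.env <<'EOF'
LOG_LEVEL=Debug
WATCH_NAMESPACE=cassandra-test
TELEPRESENCE_ROOT=/tmp/known
EOF

# 'set -a' marks every variable assigned while it is active for export,
# so sourcing the file exports all KEY=VALUE pairs at once
set -a
. /tmp/cassandra-operator.env
set +a

echo "$WATCH_NAMESPACE"   # prints: cassandra-test
```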
We make a little hack on the local filesystem so that the operator will find the files it expects:
sudo mkdir -p /var/run/secrets/kubernetes.io/
sudo ln -s /tmp/known/var/run/secrets/kubernetes.io/serviceaccount /var/run/secrets/kubernetes.io/serviceaccount
Then you can click the debug button in the IDE: the Operator starts and works as if it were inside the Kubernetes cluster, and you can debug it normally.
KubeSquash for debugging
KubeSquash is a simple command-line tool. Its user interface is designed to be dead simple: invoke it with the single command kubesquash, target the desired pod, and the debugging session starts automatically, with nothing to configure or deploy.
Download KubeSquash from here
How it works
When you launch kubesquash, it creates a squash
namespace in the Kubernetes cluster and deploys a Pod with the Go debugger that is able to attach to the Pod you want to debug.
For debugging purposes, we can ask kubesquash to keep the temporary pod it installs in the cluster:

kubesquash -no-clean
kubectl -n squash describe pod
kubectl -n squash logs -l squash=kubesquash-container
Use it in the command line
If you use kubesquash on a private Kubernetes cluster with no Internet access, you can first download the kubesquash image soloio/kubesquash-container-dlv:v0.1.6,
push it to your private registry, and then ask kubesquash to use your image:
$ kubesquash --container-repo my-private-registry
? Select a namespace [Use arrows to move, type to filter]
❯ default
Kubesquash now asks you to select the namespace you want to debug in, then the target Pod, and finally the dlv
debugger.
kubesquash --container-repo registry.gitlab.si.francetelecom.fr/dfyarchicloud/dfyarchicloud-registry
? Select a namespace cassandra-demo
? Select a pod operator-cassandra-operator-88fd9bb9c-jwqvl
? Going to attach dlv to pod operator-cassandra-operator-88fd9bb9c-jwqvl. continue? Yes
If you don't see a command prompt, try pressing enter.
(dlv)
Now you have a debugger directly attached to your targeted pod.
Warning:
for kubesquash to create a pod with debugging capabilities over another pod’s process, you need privileged access on the cluster.
Use it with your IDE
At the time of writing, KubeSquash only supports VSCode as an IDE, for which it provides an extension.
Because my private Kubernetes cluster doesn’t have access to the Internet, I need to make kubesquash use my private registry for the Docker image.
This is not currently configurable in the extension, so I had to change the line directly in the extension code, extension.js l 179:
let stdout = await exec(`kubesquash --container-repo myprivate-registry -machine -debug-server -pod ${selectedPod.metadata.name} -namespace ${selectedPod.metadata.namespace}`);
In order to allow my IDE to properly set up breakpoints, I also needed to change the remotePath used in extension.js l 202:
remotePath: "/go/src/path/to/my/app",
With that, I just need to start the extension from the VSCode command palette:
KubeSquash - debug pod
It asks you for the pod to connect to.
It then downloads the kubesquash Docker image into the squash namespace, on the Kubernetes node where your targeted pod runs, and attaches the dlv debugger to your target pod.
It also configures your local VSCode so that it automatically attaches to the dlv in the cluster, and you are ready to start debugging your app in your IDE with your local breakpoints.
This is a really wonderful tool!!
Other Tools
There are a bunch of other tools I haven’t tested yet; you can find some here:
https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/