Innovative ways to use Weighted Canary Deployments and Multiple Container Runtimes in Kubernetes

Unleashing new possibilities

Pradipta Banerjee
ITNEXT


Creatively combining different Kubernetes features has the potential to solve some very tricky problems. In this blog, I’ll share how you can combine the power of weighted canary deployments and multiple container runtimes in Kubernetes to solve some of these tricky issues.

Although ingress controllers in Kubernetes already provide traffic-splitting features for canary deployments, such as the canary-weight annotation in Nginx or the route-based deployment strategy in Red Hat OpenShift, have you ever thought about the immense potential they hold when combined with diverse container runtimes?

Consider this: Your Kubernetes cluster uses two container runtimes — runc (the default) and Kata (accessible via runtimeClassName: kata). You encounter an issue with an application running as a runc container. You can’t run a shell within the container, and it lacks debugging tools. What’s the solution?

The Kubernetes docs describe the following approaches.

  1. Debugging with an Ephemeral Debug Container: Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn’t include debugging utilities.
  2. Debugging Using a Copy of the Pod: In some situations, you can use kubectl debug to create a copy of the pod with configuration values changed to aid debugging.
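For reference, the first two approaches boil down to `kubectl debug` invocations along these lines (shown against the `echo-runc` pod used later in this post; the busybox image is just an assumption — use whatever debug image you prefer):

```shell
# Approach 1: attach an ephemeral debug container to the running pod.
kubectl debug -it echo-runc -n sample --image=busybox:1.36 --target=echo-runc

# Approach 2: create a copy of the pod with a debug-friendly image and
# a shared process namespace, leaving the original pod untouched.
kubectl debug echo-runc -n sample -it \
  --copy-to=echo-runc-debug --image=busybox:1.36 --share-processes
```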

Here I’m sharing a third approach: deploy another version of your application as a Kata container with your preferred debugging tools, and redirect a portion of the traffic to the Kata container for debugging.

What advantage does this approach provide over the existing ones? With this approach, you have additional flexibility for debugging as you can take application dumps and use kernel tracing mechanisms without affecting existing applications on the worker node.

But we’re just getting started. Imagine another scenario: a critical application dependency requires an update, but you can’t upgrade it immediately. You need to keep the application running while minimising the risks associated with the outdated dependency.

With its virtualisation-based isolation, the Kata container runtime can provide a lifeline in such situations. The virtualisation layer adds an extra layer of protection for your application, reducing the risk of harm to the worker node and to the other applications running on it while an application with outdated dependencies must keep running. By redirecting all traffic to the application running within Kata containers, you can buy yourself additional time to update the dependencies. This ensures business continuity while maintaining security and stability.

Please note that I’m not advocating for running applications with outdated dependencies. You shouldn’t. However, there are times when immediately upgrading dependencies is not practically possible. In these scenarios, having the freedom to keep running the application without significantly increasing the security risks for your environment becomes extremely important for the business. It’s a tradeoff.

If you’re feeling particularly adventurous, consider integrating this approach with a policy engine like OPA or Kyverno to enhance your control and governance capabilities further.
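For instance, a policy engine could enforce that any workload flagged as carrying outdated dependencies runs under the Kata runtime class. A minimal Kyverno sketch of this idea (the `legacy-deps` label and the policy name are hypothetical, not part of the example in this post):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-kata-for-legacy
spec:
  validationFailureAction: Enforce
  rules:
  - name: legacy-apps-must-use-kata
    match:
      any:
      - resources:
          kinds:
          - Pod
          selector:
            matchLabels:
              legacy-deps: "true"
    validate:
      message: "Pods labelled legacy-deps=true must use the kata runtime class."
      pattern:
        spec:
          runtimeClassName: kata
```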

To conclude, here’s a complete example deployed on a Kubernetes cluster running the Nginx Ingress controller.

Weighted canary deployment scenarios with runc and Kata containers

A simple “http-echo” application is used for illustrative purposes.

Deploy the following YAML to create a pod that runs the http-echo application using the default runc container runtime.

apiVersion: v1
kind: Pod
metadata:
  name: echo-runc
  labels:
    app: echo-runc
  namespace: sample
spec:
  containers:
  - name: echo-runc
    image: quay.io/bpradipt/http-echo
    args: ["-text", "Hello from runc container", "-listen", ":8080"]

---
kind: Service
apiVersion: v1
metadata:
  name: echo-runc
  namespace: sample
spec:
  selector:
    app: echo-runc
  ports:
  - port: 8080

The following YAML creates a pod that runs the same http-echo application using the Kata containers runtime, along with a privileged sidecar for debugging.

apiVersion: v1
kind: Pod
metadata:
  name: echo-kata
  labels:
    app: echo-kata
  namespace: sample
spec:
  containers:
  - name: echo-kata
    image: quay.io/bpradipt/http-echo
    args: ["-text", "Hello from Kata container", "-listen", ":8080"]
  - name: perf-sidecar
    image: quay.io/bpradipt/perf-amd64:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: perf-output
      mountPath: /out
    securityContext:
      privileged: true
  volumes:
  - name: perf-output
    hostPath:
      path: /run/out
      type: DirectoryOrCreate
  shareProcessNamespace: true
  runtimeClassName: kata

---
kind: Service
apiVersion: v1
metadata:
  name: echo-kata
  namespace: sample
spec:
  selector:
    app: echo-kata
  ports:
  - port: 8080
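Because the pod shares its process namespace, the privileged sidecar can see the application container’s processes and profile them. A hypothetical invocation, assuming the sidecar image ships `perf`:

```shell
# Record 30 seconds of system-wide profile data from inside the sidecar;
# the output lands in /out, which is backed by /run/out on the worker node.
kubectl exec -it echo-kata -c perf-sidecar -n sample -- \
  perf record -a -g -o /out/perf.data -- sleep 30
```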

Create an Ingress resource to define the routing rules and enable traffic splitting so that 10% of the traffic is routed to the Kata pod for debugging purposes:


# ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  namespace: sample
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-runc
            port:
              number: 8080

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  namespace: sample
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/canary: "true"
    # send 10% of traffic to echo-kata
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-kata
            port:
              number: 8080
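You can verify the split by firing a batch of requests at the ingress and tallying which backend answered. A rough sketch — `INGRESS_HOST` is an assumption, so point it at your Nginx ingress controller’s address:

```shell
# Count identical response lines: with canary-weight "10", roughly 1 in 10
# responses should read "Hello from Kata container".
tally() {
  sort | uniq -c | sort -rn
}

INGRESS_HOST=${INGRESS_HOST:-ingress.example.com}   # hypothetical host

for _ in $(seq 1 20); do
  curl -s --max-time 2 "http://$INGRESS_HOST/"
done | tally
```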

To temporarily isolate the runc pod, change the canary-ingress object from the example above to set canary-weight to 100, routing all traffic to the Kata pod.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  namespace: sample
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/canary: "true"
    # send 100% of traffic to echo-kata
    nginx.ingress.kubernetes.io/canary-weight: "100"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-kata
            port:
              number: 8080
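Rather than re-applying the full manifest, the same switch can be made in place (assumes kubectl access to the cluster):

```shell
# Route 100% of traffic to the Kata pod without editing the YAML.
kubectl annotate ingress canary-ingress -n sample \
  nginx.ingress.kubernetes.io/canary-weight=100 --overwrite
```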

It would be great to know about the creative ways in which you are combining and utilising different Kubernetes features for your business. Let’s exchange ideas and gain insights from each other! :-)
