Building a Kubernetes deployment pipeline for Microsoft Bot Framework – Part 2
This is the second post in this series discussing how I set up an end-to-end Kubernetes deployment pipeline for a chat bot. If you missed part 1, you can check it out here.
Part 2 details the deployment of the theme park chat bot. All the code presented is available on my GitHub at: themeparks-bot.
Tools
In my previous post, I introduced the tools I used to create the infrastructure; for the bot, I reused the majority of them with a few additions:
Docker Hub
Docker Hub is a cloud container image registry. It stores the Docker images produced by each Travis CI build, which are then pulled by Kubernetes.
I chose Docker Hub as it is free for public images and integrates out of the box with Kubernetes.
Docker CLI
Docker CLI is the command-line tool for managing Docker containers and images. I used it to build the chat bot's Docker image and push it to Docker Hub.
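The build-and-push step can be sketched with a couple of Docker CLI commands. This is a hedged illustration, not the exact commands from the pipeline; the image name myuser/themeparks-bot and the use of the short Git SHA as a tag are assumptions.

```shell
# Build the bot image from the repository's Dockerfile, tagging it with the
# short Git SHA so every build produces a traceable image.
# "myuser/themeparks-bot" is a placeholder Docker Hub repository name.
TAG="$(git rev-parse --short HEAD)"
docker build -t "myuser/themeparks-bot:${TAG}" .

# Authenticate and push the image to Docker Hub.
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
docker push "myuser/themeparks-bot:${TAG}"
```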
Creating the Helm chart
To use Helm for deployment I had to create a chart; this section details how I got started and shows the individual components that make up the chart.
Getting started
The Helm CLI provides a create command to generate a skeleton chart with the required directory structure and files. This was run as follows:
helm create themeparks-bot
The output of this command is this folder structure:
themeparks-bot/
|
|- .helmignore # Contains patterns to ignore when packaging Helm charts.
|
|- Chart.yaml # Information about your chart
|
|- values.yaml # The default values for your templates
|
|- charts/ # Charts that this chart depends on
|
|- templates/ # The template files
The main working area is the templates folder, as this is where the Kubernetes resources reside. The skeleton chart even comes with example Deployment, Ingress and Service templates.
Creating the Values file
Before creating the templates, I first modified the values.yaml file, which contains the defaults for the properties used in the templates.
There are a number of best practices for values files that I followed when making modifications (see the documentation). Some key ones are:
- Variable names should use camelCase.
- A flat structure should be preferred over nesting.
- All properties should be documented.
One thing to note is that I haven't specified values for those under the secret property; this is because I pass these at deployment time, since they are values I don't want in source control.
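The actual file isn't reproduced here, but a minimal sketch of what such a values.yaml might contain, following those practices, could look like this (all names and defaults are illustrative assumptions):

```yaml
# Default values for the themeparks-bot chart.
# Variable names are camelCase and the structure is kept flat.

# image is the Docker Hub repository to deploy from.
image: myuser/themeparks-bot
# imageTag is the image tag to deploy.
imageTag: latest
# replicaCount is the number of bot pods to run.
replicaCount: 1
# servicePort is the port the bot listens on inside the cluster.
servicePort: 3978

# secret holds sensitive settings; values are deliberately left empty
# here and supplied at deployment time via --set.
secret:
  microsoftAppId: ""
  microsoftAppPassword: ""
```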
Defining the templates
Using the skeleton templates as a starting point, I modified and created additional templates for the chat bot.
Helm has a lot of general guidance around developing templates such as:
- Template files should follow a naming convention of [Chart name]-[Kubernetes resource type].yaml, e.g. themeparks-bot-deployment.yaml.
- A template file should only contain a single Kubernetes resource (source).
- All resources should have the standard set of Helm labels (source).
- The include function should be used over the template function (source).
With these in mind here are the individual templates.
_helpers
This is the default location for template partials and helpers. These helpers are shared across the separate Kubernetes resource templates.
Guidance around helpers includes:
- Defining a documentation block for each function (source).
- Prefixing each function with the name of the chart (source).
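A minimal sketch of a _helpers.tpl following that guidance might look like this; the helper name and truncation logic are assumptions, mirroring what the skeleton chart generates:

```yaml
{{/*
themeparks-bot.fullname builds a release-qualified resource name,
truncated to 63 characters to respect Kubernetes name length limits.
*/}}
{{- define "themeparks-bot.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```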
Secret
Here’s the first example of using those helper functions and values together to define a Secret.
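The template itself isn't reproduced here; a hedged sketch of what such a Secret template might look like, assuming the helper and value names used elsewhere in this post:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "themeparks-bot.fullname" . }}
  labels:
    app: {{ .Chart.Name }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
type: Opaque
data:
  # Secret values arrive via --set at deploy time and must be base64 encoded.
  microsoftAppId: {{ .Values.secret.microsoftAppId | b64enc | quote }}
  microsoftAppPassword: {{ .Values.secret.microsoftAppPassword | b64enc | quote }}
```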
Deployment
Next up is the Deployment, which depends on the previously defined Secret; this is why I reference it in the annotations section.
The reason I do this is so that whenever any secret values change, Kubernetes will redeploy the pods, picking up the new values (more information about this can be found here).
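A sketch of such a Deployment, including the checksum annotation that triggers a rollout when the Secret changes; the image and value names are assumptions carried over from the earlier sketches:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "themeparks-bot.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
      annotations:
        # Hashing the rendered Secret template means this annotation changes
        # whenever a secret value changes, prompting Kubernetes to redeploy.
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image }}:{{ .Values.imageTag }}"
          ports:
            - containerPort: {{ .Values.servicePort }}
          # Expose the Secret's keys to the bot as environment variables.
          envFrom:
            - secretRef:
                name: {{ template "themeparks-bot.fullname" . }}
```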
Service
Now that I had a Deployment, I needed to expose it within the cluster, hence the Service definition.
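A minimal Service sketch exposing the Deployment within the cluster; the helper and value names are the same assumed ones as above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ template "themeparks-bot.fullname" . }}
spec:
  type: ClusterIP
  ports:
    - port: {{ .Values.servicePort }}
      targetPort: {{ .Values.servicePort }}
  selector:
    app: {{ .Chart.Name }}
```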
Ingress
The last piece is an Ingress resource, since the Service is only reachable within the cluster and not externally.
Because the bot is based on expressjs, it is recommended to put a proxy such as NGINX in front of it to make it internet facing. This is exactly the job of this Ingress definition.
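A sketch of such an Ingress, routing external traffic through NGINX to the Service. It is shown with a current API version; a chart from this era would likely have used extensions/v1beta1, and the host/path rules are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "themeparks-bot.fullname" . }}
  annotations:
    # Ask the NGINX ingress controller (deployed as a chart dependency)
    # to handle this resource.
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ template "themeparks-bot.fullname" . }}
                port:
                  number: {{ .Values.servicePort }}
```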
Specifying dependencies
As I mentioned in the defining the templates section, the chat bot is exposed to the world through NGINX. Additionally the bot uses Redis for caching theme park data.
Rather than creating the Kubernetes templates to deploy NGINX and Redis myself, I took advantage of Helm's support for defining other charts as dependencies, which get deployed alongside your chart.
Luckily for me, charts already existed for both NGINX and Redis. I included them in this chart by creating a requirements.yaml file.
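A requirements.yaml along these lines would pull in both charts from the (then) official stable chart repository; the version constraints are placeholders, not the ones actually used:

```yaml
dependencies:
  - name: nginx-ingress
    version: "*"  # placeholder: pin to a tested version in practice
    repository: https://kubernetes-charts.storage.googleapis.com
  - name: redis
    version: "*"  # placeholder: pin to a tested version in practice
    repository: https://kubernetes-charts.storage.googleapis.com
```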
Chart.yaml
The final piece of the puzzle is the actual chart definition:
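A Chart.yaml for this chart might look roughly like this; the version number and description are illustrative:

```yaml
apiVersion: v1
name: themeparks-bot
version: 0.1.0
description: A Microsoft Bot Framework chat bot for theme park information
```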
Build and deployment
After creating the Helm chart, I put everything together using Travis CI to build and deploy the bot to Kubernetes.
My .travis.yml file looks like the following:
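The file itself isn't reproduced here, but based on the steps described below, a hedged sketch of such a .travis.yml might look like this; the script paths, Node.js version, and image naming are assumptions:

```yaml
language: node_js
node_js:
  - "8"
sudo: required
services:
  - docker
before_install:
  # Hypothetical helper script that installs the Azure CLI, kubectl and helm.
  - ./scripts/install-dependencies.sh
jobs:
  include:
    - stage: build
      script:
        - yarn install
        - yarn build
        - helm lint themeparks-bot
        - if [ "$TRAVIS_BRANCH" = "master" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then
            docker build -t "$DOCKER_USERNAME/themeparks-bot:$TRAVIS_COMMIT" . &&
            docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD" &&
            docker push "$DOCKER_USERNAME/themeparks-bot:$TRAVIS_COMMIT";
          fi
    - stage: deploy
      if: branch = master AND type != pull_request
      script: skip
      deploy:
        provider: script
        script: ./scripts/deploy.sh
```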
This Travis CI definition utilizes Build Stages (currently in beta), which allow you to cleanly separate the various phases of a build/deploy pipeline. You will also notice that I am using a number of environment variables, which are set via the repository settings.
This definition performs the following actions:
- Before any stage, it installs the dependencies that the scripts require, i.e. the Azure CLI, kubectl and Helm.
- The build stage performs the following tasks:
  - Installs dependencies using yarn.
  - Builds the bot using yarn.
  - Lints the Helm chart definition.
  - If the build is on the master branch and successful, builds a Docker image and publishes it to Docker Hub.
- If the build stage passes and the build is on the master branch, it runs the deploy stage. This stage uses the experimental script deployment to run a wrapper script that deploys the chart to Kubernetes.
The wrapper script looks as follows:
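The script isn't shown here; a sketch of such a wrapper, assuming an Azure-hosted cluster and the value names used in the earlier sketches — all environment variable names are illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical deploy wrapper: authenticate with Azure, fetch cluster
# credentials, then install/upgrade the Helm release, passing the secret
# values from Travis CI environment variables so they never touch source
# control.

az login --service-principal -u "$AZURE_SP_ID" -p "$AZURE_SP_SECRET" --tenant "$AZURE_TENANT"
az acs kubernetes get-credentials --resource-group "$AZURE_RESOURCE_GROUP" --name "$AZURE_CLUSTER_NAME"

helm upgrade themeparks-bot ./themeparks-bot \
  --install \
  --set imageTag="$TRAVIS_COMMIT" \
  --set secret.microsoftAppId="$MICROSOFT_APP_ID" \
  --set secret.microsoftAppPassword="$MICROSOFT_APP_PASSWORD"
```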
With all that in place, the theme park chat bot is deployed to Kubernetes and ready for users to chat to.
I hope this two-part post has helped anyone else looking to get started with Kubernetes/Helm and setting up a deployment pipeline.