Kubernetes Tips | Using scripts inside configmaps

Weston Bassler · Published in ITNEXT · 6 min read · Dec 16, 2022


(Kubernetes logo image: https://cncf-branding.netlify.app/projects/kubernetes/)

Working in the DevOps/SRE world for the better part of the last decade, one thing I have learned is that moving fast is the only way to survive. Things change in the blink of an eye in this industry and within an organization, and your workload never decreases. Obviously we all build for efficiency and quality, but we must also keep speed in mind. In this space, scripting is life; automation simply does not happen without scripts. I cannot think of a week in my entire career where I didn't write some sort of script to pull off some piece of automation. Before you know it, you have hundreds of them to maintain.

How do we manage and maintain all of these scripts to do all of these wonderful things when using an architecture that involves Kubernetes? One way is to create a dedicated repo and build a container image containing all of these scripts, called something like "automations". That isn't a bad idea, given that we should all follow GitOps practices, and I would never argue against that (GitOps always. Always. Always.).

One approach I have found extremely helpful over the last several years of working with Kubernetes is storing these scripts as configmaps. This has allowed me to manage all of the custom scripts needed for automation more easily and to deploy them more quickly. In this article I will walk through a few scenarios that show how to place scripts inside configmaps.

For our K8s environment we will be using KiND, my preferred K8s for local development. Please see the docs for more information on getting started with KiND.
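Spinning up a throwaway cluster is a one-liner; a minimal sketch, assuming the kind and kubectl CLIs are already installed (the cluster name is arbitrary):

kind create cluster --name scripts-demo
kubectl cluster-info --context kind-scripts-demo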

Scenario 1: Executing a simple Task with Bash

apiVersion: v1
kind: ConfigMap
metadata:
  name: slim-shady-configmap
data:
  slim-shady.sh: |
    #!/bin/bash

    echo "Hi!"
    echo "My name is"
    echo "What?"
    echo "My name is"
    echo "Who?"
    echo "My name is"
    echo "Chika-chika"
    echo "Slim Shady"

In the above configmap we are creating a bash script called "slim-shady.sh" that echoes the lyrics of "My Name Is" by Eminem (please note this is not a real use case 🙂 but it is one of my favorite songs of all time). To run this script from the configmap, let's take a look at an example Kubernetes Job below and walk through it.

apiVersion: batch/v1
kind: Job
metadata:
  name: chicka-chicka-slim-shady
spec:
  template:
    spec:
      containers:
      - name: shady
        image: centos
        command: ["/script/slim-shady.sh"]
        volumeMounts:
        - name: script
          mountPath: "/script"
      volumes:
      - name: script
        configMap:
          name: slim-shady-configmap
          defaultMode: 0500
      restartPolicy: Never

In the above Job manifest you can see that we create a volume from the configmap. We mount that volume into the "shady" container at "/script" and then execute "/script/slim-shady.sh", which matches the file name defined in our configmap.

Two things I would like to point out here:

A) Inside the volume, we must specify a defaultMode of at least 0500 (read+execute for the user). This defines the file permissions inside the container when the pod runs. If we do not set it, the configmap is mounted with the default mode of 0644 and the exec fails with a permission denied error like the one below.

 Warning  Failed     7s    kubelet            Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/script/slim-shady.sh": permission denied: unknown

B) We were also able to use an existing, well-known container image instead of creating our own. This takes less effort because we did not have to build a new image with new code; we only needed to deploy the configmap and a Job.
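Assuming the two manifests above are saved locally (the file names here are my own), deploying is just a couple of commands:

kubectl apply -f slim-shady-configmap.yaml
kubectl apply -f slim-shady-job.yaml
kubectl get pods --watch   # wait for the Job's pod to complete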

Let’s have a look at the logs after we apply the Job manifest.

> kubectl logs -f chicka-chicka-slim-shady-h7j5m
Hi!
My name is
What?
My name is
Who?
My name is
Chika-chika
Slim Shady

Scenario 2: Promoting Latest ML Model in MLflow

Now for a more real-world example, though still a pretty simple setup. This follows the same approach as above, but the purpose of the script is to take the latest ML model from the MLflow Model Registry and promote it to the "Production" stage.

apiVersion: v1
kind: ConfigMap
metadata:
  name: promote-model
data:
  promote_model.py: |
    """
    This file is used to demonstrate how to promote the latest version of a Model
    to the "Production" stage so that it can be served out of the Model Registry.
    """
    import os

    import mlflow
    from mlflow.tracking import MlflowClient

    # Set the Remote Tracking Server information
    mlflow.set_tracking_uri("http://mlflow")

    client = MlflowClient()
    model_name = os.getenv("MODEL_NAME")

    # Get the latest version in stage "Staging" and promote it to "Production"
    models = client.get_latest_versions(model_name, stages=["Staging"])
    for model in models:
        name = model.name
        latest_version = int(model.version)
        run_id = model.run_id
        current_stage = model.current_stage
        print(name, latest_version)
        # Transition
        client.transition_model_version_stage(
            name=name,
            version=latest_version,
            stage="Production",
        )

The above code reads the model name from an environment variable (model_name = os.getenv("MODEL_NAME")) to specify which model to promote. To define the model name we can simply pass an env with the name of the model to the container in the Job:

apiVersion: batch/v1
kind: Job
metadata:
  name: promote-model-a
spec:
  template:
    spec:
      containers:
      - name: promote
        image: wbassler/mlflow-utils:0.0.1
        command:
        - python
        args:
        - /mlflow/promote_model.py
        env:
        - name: MODEL_NAME
          value: model-a
        volumeMounts:
        - name: promote
          mountPath: "/mlflow"
      volumes:
      - name: promote
        configMap:
          name: promote-model
          defaultMode: 0500
      restartPolicy: Never

As you can see above, we pass "model-a" as the value of the MODEL_NAME environment variable. When our code executes, it finds the latest "Staging" version of "model-a" in the MLflow Model Registry and promotes it to "Production".
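To try it out, assuming the Job manifest above is saved as promote-model-a.yaml (the file name is mine) and MLflow is reachable at http://mlflow inside the cluster:

kubectl apply -f promote-model-a.yaml
kubectl logs -f job/promote-model-a   # should print the model name and version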

Two things to point out here:

A) Once again we saved a ton of time by using an existing container image and did not have to build our own to add new code. Perhaps all you needed here was to update your configmap via GitOps to deploy.

B) The way we wrote the script also makes it very easy to reuse from other Jobs. In our case, other teams/users/pipelines can reuse the script to promote their own model by simply changing the MODEL_NAME env in the Job manifest, as sketched below.
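For instance, here is a quick-and-dirty way to reuse the exact same manifest for a hypothetical "model-b" (in practice you would probably template this with Kustomize or Helm instead):

sed 's/model-a/model-b/g' promote-model-a.yaml | kubectl apply -f -

Note that this also renames the Job itself to promote-model-b, which conveniently avoids a name collision with the first Job.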

Scenario 3: Creating new directories for new users

In the final scenario, I am going to demonstrate how to execute a script stored in a configmap that reads from a config file which is also stored in a configmap. Before explaining the purpose of doing this, let's have a look at the following configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: create-users-dir
data:
  users.ini: |
    user1
    user2
    user3
    user5
  create-dir.sh: |
    #!/bin/bash
    set -ex
    cat /scripts/users.ini | while read line
    do
      mkdir -pv /nfs/folder/$line
    done

In the above configmap we are creating two files: users.ini is used as a config file that defines a list of user directories, and create-dir.sh is a bash script that creates a folder for each entry in users.ini. The Pod below mounts both files from the configmap and runs the script:

---
apiVersion: v1
kind: Pod
metadata:
  generateName: create-users-dirs
spec:
  containers:
  - name: create-users
    image: centos
    command: ["/scripts/create-dir.sh"]
    volumeMounts:
    - name: script
      mountPath: "/scripts"
  volumes:
  - name: script
    configMap:
      name: create-users-dir
      defaultMode: 0500
      items:
      - key: create-dir.sh
        path: create-dir.sh
      - key: users.ini
        path: users.ini
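Because the Pod uses generateName rather than a fixed name, it must be created with kubectl create (kubectl apply requires a fixed metadata.name). Assuming the manifest is saved as create-users-dirs-pod.yaml (my file name):

kubectl create -f create-users-dirs-pod.yaml
kubectl logs -f "$(kubectl get pods -o name | grep create-users-dirs)"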

When run:

+ cat /scripts/users.ini
+ read line
+ mkdir -pv /nfs/folder/user1
mkdir: created directory '/nfs'
mkdir: created directory '/nfs/folder'
mkdir: created directory '/nfs/folder/user1'
+ read line
+ mkdir -pv /nfs/folder/user2
mkdir: created directory '/nfs/folder/user2'
+ read line
+ mkdir -pv /nfs/folder/user3
mkdir: created directory '/nfs/folder/user3'
+ read line
+ mkdir -pv /nfs/folder/user5
mkdir: created directory '/nfs/folder/user5'
+ read line

Two things to point out with this example:

A) Once again we took advantage of an existing container image and did not have to create our own.

B) With the above example, we can reuse this same code across several different K8s environments by utilizing something like Kustomize's configMapGenerator (and if you want to get real sexy, integrate it with your GitOps system 😺). A minimal sketch follows.
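Here is a rough sketch of generating the same configmap from plain files on disk (file names assumed; note that the generator appends a content hash to the configmap name by default and rewrites references in any resources managed by the same kustomization):

cat > kustomization.yaml <<'EOF'
configMapGenerator:
- name: create-users-dir
  files:
  - create-dir.sh
  - users.ini
EOF
kubectl apply -k .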

Running this as a bare Pod resource is obviously not a good choice here, but I really like this scenario because it can be utilized in many different ways, such as in a Job or an initContainer. A new Job could be run each time a new user is added to users.ini as part of your CI pipeline, or an initContainer could run each time a pod starts to ensure all directories already exist, as sketched below.
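A rough sketch of the initContainer variant, reusing the same configmap (the pod and container names are mine, and the main container is just a stand-in; in a real setup /nfs/folder would be a shared volume, such as an NFS-backed PVC mounted by both containers, rather than the init container's own filesystem):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-user-dirs
spec:
  initContainers:
  - name: create-users
    image: centos
    command: ["/scripts/create-dir.sh"]
    volumeMounts:
    - name: script
      mountPath: "/scripts"
  containers:
  - name: app
    image: centos
    command: ["sleep", "infinity"]
  volumes:
  - name: script
    configMap:
      name: create-users-dir
      defaultMode: 0500
EOF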

Storing scripts or even code in configmaps is not something new, nor something I came up with on my own; several of the Kubernetes applications and Operators I have installed and worked with over the years follow this approach as well. There are pros and cons to it, but hopefully I have given someone out there an idea that saves them some time and brings some ease to a piece of automation.

