Jenkins X — Managing Jenkins

Steve Boardwell · Published in ITNEXT · 13 min read · Aug 7, 2019

Welcome back to my mini-series on Jenkins X. In the last post I discussed how to enable TLS in your preview environments. This time around we will be discussing the Jenkins server and how we, here at Datameer, manage changes and updates to both jobs and configuration.

I’ll admit, it has taken me a little too long to write this post, mostly because there are so many aspects and angles to it that I couldn’t decide on the right structure, often swapping out huge chunks of information only to swap them back again half an hour later. You might even find that it is quicker to apply the ideas in this post than to actually read about them.

Still, no chicken without the egg…

I’ve settled on the following structure trying to make each section as optional as possible, allowing you to pick and choose what’s best for you:

  1. Creating the Jenkins image
    This section will show you how to create a custom docker image and integrate it into your current Jenkins X release.
  2. Bonus Section: How we safely refresh Jenkins X
    As the title suggests, this offers a stress-free-ish approach to updating and upgrading the Jenkins X platform.
  3. Persistent Plugins
    Here I will talk about persisting our plugin updates across pod restarts.
  4. OAuth for Jenkins (manually)
    This section will show you how to enable OAuth for Jenkins using either GitHub or Google. I will also explore “matrix based security” to allow more fine-tuned control over users’ permissions.
  5. OAuth for Jenkins (automated)
    Same as above, but persisted over pod restarts.
  6. Persistent Jobs
    I will present one method of persisting your jobs.

1. Creating the Jenkins Image

The Jenkins X platform already comes with its own Jenkins docker image with a good set of plugins to get you started. However, the plugins are baked into the image, meaning they are as old as the image itself.

Who hasn’t seen this before?

It also means any changes on a live system will be lost if the pod needs to be restarted for any reason. This includes:

  • any plugin updates
  • any additional plugins
  • any changes in the Jenkins configuration

This being the case, we need a way to persist all changes. The default Jenkins image already comes with the JCasC Plugin installed, so this seemed like a good place to start. However, while this plugin will come in useful later in the post, we decided not to use it to install additional plugins: with JCasC, plugins are installed or updated at startup, which greatly increases pod restart times.

We decided instead to create our own image. In order to get you started, there is some information on the Jenkins X website about creating a custom image.

My partner in crime, Ilya Shaisultanov, then came up with a Jenkinsfile to create the custom Jenkins image. Whilst I can’t supply the whole file, here’s basically what happens.

Given a versioning scheme of <upstream-version>-<inhouse-version>:
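A minimal shell sketch of the equivalent steps, assuming that versioning scheme (the variable names and the skaffold invocation are illustrative, not the original pipeline):

#!/usr/bin/env bash
# build-image.sh (sketch) - build and push the custom Jenkins image
set -euo pipefail

UPSTREAM_VERSION="0.0.70"   # version of the upstream jenkinsxio/jenkinsx image
INHOUSE_VERSION="1"         # our own increment on top of it
export VERSION="${UPSTREAM_VERSION}-${INHOUSE_VERSION}"

# replace the placeholder with the real upstream version (keeps a .bak backup)
sed -i.bak "s/REPLACE_UPSTREAM_VERSION/${UPSTREAM_VERSION}/" Dockerfile

# build and push the image as defined in skaffold.yaml
skaffold build -f skaffold.yaml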

The original Dockerfile looked something like this:

FROM jenkinsxio/jenkinsx:REPLACE_UPSTREAM_VERSION
COPY plugins.txt /usr/share/jenkins/ref/custom-plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/custom-plugins.txt

According to the Jenkins X documentation, the plugins.txt can contain any additional custom plugins you may wish to install in the form:

<plugin-name>:<plugin-version>
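For illustration, a plugins.txt might contain entries like these (the versions here are made up):

job-dsl:1.74
google-login:1.4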

So we are left with the following files using skaffold.yaml to build the docker image:

├── Dockerfile      # docker image to build
├── plugins.txt     # any extra plugins to include
└── skaffold.yaml   # skaffold to build and push the image

A few minutes later and we have our image: my-reg/my-company/jenkins-x-image:0.0.70-1

To add this image to your Jenkins X release, we need to overwrite the default helm chart values by adding it to our myvalues.yaml. This will look like:
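The embedded snippet isn’t reproduced here, but a minimal sketch looks like this (the jenkins.Master keys follow the helm chart convention used by the platform at the time, so double-check them against your chart version):

jenkins:
  Master:
    Image: "my-reg/my-company/jenkins-x-image"
    ImageTag: "0.0.70-1"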

Now we have the image in place, we need to refresh the Jenkins X release. This will be covered in the next section.

2. Bonus Section: How we safely refresh Jenkins X

Calling jx upgrade platform can be a daunting task.

Especially so if, like me, you have multiple clusters with different configurations. I am never too sure what the jx binary is going to find in the local ~/.jx/ directory.

Here are some steps you can take to minimise the chance of error:

  • ensure your current directory contains the correct myvalues.yaml
  • create a temp directory and run export JX_HOME=${JX_TEMP_DIR}
  • find the current released version of the platform with:
    jx step helm list | grep jenkins-x
  • update with the current released version, effectively refreshing the platform with your newly created values file:
    jx upgrade platform --batch-mode --verbose --version ${JX_VERSION} --always-upgrade

NOTE: the --always-upgrade option is needed to force the upgrade process even though the version number hasn’t changed.

And here’s a little script to do just that:
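The original gist isn’t reproduced here, but a sketch of such a script could look like this (the grep/awk parsing of the version is an assumption; adjust it to the actual output of jx step helm list):

#!/usr/bin/env bash
# refresh-jx-platform.sh (sketch) - refresh the platform with a clean JX_HOME
set -euo pipefail

# make sure we are in the directory containing the correct myvalues.yaml
test -f myvalues.yaml || { echo "myvalues.yaml not found" >&2; exit 1; }

# use a throw-away JX_HOME so local state cannot interfere
JX_TEMP_DIR=$(mktemp -d)
export JX_HOME="${JX_TEMP_DIR}"

# find the currently released platform version (parsing is illustrative)
JX_VERSION=$(jx step helm list | grep jenkins-x-platform | awk '{print $NF}')

# re-apply the same version, picking up the new myvalues.yaml
jx upgrade platform --batch-mode --verbose --version "${JX_VERSION}" --always-upgrade

rm -rf "${JX_TEMP_DIR}"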

After running the script and updating the platform, you should see the Jenkins deployment using your image:

$ kubectl get deployments.apps jenkins \
    -o 'jsonpath={ .spec.template.spec.containers[0].image }'
my-reg/my-company/jenkins-x-image:0.0.70-1

In the next section we will discuss how to populate the plugins.txt.

3. Persistent Plugins

According to the Jenkins X documentation, the plugins.txt can contain any additional custom plugins you may wish to install. That got me thinking…

“Why not have it contain ALL my plugins?”

Listing all plugins and their respective versions can be achieved by running a small script in the Script Console page (http://JENKINS_URL/script). An alternative method of getting the plugins is mentioned on the jenkinsci/docker GitHub page, but I prefer this way.

Our console script looks like this:

Jenkins.instance.pluginManager.plugins.stream().sorted()
    .collect(java.util.stream.Collectors.toList())
    .each { plugin ->
        println("${plugin.getShortName()}:${plugin.getVersion()}")
    }
x = ""

NOTE 1: the x = "" at the end is a little hack to stop groovy printing the result of the last expression. Leave it out to see what I mean.
NOTE 2: The plugin list has been sorted to make git diffs easier to read.

This can also be automated using the following script:

Script to list the Jenkins plugins for a remote server
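The script itself isn’t embedded here, but the same listing can be fetched remotely via the script console API. A sketch, assuming the groovy snippet above is saved as list-plugins.groovy and that JENKINS_USER and JENKINS_API_TOKEN are set in the environment:

#!/usr/bin/env bash
# refresh-plugins.sh (sketch) - regenerate plugins.txt from a running Jenkins
set -euo pipefail

JENKINS_URL="https://jenkins.example.com"   # assumption: your server URL

# POST the groovy script to the script console endpoint and save the output
curl -sSf --user "${JENKINS_USER}:${JENKINS_API_TOKEN}" \
    --data-urlencode "script=$(cat list-plugins.groovy)" \
    "${JENKINS_URL}/scriptText" > plugins.txt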

We are now in the position to update the plugins.txt with all the actual plugins from our Jenkins server.

NOTE: The actual process of updating the plugins in the UI is still a manual one.

In my opinion, however, this is correct: updating plugins should be a conscious decision made by a Jenkins admin. If you have a staging server, for example, you can always update the plugins there first, test them, and then use the staging server to provide the list for the new production image.

Let’s test the new plugins.txt, shall we?

  • update your Jenkins plugins through the UI, maybe add a new plugin
  • update your plugins.txt using the refresh-plugins.sh
  • build a new docker image
  • update the myvalues.yaml with the new image version
  • use refresh-jx-platform.sh to update your Jenkins X release
  • restart the Jenkins pod by:
    kubectl scale deployment jenkins --replicas 0
    kubectl scale deployment jenkins --replicas 1

Success! My plugins are up to date!

So after adding the refresh scripts and myvalues.yaml, we now have the following files in place:

├── my-jenkins-image
│   ├── Dockerfile
│   ├── plugins.txt
│   ├── refresh-plugins.sh
│   └── skaffold.yaml
└── my-jx-platform
    ├── myvalues.yaml
    └── refresh-jx-platform.sh

Now to tackle the problem of authentication and permissions…

4. OAuth for Jenkins (manual steps)

First, a little introduction as to what OAuth is actually about.

The OAuth 2.0 specification has more to do with authorisation than authentication: it allows a third-party service provider, in this case Google or GitHub, to grant a user access to a specific resource, in this case Jenkins. The ins and outs of OAuth 2.0 are beyond the scope of this post but, as a starter, here is an excellent explanation on Stack Overflow from Takahiko Kawasaki.

This section will cover OAuth for both Google and GitHub. Regardless of provider, the steps for this section will basically be the same:

  • install the appropriate Jenkins plugin
  • create the OAuth Client Application
  • configure the plugin using the OAuth app’s clientID and clientSecret
  • configure the authorisation strategy

Pro-Tip: If you make a mistake in any of these steps and don’t know how to fix it, simply restart the Jenkins pod and you will get your original configuration again since nothing has been persisted...yet!

Google Login Plugin

There is little documentation, but it works well enough. One minor disadvantage of using this plugin over GitHub is that Google Groups are not supported. This means all non-standard permissions need to be applied on an individual basis. This will be explained further in the authorisation strategy section below.

Here is an excellent post explaining how to install and configure the Google Login Plugin.

No need to re-invent the wheel, hey 🤷‍♂?

GitHub Authentication Plugin

This plugin seems a little more mature and supports both Teams and GitHub Enterprise. I would recommend using this plugin if you have the choice.

Again, here is another excellent post explaining how to install and configure the GitHub OAuth Plugin.

Configuring the “matrix based security”

Once the plugins have been configured, we can customise our “matrix based security” policy. The main difference to consider here is the authorisation method, since this dictates how Jenkins sees the users and therefore what we need to place in the matrix rules.

“Google OAuth” grants users access based on their email address, so any non-standard matrix rules will need to use the person’s email address, like this:
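The screenshot of our matrix isn’t reproduced here; as a made-up illustration, the rows might look like:

authenticated       : Overall/Read
alice@example.com   : Overall/Administer
bob@example.com     : Job/Build, Job/Read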

NOTE: Authenticated users represent those authenticated by your Google OAuth Client Application.

“GitHub OAuth” on the other hand grants access based on:

  • individual usernames
  • global organisations
  • teams within an organisation

This allows for a much easier distribution of roles based on teams or organisations rather than just individuals. Here is an excerpt from the plugin’s website:

NOTE: Again, authenticated users represent those authenticated by your GitHub OAuth Client Application.

❗️❗️❗️ — B E W A R E — ❗️❗️❗️
Authenticated users could mean anyone logged into GitHub, so make sure you test with an external user not belonging to your organisation. You can remove all permissions from Authenticated users and restrict access to your GitHub Organisation or users only.

Finished? Does it work? Congratulations!

You have now set up OAuth for your Jenkins instance and configured your access and security permissions. Let’s quickly recap what we have done. We have:

  • installed the appropriate Jenkins plugin
  • created an OAuth Client App
  • used the clientId and clientSecret to configure the plugin
  • determined a set of rules, let’s call it the authz_strategy_config, for the “matrix based security” to fine tune access rights and permissions

5. OAuth for Jenkins (automated)

Now we have the right setup in place, let’s make all that persist over pod restarts.

The next steps will assume we are using the Google OAuth method. Any changes needed for GitHub OAuth will be noted at the bottom of each section.

Persisting the OAuth Plugin

We already discussed persisting the plugins earlier in this post so, if you’re happy with the state of your plugins, simply:

  • take the list
  • update your plugins.txt for the docker image

Applying the Security Configuration

Now, the official Jenkins docker image provides you with the possibility to add your own custom configuration by placing groovy scripts in the /usr/share/jenkins/ref/init.groovy.d directory.

We will use this feature to add two groovy scripts (the scripts are numbered to ensure the order of execution):

  • scripts/001SecurityRealm.groovy.override
    to set the Jenkins security realm
  • scripts/002AuthStrategy.groovy.override
    to configure the authorisation strategy

A quick note on the .override suffix: the default docker image behaviour is not to clobber any changes made via the UI. The suffix is needed to ensure that any same-named files on the file system are always overwritten by the files in the docker image, since we shouldn’t be making changes in the UI, right? ;-)
See this comment for more information.

Our Dockerfile now looks like this:

FROM jenkinsxio/jenkinsx:REPLACE_UPSTREAM_VERSION

# add groovy scripts to configure oauth
COPY scripts/*.override /usr/share/jenkins/ref/init.groovy.d/

COPY plugins.txt /usr/share/jenkins/ref/custom-plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/custom-plugins.txt

Let’s have a look at those scripts.

Security Realm Script

The script expects a file /etc/jenkins-secrets/google-client-auth. The file contains two lines: the clientID on the first line and the clientSecret on the second.
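The script itself isn’t embedded here; below is a minimal sketch, assuming the google-login plugin is installed and the credentials file has the two-line format just described (the domain restriction is an assumption, adjust or drop it):

import jenkins.model.Jenkins
import org.jenkinsci.plugins.googlelogin.GoogleOAuth2SecurityRealm

// read clientID (line 1) and clientSecret (line 2) from the mounted secret
def lines = new File('/etc/jenkins-secrets/google-client-auth').readLines()
def clientId = lines[0].trim()
def clientSecret = lines[1].trim()

// 'example.com' restricts logins to one domain - an assumption, not required
def realm = new GoogleOAuth2SecurityRealm(clientId, clientSecret, 'example.com')

def jenkins = Jenkins.getInstance()
jenkins.setSecurityRealm(realm)
jenkins.save()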

GitHub OAuth:
  • script can be found here.
  • differs in:
    • the file name expected (…/github-client-auth)
    • the SecurityRealm used (import and constructor)

Matrix Based Security Script

Kudos to Sam Gleske at this point for his work on GitHub. I highly recommend that anyone interested have a look through his scripts if they get the chance.

I extended the configure-matrix-authorization-strategy.groovy to allow configuration through an external file. The script itself is fairly self-explanatory and expects a file /etc/jenkins-secrets/authz-strategy-config, which is then used to configure the “matrix based security”.

Creating the docker image and credentials

Here are the files needed to create our custom Jenkins docker image:

├── Dockerfile
├── plugins.txt
├── refresh-plugins.sh
└── scripts
    ├── 001SecurityRealm.groovy.override    # <- Google or GitHub
    └── 002AuthStrategy.groovy.override

Before using this image, we also need to create the secrets containing the necessary credentials. I chose to create one secret to hold both sets of credentials but you are free to set these up as you wish.

Let’s create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: jenkins-security-secrets
data:
  authz-strategy-config: BASE64_ENCODED_AUTH_STRATEGY
  google-client-auth: BASE64_ENCODED_GOOGLE_FILE

where the files in question look like (example only):
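The embedded examples aren’t reproduced here; illustratively, they could look like this (all values are fake, and the authz-strategy-config format depends entirely on how the extended script parses it):

google-client-auth:

1234567890-abcdefg.apps.googleusercontent.com
my-fake-client-secret

authz-strategy-config:

authenticated       Overall/Read
alice@example.com   Overall/Administer
bob@example.com     Job/Read Job/Build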

And finally, with both the image and secret in place, we can make the necessary changes to the myvalues.yaml. We’ll need to:

  • update the docker image
  • mount the secret into the Jenkins pod

After adding the persistence blocks, our new myvalues.yaml looks like:

myvalues.yaml with the jenkins-security-secrets mounted
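The embedded file isn’t reproduced here; a sketch, assuming the chart’s Persistence.volumes / Persistence.mounts convention of the time (verify the keys against your chart version):

jenkins:
  Master:
    Image: "my-reg/my-company/jenkins-x-image"
    ImageTag: "0.0.70-1"
  Persistence:
    volumes:
      - name: jenkins-security-secrets
        secret:
          secretName: jenkins-security-secrets
    mounts:
      - name: jenkins-security-secrets
        mountPath: /etc/jenkins-secrets
        readOnly: true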

Restarting the Jenkins pod, you should be greeted with a login-free, cool-as-you-like OAuth 2.0 experience. Now to check the authorisation strategy: ask a non-admin colleague to visit the site. Whether they are authorised, and what their permissions on the server will be, should all follow your self-made authorisation strategy.

OAuth, Security Matrix, and up-to-date plugins! Result!

Next up, the jobs!

6. Persistent Jobs

Before we get into the implementation of things, I’d like to briefly explain the thought process behind our decisions. We were not interested in archiving the build results in Jenkins; for that, we really should be able to trust the various cloud providers to take care of our Persistent Volume Claim. And anyway, we have release artifacts to show which runs were successful :-).

What we do want, however, is to:

  • have all our jobs and projects saved in a git repository
  • be able to create or update jobs on demand
  • for this to be available upon Jenkins startup

Job DSL Plugin

Although Jenkins X has the jx import command, there was no way to back up the created Jenkins jobs. Since we wanted to manage the jobs ourselves, we decided to use the Job DSL Plugin. I will not go into the specifics of the plugin and how to use it; there are plenty of examples, especially on the plugin’s wiki page.

I used the gradle project example as a template and placed our own jobs in it, including the seed-job script.

A Jenkinsfile was added to run the tests on PRs and process the Job DSL scripts on master. Since I didn’t want to give the DSL scripts full admin rights (see below), I needed to add a little try { … } catch { … } to allow for the approval of the scripts in those cases where they had been altered.

The Jenkinsfile for the seed-job looks similar to this:
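The actual file isn’t reproduced here; a minimal sketch of the shape described above (the stage names, the gradle test task, and the DSL target path are assumptions):

pipeline {
    agent any
    stages {
        stage('Test') {
            // run the Job DSL tests on PR branches
            when { not { branch 'master' } }
            steps {
                sh './gradlew test'
            }
        }
        stage('Process Job DSL') {
            // only master may (re)create the jobs
            when { branch 'master' }
            steps {
                script {
                    try {
                        jobDsl targets: 'jobs/**/*.groovy'
                    } catch (err) {
                        // without full admin rights, altered scripts must be
                        // approved in the UI before they will run
                        echo "Job DSL scripts may need approval: ${err}"
                        currentBuild.result = 'UNSTABLE'
                    }
                }
            }
        }
    }
}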

Now that I had the Jenkinsfile, it was just a matter of knowing where to place it.

Jenkins CasC Plugin

As I mentioned earlier in the post, the Jenkins CasC Plugin is installed by default in Jenkins X. This is a great place to start. Here, from the CasC website:

If you do not set the CASC_JENKINS_CONFIG environment variable, the plugin will default to looking for a single config file in $JENKINS_ROOT/jenkins.yaml.

Easy then, all we need to do is create the file and use it to configure our server. And it gets even better thanks to this PR from Ilya Shaisultanov. We are now able to add the additional config file directly through the helm chart.

Now we can basically configure everything with CasC. Have a look at the demos page for an idea of what’s possible. The sky is the limit, and the options (and questions) endless:

  • do you put all your jobs in the casc file?
  • do you configure the authorizationStrategy in JCasC?
  • how about the github-oauth, google-oauth?
  • do you need a seed job, or can you live with running job-dsl from JCasC directly?
  • etc, etc

NOTE: We will eventually be moving our init scripts from the docker image to the CasC file, but that is something for the future, and I’m still not sure how the plugin handles secrets in the configuration yaml that are not in Vault.

Here’s another nice post on using JCasC by the way.

We decided on using a single seed job instead of putting all the jobs in a JCasC file embedded in the myvalues.yaml. That would get messy and wouldn’t really allow for PRs and testing the way a real job would.

So, that being said, our new myvalues.yaml now looks like:
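Again, the embedded file isn’t shown here; a sketch of the idea, assuming the chart exposes JCasC config scripts under jenkins.Master.JCasC (the key names and repository URL are assumptions):

jenkins:
  Master:
    JCasC:
      enabled: true
      ConfigScripts:
        seed-job: |
          jobs:
            - script: >
                multibranchPipelineJob('seed-job') {
                  branchSources {
                    git {
                      remote('https://github.com/my-company/jenkins-jobs.git')
                    }
                  }
                }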

Now when we refresh the Jenkins X platform…

  • we will see our multi-branch seed-job pre-installed
  • branch indexing will run, triggering the master branch run
  • which then creates all our jobs.

Plugins, security, and jobs!

All automated and all persisted across Jenkins pod restarts!

Summary

We have covered a lot of ground in this post but I hope I was able to break down the relevant parts into small enough pieces so as to give you an idea of what’s possible.

Jenkins X does a great job bringing together all the right components to create a fully functional CI/CD platform. However, with the project moving so fast, it is also quite volatile, especially since there are so many moving parts to manage.

But…with a little tweaking here and there it is still possible to maintain a stable platform whilst still having it customised to your needs.

The next post will focus on the Nexus instance within Jenkins X, in particular how to add your own custom repositories.

Until then…
