Shift your CI scripts to docker build

Ignacio Millán · Published in ITNEXT · Mar 25, 2019

Typical scenario: your team maintains dozens of Jenkinsfiles, .gitlab-ci.yml files, or whatever your CI system uses, each one specific to the needs of its project. You have tried to reuse these continuous integration scripts from one repo to another, but that is hard because each project has its own tech stack, versions, and dependencies on other tools.

There is more: you dream of being able to test your CI pipeline locally instead of debugging it on the CI server.

Am I right? Then multistage Docker builds are meant for you.

This technique has been around for years, but users usually don’t realize its full potential.

The idea is simple: merge multiple Dockerfiles into a single one, where each stage performs a different task in the build process.

Let’s walk through an example. I have included some extra complexity to demonstrate advanced concepts, but for now keep the focus on the essence. Please refer to the official docs for a kickstart; this post aims to be a demonstration of CI capabilities.

The first stage runs the sonar analysis:

FROM newtmitch/sonar-scanner AS sonar
COPY src src
RUN sonar-scanner

The next stage installs dependencies and builds the app:

FROM node:11 AS build
WORKDIR /usr/src/app
COPY . .
RUN yarn install && \
    yarn run lint && \
    yarn run build && \
    yarn run generate-docs
LABEL stage=build

The next one, unit tests:

FROM build AS unit-tests
RUN yarn run unit-tests
LABEL stage=unit-tests

The fourth pushes the docs to S3:

FROM containerlabs/aws-sdk AS push-docs
ARG PUSH_DOCS=false
COPY --from=build /usr/src/app/docs docs
# Upload only when explicitly enabled via --build-arg PUSH_DOCS=true
RUN if [ "$PUSH_DOCS" = "true" ]; then aws s3 cp --recursive docs s3://my-docs-bucket/; fi

Finally, the last stage is the only one that will be reflected in the resulting image. It uses a smaller base image and has only the required artifacts:

FROM node:11-slim
EXPOSE 8080
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/node_modules node_modules
COPY --from=build /usr/src/app/dist dist
USER node
CMD ["node", "./dist/server/index.js"]

The Jenkinsfile becomes much simpler (example for Jenkins on Kubernetes):

#!/usr/bin/env groovy

podTemplate(label: "example", name: "example",
  containers: [
    containerTemplate(name: 'dind', privileged: true,
      image: 'docker:18.06-dind', command: 'dockerd-entrypoint.sh'),
    containerTemplate(name: 'docker', image: 'docker:18.06',
      command: 'cat', ttyEnabled: true)
  ],
  envVars: [
    envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375')
  ]
){
  node('example'){
    container('docker'){
      stage('checkout'){
        checkout scm
      }
      stage('Docker build + push'){
        sh """#!/bin/bash
          docker build -t test --build-arg PUSH_DOCS=true .
          docker push test
        """
      }
      stage('deploy'){
        .......
      }
    }
  }
}

It can be used for nearly every project!

Main advantages:

  • It is reusable from one CI system to another (e.g., migrating from Jenkins to GitHub Actions; see the sketch after this list). That’s especially convenient for open source projects.
  • You can test it locally just by running docker build.
  • Everything related to building and testing the source code lives in the Dockerfile, so the CI scripts stay decoupled from the source code.
  • Less room for human error: every step is unavoidably executed inside an unprivileged Docker container. You can even avoid the Docker daemon altogether by using tools like Kaniko.
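To illustrate that portability, here is a rough sketch of what the equivalent .gitlab-ci.yml could look like, assuming GitLab’s standard docker-in-docker service (the job name and image tags are placeholders):

build:
  image: docker:18.06
  services:
    - docker:18.06-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker build -t test --build-arg PUSH_DOCS=true .
    - docker push test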

Summarizing: everything specific to the project is self-contained, and the CI scripts can be reused from one repository to another, making the infrastructure simpler, cheaper, and more maintainable. Give it a chance, and shift as many workloads as you can into the Dockerfile!

Extra Tips

There are some caveats, but those are easy to overcome. I’ll tell you about two of them:

Skipping specific steps on demand

For example, I have included a stage that pushes the generated docs to a bucket on S3. This is only useful when the build runs in my CI system, where I provide credentials with write access to this bucket.

To achieve that, I set a build argument with the ARG command. By default it is false, but in my CI server I run docker build --build-arg PUSH_DOCS=true, and then the aws s3 cp command gets executed.
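In practice it looks like this (the image name is a placeholder):

# Local build: the upload is skipped, since PUSH_DOCS defaults to false
docker build -t myapp .

# CI build: the same Dockerfile, but now the docs are pushed to S3
docker build -t myapp --build-arg PUSH_DOCS=true .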

Exporting test reports or any other artifact

The most significant caveat of executing everything in docker build is that the artifacts remain inside the intermediate Docker images. For example, it’s useful for me to have the test results in the Jenkins workspace to generate stats.

It’s easy to extract any artifact from any intermediate stage by using labels.

I’ve labeled the unit tests stage with stage=unit-tests. So after docker build, I can run a little script to get the file test-results.xml:

docker cp $(docker create --name temp $(docker image ls --filter label=stage=unit-tests -q | head -n 1)):/usr/src/app/test-results.xml . ; docker rm temp

It uses docker image ls to get the ID of the image generated for this stage, and docker cp to copy the file out of a temporary container.

A better solution is to use more labels to distinguish your specific build from other similar builds.
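For instance, a sketch assuming your CI exposes a build number (BUILD_ID is a hypothetical build arg; BUILD_NUMBER is the variable Jenkins provides):

# In the Dockerfile, declare the arg inside the stage and bake it into the label:
#   ARG BUILD_ID=local
#   LABEL stage=unit-tests build=$BUILD_ID

# At build time, pass the CI build number and filter on both labels:
docker build --build-arg BUILD_ID=$BUILD_NUMBER -t test .
docker image ls --filter label=stage=unit-tests --filter label=build=$BUILD_NUMBER -q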

You reached the end, so here is one last piece of advice: use BuildKit. 😉

ignaciomillan.com


DevOps, SRE Freelancer. At the intersection of business operations and tech. Fan of #containers #K8S #cloud #data-analytics