
Docker & Makefile | X-Ops — sharing infra-as-code parts

Anthony Potappel · Published in ITNEXT · Feb 19, 2019 · 6 min read


In our life as X-Ops — i.e. roles like DevOps, CloudOps, GitOps, SysOps — we focus on building infra-as-code. We frequently use Docker to build and test, commit and pull the code to/from repositories, and when all tests pass, infra moves into production — completely automated.

A big opportunity now is scaling and integrating our X-Ops teams. Two goals on that shortlist are: 1. low (preferably zero) barriers to entry, helping more colleagues come along, and 2. easy-to-follow build and test habits, getting parts into production faster and improving re-use.

In this article I focus on a few easy technical practices — namely applying Docker, a Makefile and getting UIDs right — that, in my experience, put us much closer to those goals.

Using a Makefile

Docker brings us the automation to build systems fast and effectively, and we can even step up to running services with docker-compose.

However, in the majority of in-production cases there is a separate container platform. The container we build, test and run locally is part of a larger context. One issue that often pops up when you share Docker code parts within a team, or with the world, is that you also need to explain how your container needs to be run.

Personally, I want all my projects to have something like this working:

git pull && make test && make build && make deploy

It is extremely powerful to toss around a one-liner in this format. You can easily ask a colleague to take your product (or component) for a spin, without having them plough through a big README. It either works or it does not.

Enter the good old Makefile. A 40-year-old classic, still too young to retire, it is now being reborn within automation projects (a nice short read on that: makefile-for-lazy-developers).

This is my version of a Makefile that plays nice with Docker:
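The gist itself is embedded in the original post; as a stand-in, here is an illustrative sketch of what such a Makefile could look like, assuming a docker-compose service named main and the targets used further down. It is not the exact file from the repository.

# illustrative sketch, not the exact file from the repository
# export the host user and UID so docker-compose can read them
export HOST_USER ?= $(shell id -un)
export HOST_UID ?= $(shell id -u)

# assumed service name in docker-compose.yml
SERVICE_TARGET := main

# overridable at the command line: make shell cmd="ls /" user=root
cmd ?= bash
user ?= $(HOST_USER)

# first target is the default, so a bare `make` drops you into a shell
shell: build
	docker-compose run --rm --user $(user) $(SERVICE_TARGET) $(cmd)

build:
	docker-compose build $(SERVICE_TARGET)

rebuild:
	docker-compose build --no-cache $(SERVICE_TARGET)

test: build
	docker-compose run --rm $(SERVICE_TARGET) sh -c 'echo "replace this with your project tests"'

clean:
	docker-compose down --remove-orphans --rmi local

.PHONY: shell build rebuild test clean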

If you are not familiar with Makefiles, this may look intimidating. Before you start, you will need Docker. Docker install instructions are here: Ubuntu, Mac or Windows. I am also assuming you know how to open a shell, install a git client and run make. A Linux distribution is recommended, but I have also tested this on Mac. If you are set, here are some commands to try:

# download our files
git clone https://github.com/LINKIT-Group/dockerbuild
# enter directory
cd dockerbuild
# build, test and run a command
make build test shell cmd="whoami"
# my favorite for container exploration
make shell
# shell-target is the default (first item), so this also works:
make cmd="whoami"
make cmd="ls /"
# force a rebuild, test and cleanup
make rebuild test clean

Notice I did not use the "&&" characters anywhere? You can give a single make command a list of targets (e.g. build, test, clean); make stops as soon as a target fails.

In the Makefile, you can customise the one-liners that run for a specific target. Personally, I always like to add "--rm" to a run statement, to prevent ending up with a trail of abandoned containers. If you read the Makefile carefully, you will see more interesting things to try out.

You will want to customise the test target and make testing standard practice. When your build part gets integrated into CI/CD pipelines (eventually, it will), it is incredibly helpful for "make test" to exist.

From here, it is only one more step to add a deploy target, pushing the container to Docker Hub or a similar repository. Continuous deployment is where things get really exciting. Unfortunately, it also requires much more writing space, which is why I save it for a follow-up article (or two).

Before we move in the direction of deployments, we need to cover one more basic configuration practice. Noticed the HOST_USER and HOST_UID variables in the Makefile? These are filled at runtime, and re-exported for docker-compose to read. Which leads me to the next chapter on UIDs.

Dealing with UIDs

Docker defaults a container to the root user when a UserID (UID) is not configured. When you deal with production systems, it is required practice to configure users correctly. You can read about it in Docker's best-practices list, and this piece from K8s covers it nicely as well.

When you run tests locally on a Mac, root is mapped to your own user, so everything works fine. Production platforms should be configured for user isolation, but that is not always the default. If, for example, you run your tests on a Linux system (without re-mapping), you will likely stumble upon the issue that files generated by Docker are owned by root.

You will also run into issues when you have multiple users on one system. Even when you do not share a system, running parallel tests requires the same separation. And how would you even pass credentials (e.g. by mounting volumes such as ~/.ssh and ~/.aws) without a correct UID setup? The latter is a common pattern when dealing with infra deployments; a sketch of such a volume mount follows below.
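For illustration (the service name and paths are assumptions, not part of the original project files), a matching UID lets you mount credential directories read-only into the container:

# excerpt from a docker-compose service definition (illustrative)
services:
  main:
    volumes:
      - ~/.ssh:/home/${HOST_USER:-nodummy}/.ssh:ro
      - ~/.aws:/home/${HOST_USER:-nodummy}/.aws:ro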

As you become a more intensive Docker consumer, your team grows and things get complex, and eventually you will either want or need to embed UID separation in all of your projects.

Fortunately, configuring the UID early on is (relatively) easy. While it took me some practice to get a good setup, I now have a template that (assuming use of the Makefile) does it all automatically, at near-zero effort.

Below is a copy of a docker-compose file to use:
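The embedded gist is not reproduced here; the following is an illustrative sketch under the same assumptions as the Makefile sketch above (one service named main, built from the Dockerfile in the project root):

# illustrative docker-compose.yml sketch
version: '3.4'
services:
  main:
    image: dockerbuild-${HOST_USER:-nodummy}
    build:
      context: .
      dockerfile: Dockerfile
      args:
        HOST_USER: ${HOST_USER:-nodummy}
        HOST_UID: ${HOST_UID:-4000}
    # run as the host user, so files created in mounted volumes are not owned by root
    user: ${HOST_USER:-nodummy}
    working_dir: /home/${HOST_USER:-nodummy}
    command: bash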

Some magic is in these variable statements:

${HOST_USER:-nodummy}
${HOST_UID:-4000}

This copies the variables from your runtime and, if they do not exist, defaults to "nodummy" and "4000" respectively. If you don't like defaults, just do this:

${HOST_USER:?You forgot to set HOST_USER in .env!}
${HOST_UID:?You forgot to set HOST_UID in .env!}

A note on the "HOST_" prefix: I steer away from using USER and UID directly. These variables are not guaranteed to be available in a runtime. USER is often available in the shell, but UID is typically a shell variable that is not exported, so Docker will not pick it up. Having a separate naming scheme prevents accidents, and allows flexibility in configuring automation pipelines.
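A quick way to check this on a typical bash shell:

# UID is set by the shell, but usually not exported to the environment
echo "$UID"          # prints your numeric user id
env | grep '^UID='   # typically prints nothing, so docker-compose cannot read it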

Dockerfile looks like this:
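Again, the embedded gist is not reproduced here; this sketch shows what a multi-stage Alpine Dockerfile along those lines could look like, with the user created from the HOST_USER and HOST_UID build arguments:

# --- base stage: re-usable build component ---
FROM alpine:3.9 AS base
RUN apk add --no-cache bash

# --- user stage: image prepared for a specific run ---
FROM base AS user
ARG HOST_USER=nodummy
ARG HOST_UID=4000

# create a non-root user matching the host user, so file ownership lines up
RUN adduser -D -u ${HOST_UID} ${HOST_USER}

USER ${HOST_USER}
WORKDIR /home/${HOST_USER}
CMD ["bash"]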

Here we build a small Alpine container (it's just 10MB, micro-services FTW!) with bash added for fun and practice. We apply the concept of so-called multi-stage builds to keep base images (re-usable build components) separated from user images (images prepared for a specific run).

All the variables are set automatically when using the Makefile, as explained in the previous chapter. This Dockerfile also works fine without a Makefile, though: users can still configure the variables in their own runtime or in a separate env-file. Personally, I just use Makefiles because I enjoy the zero effort.

One last thing. If you are in development mode, you will probably hit an obstacle at some point that requires you to troubleshoot as user root. Simply type:

make shell user=root

Wrapping it up

I hope you found this article useful. The full code is available on GitHub.

Now, when we build a new service, we can simply kickstart the project by copy-pasting these files — Makefile, Dockerfile and docker-compose.yml — modify where needed, and have our new service up and running in no time.

Using the Docker + Makefile process not only makes your parts portable, but also re-usable by your team (or the world), and allows them to ship with testing rules attached. Welcome to the world of X-Ops!

Happy coding, and see you in the next article!


Seasoned IT practitioner — passionate about programming cloud environments — soft spot for AWS — love to connect: https://linkedin.com/in/anthonypotappel/