Elixir + Kubernetes = 💜 (Part 1)

Drew Cain @groksrc
Published in ITNEXT · 8 min read · Jul 26, 2019


How To Set Up an Auto-Scaling Elixir Cluster with Elixir 1.9

This is Part 1 of a three-part series on how to create and deploy an Elixir application that will automatically scale on Kubernetes. Part 2 and Part 3 are also available. If you just want to see the source code, it’s available here: https://github.com/groksrc/el_kube, and if you just want a summary of the commands, the slide deck from the original talk is available here.

This series is designed to show you how to create an auto-scaling Elixir cluster using Elixir 1.9 and Kubernetes. What does that mean? Well, it means that the application will start, automatically connect into an Erlang cluster, and then automatically add and remove Erlang nodes as the Kubernetes (k8s) configuration changes.

In Part 1 we’ll look at creating the application itself. Part 2 will show you how to Dockerize the application and confirm that the container works. And finally in Part 3 we will launch the application on Kubernetes using minikube and demonstrate how it automatically scales as the deployment changes. Let’s get started!

Part 1 — Creating the App

Before you begin you’ll want to make sure that you have Elixir 1.9 installed, along with OTP 22. You’ll also want to make sure that you’re using the current version of Phoenix which is v1.4.9 at the time of this writing. Also, make sure that minikube is installed and kubectl is configured to use the minikube context.
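If you want to double-check your toolchain before starting, these commands will confirm the versions. The output shown is roughly what I see locally; yours may differ slightly:

$ elixir --version
Erlang/OTP 22 [erts-10.4.4] ...
Elixir 1.9.1 (compiled with Erlang/OTP 22)

$ mix phx.new --version
Phoenix v1.4.9

$ kubectl config current-context
minikube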

Our application is going to be named el_kube and as an aside, all of this code is available in my repo at https://github.com/groksrc/el_kube. I start with an empty project at master, then at each branch I change only the files necessary to proceed to the next step. I originally presented this work at a Simpli.fi tech talk so you should be able to issue the commands and make the file changes straight from the slide deck as well. If you run into any trouble, send me a reply and I’ll try to help out.

00: New Project

From the working directory on your development machine let’s start a new Phoenix project. I called mine el_kube because, you know, Elixir and Kubernetes mashed up.

$ mix phx.new el_kube

Be sure to say Yes when asked to “Fetch and install dependencies? [Yn]”

01: Open the project

Now let’s switch into the project folder and open our editor. All of the following commands are issued from the project folder’s root directory. Also, I’m using VS Code with the ElixirLS plugin, so if you’re working with git you will probably want to add the .elixir_ls/ folder to your .gitignore.

$ cd el_kube && code .

02: Edit mix.exs

Once the project is open in your editor pop over to the mix.exs file that’s at the project root. We’re going to make three changes here.

First, update the elixir version in the project keyword list from 1.5 to 1.9. Elixir 1.9 is required for this project because it’s the first version to ship the built-in mix release tooling we’ll use below.

elixir: "~> 1.9",
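For context, the generated project/0 function will look something like this once updated (abridged; your other keys stay exactly as generated):

def project do
  [
    app: :el_kube,
    version: "0.1.0",
    elixir: "~> 1.9",
    # ... compilers, elixirc_paths, aliases, etc. as generated ...
    deps: deps()
  ]
end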

Next, we’re going to add a dependency on peerage and set peerage to boot as an extra_application on startup. Like it says in its readme, Peerage helps your erlang nodes find each other. We’ll be using it to provide the secret sauce for getting this to work.

def application do
  [
    mod: {ElKube.Application, []},
    extra_applications: [:logger, :runtime_tools, :peerage]
  ]
end

And...

defp deps do
  [
    {:phoenix, "~> 1.4.9"},
    {:phoenix_pubsub, "~> 1.1"},
    {:phoenix_ecto, "~> 4.0"},
    {:ecto_sql, "~> 3.1"},
    {:postgrex, ">= 0.0.0"},
    {:phoenix_html, "~> 2.11"},
    {:phoenix_live_reload, "~> 1.2", only: :dev},
    {:gettext, "~> 0.11"},
    {:jason, "~> 1.0"},
    {:plug_cowboy, "~> 2.0"},
    {:peerage, "~> 1.0"}
  ]
end

Once the three changes are in mix.exs you can save it and close it up. Don’t forget to run $ mix deps.get if your editor doesn’t do that for you.

03: Initialize the release

Next, we’re going to use the new mix release.init command to generate some template files for us. These template files are used to help generate the scripts that are executed when the application starts up.

$ mix release.init
* creating rel/vm.args.eex
* creating rel/env.sh.eex
* creating rel/env.bat.eex

This command generates three different Elixir templates for you in the rel directory. We’ll ignore env.bat.eex. I’m not on Windows, but if I were I would need to apply the relevant changes there. Instead, I’m going to update env.sh.eex to set some environment variables for me at startup.

04: Update rel/env.sh.eex

Inside rel/env.sh.eex we’re going to uncomment the lines that export RELEASE_DISTRIBUTION and RELEASE_NODE. Then change the localhost IP to an environment variable that will be passed dynamically. Make the following changes to the file:

export RELEASE_DISTRIBUTION=name
export RELEASE_NODE=<%= @release.name %>@${HOSTNAME}

Setting the RELEASE_DISTRIBUTION environment variable to name configures the Erlang VM so it can connect to other nodes in the cluster using the long name format. RELEASE_NODE is the name of this Erlang node, also in the long name format. See the mix release task docs for more details on configuring these environment variables.
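To make the long name format concrete, here’s a sketch of what the node name ends up looking like at runtime, assuming the container was handed the hypothetical IP 10.1.0.4 as its HOSTNAME:

iex(el_kube@10.1.0.4)1> Node.self()
:"el_kube@10.1.0.4"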

For our purposes, we’re going to dynamically pass in the IP address that k8s assigns to the container as the HOSTNAME but you can use a valid cluster DNS name here as well. Save that file up and we’re on to the next.

05: Remove config/prod.secret.exs

This file won’t be used, so we’re going to drop it.

$ rm config/prod.secret.exs

06: Create config/releases.exs

Instead, we’re going to use the new releases.exs file to help us dynamically configure our application at startup. Let’s create the file and open it in our editor.

$ touch config/releases.exs

07: Edit config/releases.exs

Now we’re ready to set up our dynamic configuration. Paste this into the file:

import Config

service_name = System.fetch_env!("SERVICE_NAME")
db_url = System.fetch_env!("DB_URL")
secret_key_base = System.fetch_env!("SECRET_KEY_BASE")
port = System.fetch_env!("PORT")

config :el_kube, ElKube.Repo, url: db_url

config :el_kube, ElKubeWeb.Endpoint,
  http: [port: port],
  secret_key_base: secret_key_base,
  url: [host: {:system, "APP_HOST"}, port: {:system, "PORT"}]

config :peerage,
  via: Peerage.Via.Dns,
  dns_name: service_name,
  app_name: "el_kube"

The service_name is going to be the internal DNS name of the application. In other words, a node on the cluster will be able to query DNS for the service_name and get a response with a list of IP addresses.
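Under the hood, Peerage.Via.Dns polls DNS and does roughly the following. This is a simplified sketch, and el-kube.default.svc.cluster.local is a hypothetical in-cluster service name:

# Resolve the service name to the A records of every pod behind it
iex> :inet_res.lookup('el-kube.default.svc.cluster.local', :in, :a)
[{10, 1, 0, 4}, {10, 1, 0, 5}]

Each IP is then combined with app_name to build candidate node names like :"el_kube@10.1.0.4", which Peerage hands to Node.connect/1.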

The db_url is the Postgres connection string. The secret_key_base is used by Phoenix to sign cookies, and the port is the port the web endpoint will listen on.
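If you need a value for SECRET_KEY_BASE, Phoenix ships a generator for it. The output below is just an illustrative value; yours will be a different random string:

$ mix phx.gen.secret
OXJTeMLO0t8FG... (a 64-character random string)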

You’ll also notice there’s an APP_HOST environment variable sprinkled into the ElKubeWeb.Endpoint config as well. Phoenix is a bit of an odd-ball because it allowed some dynamic configuration on its own prior to Elixir 1.9. Here we’re just using that built-in {:system, ...} construct to tell Phoenix to pull APP_HOST and PORT from the environment. These two values are used for its internal URL helpers.
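As a quick illustration of what those URL helpers do with the two values, assuming the hypothetical settings APP_HOST=www.example.com and PORT=4000:

iex> ElKubeWeb.Endpoint.url()
"http://www.example.com:4000"

Save this file up, on to the next.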

08: Edit config/prod.exs

Now let’s look at the production config file. As in prior Elixir releases, the environment-specific configs are layered on top of the base config/config.exs, so here we can set some production-specific values that won’t change at runtime. First, we need to remove the url key, which has moved into config/releases.exs, and next we need to add the endpoint config key/value pair server: true

This tells Phoenix to start the webserver endpoint when the server starts up. Why that’s not the default in this file I’m not sure, but let’s make those changes and move on. Your config/prod.exs file should now look like this.

use Mix.Config

# Lots of comments ...

config :el_kube, ElKubeWeb.Endpoint,
  cache_static_manifest: "priv/static/cache_manifest.json",
  server: true

# Do not print debug messages in production
config :logger, level: :info

# Lots more comments ...

Oh! And don’t forget to scroll down to the very bottom of the file and delete the import_config "prod.secret.exs" line. I missed that on several run-throughs, which is why we deleted the file earlier. The build will now complain at us if we forget this step.

09: Edit config/config.exs

This step isn’t completely necessary. Of course you can configure your application however you want in real life. But for demonstration purposes we’re going to drop in some base Ecto config just so we can show that it does indeed get picked up. Open up the config/config.exs file and add the following:

config :el_kube, ElKube.Repo,
  adapter: Ecto.Adapters.Postgres,
  pool_size: 10

Again, all or none of this could be in here. In real life you’d probably want your pool_size to be configured from config/releases.exs so this is purely didactic.
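If you want to convince yourself that the layering works, you can inspect the merged config from a running release later on. A sketch, with illustrative values and key order:

iex> Application.get_env(:el_kube, ElKube.Repo)
[
  adapter: Ecto.Adapters.Postgres,
  pool_size: 10,
  url: "ecto://postgres:postgres@localhost/el_kube_dev"
]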

10: Edit config/dev.exs

One last file change and we’ll be ready to start compiling. Let’s make peerage happy in the development environment. And because we typically don’t run a cluster in development mode (something peerage complains about vociferously), let’s also tell it to be quiet. Add the following to your config/dev.exs file:

config :peerage,
  via: Peerage.Via.List,
  node_list: [:"el_kube@127.0.0.1"],
  log_results: false
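With that in place you can still exercise distribution locally by starting your dev server with a long name that matches the list. A sketch; any name in node_list works:

$ iex --name el_kube@127.0.0.1 -S mix phx.server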

11: Compile

Now let’s switch to our terminal and execute the compilation and smoke tests. First we need to generate the digested static asset files so that the cache_static_manifest setting we left in config/prod.exs can find them.

$ mix phx.digest
...
==> el_kube
Check your digested files at "priv/static"

Next let’s generate the release:

$ MIX_ENV=prod mix release
...
...
To list all commands:
_build/prod/rel/el_kube/bin/el_kube
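Running that binary with no arguments prints the full command listing. Abridged here, and the exact set may vary slightly between Elixir versions:

$ _build/prod/rel/el_kube/bin/el_kube
Usage: el_kube COMMAND [ARGS]

The known commands are:

    start        Starts the system
    start_iex    Starts the system with IEx attached
    daemon       Starts the system as a daemon
    remote       Connects to the running system via a remote shell
    stop         Stops the running system via a remote command
    ...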

If the release built and you can see that listing, you’re making great progress! The last step is to create a database for our application to connect to when it starts up.

$ mix ecto.create
The database for ElKube.Repo has been created

Now we’re ready to run the release and make sure it fires up. This command has to pass all of the environment variables we considered earlier, in addition to one I didn’t talk about: the RELEASE_COOKIE. This one is also covered in the mix release task docs, but I wanted to touch on it here. The RELEASE_COOKIE is a sort of pre-shared key that the Erlang VM uses to authenticate when nodes connect. It’s not strictly necessary for this smoke test, but I wanted to introduce it now because you’ll be seeing it again in Part 3.
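As a quick sketch of what that looks like in practice, once the node is running (we start it below) the cookie is visible from a remote shell:

iex(el_kube@127.0.0.1)1> Node.get_cookie()
:foo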

12: Smoke Test

Here is the command to start the app:

DB_URL=ecto://postgres:postgres@localhost/el_kube_dev \
RELEASE_COOKIE=foo \
SECRET_KEY_BASE=foo \
HOSTNAME=127.0.0.1 \
SERVICE_NAME=localhost.svc \
APP_HOST=localhost \
PORT=4000 \
_build/prod/rel/el_kube/bin/el_kube start

We covered what all of these environment variables do above so I won’t go over that again. What’s new here is that we’re actually executing our application and passing the start command. When you do this you should see the following:

15:20:47.765 [info] Running ElKubeWeb.Endpoint with cowboy 2.6.3 at 0.0.0.0:4000 (http)
15:20:47.765 [info] Access ElKubeWeb.Endpoint at http://localhost:4000

Now you can open your browser to that address or $ curl http://localhost:4000 and you should see a webpage come back. If it does then you’re in business!

Next, let’s confirm our database connectivity is working by launching another terminal and connecting to this running instance. Everything is the same as the previous command except this time instead of passing start at the end we’ll pass the remote argument:

DB_URL=ecto://postgres:postgres@localhost/el_kube_dev \
RELEASE_COOKIE=foo \
SECRET_KEY_BASE=foo \
HOSTNAME=127.0.0.1 \
SERVICE_NAME=localhost.svc \
APP_HOST=localhost \
PORT=4000 \
_build/prod/rel/el_kube/bin/el_kube remote

Here you should be dropped into an elixir prompt:

Erlang/OTP 22 [erts-10.4.4] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [hipe] [dtrace]

Interactive Elixir (1.9.1) - press Ctrl+C to exit (type h() ENTER for help)
iex(el_kube@127.0.0.1)1>

At this point you can issue an Ecto command and you should get back an :ok tuple.

iex(el_kube@127.0.0.1)1> ElKube.Repo.query("select 1 as test")
{:ok,
 %Postgrex.Result{
   columns: ["test"],
   command: :select,
   connection_id: 41993,
   messages: [],
   num_rows: 1,
   rows: [[1]]
 }}

If you made it this far, congratulations! You’re now ready to move on to Part 2 and Dockerize your application.

-g
