How-to: Kubernetes Application Deployment with DNS management

Jaroslav Pantsjoha
Published in ITNEXT · Sep 27, 2019 · 10 min read


When it comes to GitOps efforts, there are many caveats and varied snags to watch out for, and one of them is the DNS toil. I have long been procrastinating over getting a running demo of External-DNS (https://github.com/kubernetes-incubator/external-dns) going, and alas, it is here now.

And it’s so dang straightforward.

External-DNS undertakes all that management, mapping an FQDN to a service or an ingress. Note that DNS management for a Kubernetes Service requires a public IP address, provisioned with the LoadBalancer type. This simplifies DNS management: A records are added and removed automatically as your K8s services are deployed and removed. You will probably not want to use the Service `ExternalIP` DNS mapping everywhere, as it incurs IP provisioning costs PER such service. So I would advise reusing the ingress with host-path routing, as you find appropriate.

For most infrastructure environment designs, it is largely the Development and perhaps Staging environments that could benefit from a version-controlled, automated infrastructure-as-code provisioning process.

Let’s assess the problem we’re solving: addressing the dynamic DNS mapping configuration requirement for our application. This may be required by, say, the Dev team when deploying and testing new application features. It tastes even better within a #GitOps environment, where your Git repo is the single "source of truth" which gets automatically provisioned. This, coupled with Weaveworks Flux (https://github.com/fluxcd/flux), which auto-applies your Kubernetes YAML manifests, offers a significant degree of continuous delivery (CD) in the release process.

I hope to “Explain it, like to a 10-year-old”, with a reasonably straightforward demo. Err…

wowzers, you’d think, right? Don’t worry, it’s not all that bad actually :)

I intend for the hands-on among you, readers, to be able to duplicate this success, provided you have your own DNS zone mapped and configured in the Google Cloud DNS service. If so, by all means copy-paste away to your heart’s content, and like and share this along for virtual beer kudos.

Your System Requirements

This may be obvious, but you will need git, google-cloud-sdk and kubectl installed on whichever system you are running this from.
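A quick sanity check, assuming all three tools are on your PATH (exact version output will vary):

git --version
gcloud version
kubectl version --client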

Having now had a go at setting this up: it is moderately straightforward, with some expected K8s and GCP delays in service provisioning. Bear that in mind when copy-paste testing yourself.

Just show some of that YAML Magic!

Pre-Flight Planning & Sanity Checks

I expect you to have a DNS name managed by Google Cloud DNS, and the Cloud DNS API enabled in the project in which you are replicating this run (a sample command to enable it follows the checklist below). We will be setting up the DNS zone management first, and then continue with the fresh cluster setup. Follow along for details.

  • My test domain zone is jpworks.squadzero.io
  • Ensure the correct NS servers are mapped to the Google Cloud DNS zone for jpworks.squadzero.io.
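If the Cloud DNS API is not yet enabled in your project, it can be switched on with gcloud. A minimal sketch, assuming your project ID is in the $PROJECT variable set in the next section:

gcloud services enable dns.googleapis.com --project "$PROJECT"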

The Process

This is a working demonstration, but it is based on my GCP project name and my domain zone. Your setup will vary, so be sure to update the global vars as well as the external-DNS deployment argument list.

Setting up GLOBAL VAR

## This global will be used throughout the copy-paste scripts
## CHANGE THIS as required
PROJECT=jaroslav-pantsjoha-contino
CLUSTER_NAME=jpworks-cluster

DNS Zone

If you have not set this up manually yet, you can programmatically create the zone we want external-DNS to manage. This is an obvious benefit when multiple cluster environments each manage their own separate zones.

gcloud dns managed-zones create "jpworks-demo-squadzero-io"  \
--dns-name "jpworks.squadzero.io." \
--description "Automatically managed zone by kubernetes.io/external-dns"
Created [https://dns.googleapis.com/dns/v1/projects/jaroslav-pantsjoha-contino/managedZones/jpworks-demo-squadzero-io].
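To see which name servers Google assigned to the new zone (you will need these at your registrar in the next step), something along these lines should do:

gcloud dns managed-zones describe "jpworks-demo-squadzero-io" --format "value(nameServers)"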

DNS Registrar update

Update the NS records with your registrar, having dropped the TTL earlier if possible, or just wait the usual 24 hours until the record update propagates.

Check again to see them correctly propagated and reflected in a dig query.

dig NS jpworks.squadzero.io
...
;; ANSWER SECTION:
jpworks.squadzero.io. 21600 IN NS ns-cloud-c1.googledomains.com.
jpworks.squadzero.io. 21600 IN NS ns-cloud-c2.googledomains.com.
jpworks.squadzero.io. 21600 IN NS ns-cloud-c3.googledomains.com.
jpworks.squadzero.io. 21600 IN NS ns-cloud-c4.googledomains.com.
...

DNS — Additional Zone & Verification

The default view should contain just two records, the NS and SOA:

jp$ gcloud dns record-sets list --zone "jpworks-demo-squadzero-io" 
NAME TYPE TTL DATA
jpworks.squadzero.io. NS 21600 ns-cloud-c1.googledomains.com.,ns-cloud-c2.googledomains.com.,ns-cloud-c3.googledomains.com.,ns-cloud-c4.googledomains.com.
jpworks.squadzero.io. SOA 21600 ns-cloud-c1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300

Subzone Side Note: if you do have a sub-zone within the main zone, you need to tell the root zone where to find these records as well.

Tell the parent zone where to find the DNS records for this zone by adding the corresponding NS records there. Assuming the new zone is demo.jpworks.squadzero.io, the domain is jpworks.squadzero.io, and it is also hosted at Google, we would do the following:

$ gcloud dns record-sets transaction start --zone "jpworks-demo-squadzero-io"
$ gcloud dns record-sets transaction add ns-cloud-e{1..4}.googledomains.com. \
--name "demo.jpworks.squadzero.io." --ttl 300 --type NS --zone "jpworks-demo-squadzero-io"
$ gcloud dns record-sets transaction execute --zone "jpworks-demo-squadzero-io"

Demo Cluster Setup

Creating a default demo cluster. Pay attention to the permission scopes your existing cluster and node pool come with; this setup requires Cloud DNS read/write. If you don’t have that, you will be forced to create a new node pool. Good luck.

For the simplicity of the demo I allow all permission scopes, but you should cut permissions down to least-privilege requirements, tuned to your needs.
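If you would rather not hand out the full cloud-platform scope, the scope external-dns actually needs here is the Cloud DNS read/write one. A hedged sketch of a narrower --scopes flag to swap into the cluster-create command below; the extra scopes listed are common GKE defaults you may still want, so adjust to your needs:

--scopes "https://www.googleapis.com/auth/ndev.clouddns.readwrite,https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring"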

A basic, default-spec two-node cluster with preemptible nodes:

gcloud beta container --project "$PROJECT" clusters create "$CLUSTER_NAME" \
--zone "us-central1-a" --no-enable-basic-auth \
--cluster-version "1.14.6-gke.2" --machine-type "n1-standard-2" \
--image-type "COS" --disk-type "pd-ssd" --disk-size "50" \
--metadata disable-legacy-endpoints=true \
--scopes "https://www.googleapis.com/auth/cloud-platform" \
--preemptible --num-nodes "2" \
--enable-cloud-logging \
--enable-stackdriver-kubernetes --enable-ip-alias \
--network "projects/$PROJECT/global/networks/default" \
--subnetwork "projects/$PROJECT/regions/us-central1/subnetworks/default" \
--default-max-pods-per-node "110" --addons HorizontalPodAutoscaling,HttpLoadBalancing \
--enable-autoupgrade \
--enable-autorepair

Example Created Cluster

Creating cluster jpworks-cluster in us-central1-a... Cluster is being deployed...⠼
kubeconfig entry generated for jpworks-cluster.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
jpworks-cluster us-central1-a 1.13.7-gke.8 34.70.157.163 n1-standard-2 1.13.7-gke.8 2 RUNNING

Connect to Cluster

Configure the local environment to use the correct project and obtain the cluster credentials to log in with.

gcloud config set project $PROJECT
gcloud container clusters get-credentials $CLUSTER_NAME --zone us-central1-a --project $PROJECT
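A quick check that kubectl now points at the fresh cluster:

kubectl config current-context
kubectl get nodes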

Ready to Deploy Application

Getting Started: The full demo YAML below should be all you need.

Create a folder and save all of these YAML manifests into it. Then simply kubectl apply -f .

Now, the components:

Ingress

The important bit: this is where the ingress host record is picked up by the external-DNS app, which will then attempt to map an A record to the IP provisioned and assigned to this ingress (on GCP).

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx.jpworks.squadzero.io
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80

Nginx Demo App Deployment + service

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx-app
  name: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-app
  template:
    metadata:
      labels:
        run: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.7-perl
        lifecycle:
          postStart:
            exec:
              command:
              - "sh"
              - "-c"
              - >
                mkdir -p /usr/share/nginx/html/ && echo '<h1>hello world</h1><p>flux-kubernetes-demos/nginx</p>' > /usr/share/nginx/html/index.html;
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx-app
  name: nginx-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    external-dns.alpha.kubernetes.io/hostname: 'nginx-svc.jpworks.squadzero.io'
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    run: nginx-app

The service component above is that other example: it requires the annotation, but gives you the same DNS management for the service itself, for your consideration. I purposely kept the service as `type=LoadBalancer` but with the internal load balancer subtype, so there is no external IP cost.

External-DNS deployment + rbac + service account

The final, crucial component to this demo.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=jpworks.squadzero.io # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
        - --provider=google
        - --google-project=jaroslav-pantsjoha-contino # Use this to specify a project different from the one external-dns is running inside
        # - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --registry=txt
        - --txt-owner-id=my-jpworks-demo-domain
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---

Once again, the noteworthy arguments of the external-DNS deployment, in the snippet below.

YOUR TODO: You will need to specify your own domain zone and your project-name at the very least.

args:
- --source=service # explicitly select your sources
- --source=ingress
- --domain-filter=jpworks.squadzero.io # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
- --provider=google # can be aws, azure and many others
- --google-project=jaroslav-pantsjoha-contino # Use this to specify a project different from the one external-dns is running inside
- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
- --registry=txt
- --txt-owner-id=my-jpworks-demo-domain

Let’s do this!

jp$ kubectl apply -f .

Better yet, if you adopt a #GitOps CI/CD release-deploy methodology, you could have such a release applied automatically by the likes of Weaveworks Flux.

Get in touch with us if you want to learn more!

Back to the demo. Let’s see the result of the aforementioned ‘Apply All’ on the cluster.

jp$ kubectl apply -f .
configmap/nginx-configuration configured
deployment.extensions/nginx-app configured
service/nginx-service configured
deployment.extensions/external-dns configured
serviceaccount/external-dns configured
clusterrole.rbac.authorization.k8s.io/external-dns configured
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer configured

What now

It takes about 3–5 minutes for the full successful rollout of the external-dns demo, primarily because the ingress IP setup on the GCP control plane takes a bit of time. Once the Services and Ingress are fully provisioned, you can view the demo at http://nginx.jpworks.squadzero.io.

In the meantime, you can monitor the progress of the deployment with:

kubectl get po -w

Then the services and ingress:

kubectl get svc,ing

You should see an IP address in the EXTERNAL-IP column for the service, as well as an ADDRESS on the ingress.

Example:

kubectl get svc,ing
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 28m
service/memcached ClusterIP None <none> 11211/TCP 6m49s
service/nginx-service LoadBalancer 10.0.1.243 10.128.0.16 80:30903/TCP 26m
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/nginx-ingress-default nginx.jpworks.squadzero.io 34.96.92.237 80 26m

Let's confirm that external-DNS is working as expected too.
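One way to do that is to tail the external-dns pod logs; a minimal example, using the deployment name from the manifest above:

kubectl logs -f deployment/external-dns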

time="2019-09-27T10:06:40Z" level=info msg="Change zone: jpworks-demo-squadzero-io"time="2019-09-27T10:06:40Z" level=info msg="Add records: nginx.jpworks.squadzero.io. A [34.96.92.237] 300"time="2019-09-27T10:06:40Z" level=info msg="Add records: nginx.jpworks.squadzero.io. TXT [\"heritage=external-dns,external-dns/owner=my-jpworks-demo-domain,external-dns/resource=ingress/default/nginx-ingress-default\"] 300"time="2019-09-27T10:07:40Z" level=info msg="Change zone: jpworks-demo-squadzero-io"time="2019-09-27T10:07:40Z" level=info msg="Add records: nginx-svc.jpworks.squadzero.io. A [10.128.0.16] 300"time="2019-09-27T10:07:40Z" level=info msg="Add records: nginx-svc.jpworks.squadzero.io. TXT [\"heritage=external-dns,external-dns/owner=my-jpworks-demo-domain,external-dns/resource=service/default/nginx-service\"] 300"

Let’s also confirm the domain zone is getting all the relevant updates, as external-dns claims.

gcloud dns record-sets list     --zone "jpworks-demo-squadzero-io"
NAME TYPE TTL DATA
jpworks.squadzero.io. NS 21600 ns-cloud-c1.googledomains.com.,ns-cloud-c2.googledomains.com.,ns-cloud-c3.googledomains.com.,ns-cloud-c4.googledomains.com.
jpworks.squadzero.io. SOA 21600 ns-cloud-c1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300
nginx.jpworks.squadzero.io. A 300 34.96.92.237
nginx.jpworks.squadzero.io. TXT 300 "heritage=external-dns,external-dns/owner=my-jpworks-demo-domain,external-dns/resource=ingress/default/nginx-ingress-default"
nginx-svc.jpworks.squadzero.io. A 300 10.128.0.16
nginx-svc.jpworks.squadzero.io. TXT 300 "heritage=external-dns,external-dns/owner=my-jpworks-demo-domain,external-dns/resource=service/default/nginx-service"

Looks good, but is it accessible yet?

Unfortunately, and somewhat expectedly, it does take a while for the ingress to be set up and fully provisioned that very first time. In the background a lot of processes are kicked off: IP request and reservation, backend mapping, IP forwarding rules and so on. Given that this method provisions production-grade, scalable architecture, the delay is really not that bad.

To be clear and fair about the ingress provisioning delay: if you did want a "mere mortal" service external IP mapped to an A record, that would be available and accessible almost immediately.

External traffic would then hit your Kubernetes service directly, load-balancing round-robin across however many pods the deployment is configured with. That is it: no advanced features or logic; no traffic splitting, WAF, or SSL offloading.

The A record itself takes only a minute or so to be added.
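You can watch for it with a simple dig query, which in this demo should eventually return the ingress IP (34.96.92.237):

dig +short A nginx.jpworks.squadzero.io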

Once the ingress is fully provisioned and ready, this nginx application will be publicly accessible at http://nginx.jpworks.squadzero.io

curl nginx.jpworks.squadzero.io
<h1>hello world</h1><p>flux-kubernetes-demos/nginx</p>
Yey, it works like magic and Larry is overjoyed! Happy Larry Day everyone!

And finally, the nginx service A record is also created, but it points at the internal load balancer IP, so it is only reachable from inside the VPC. Validate the service-based DNS mapping via dig to confirm.

jp$ dig A nginx-svc.jpworks.squadzero.io
...
;; ANSWER SECTION:
;nginx-svc.jpworks.squadzero.io. 111 IN A 10.128.0.16

Hope you enjoyed the demo

Give it a like, a share and some good vibes if you found it useful.

Until Next time,

Jaroslav Pantsjoha

Optional: Clean Up

Nuking the cluster. Be careful not to nuke anything else by accident. “Don’t do anything I would do.”

jp$ gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
jpworks-cluster us-central1-a 1.13.7-gke.8 34.70.157.163 n1-standard-2 1.13.7-gke.8 2 RUNNING
jp$ gcloud container clusters delete jpworks-cluster --zone us-central1-a
The following clusters will be deleted.
- [jpworks-cluster] in [us-central1-a]
Do you want to continue (Y/n)? y
Deleting cluster jpworks-cluster...⠏
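Optionally, the demo DNS records and managed zone can be removed as well. A hedged sketch: Cloud DNS will only delete a zone once all records other than the NS and SOA are gone, so clear out the A/TXT records created by external-dns first.

gcloud dns record-sets list --zone "jpworks-demo-squadzero-io"
# remove the nginx A/TXT records (e.g. via record-sets transactions), then:
gcloud dns managed-zones delete "jpworks-demo-squadzero-io"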

PS There is a host of exciting Kubernetes projects taking place at Contino. If you are looking to work on the latest-greatest infrastructure stack, or are looking for a challenge, get in touch! We’re hiring!

We’re looking for bright minds at every level. At Contino, we pride ourselves on delivering best-practice cloud transformation projects for medium-sized businesses through to large enterprises.

JP

By the way, 👏🏻 *clap* 👏🏻 your hands (up to 50x) if you enjoyed this post. It encourages me to keep writing and helps other people find it :)
