GKE managed SSL certificates — in action

Igor Domrev · Published in ITNEXT · Dec 17, 2020 · 4 min read


If you are building an API gateway, you need TLS termination (HTTPS requests), and you probably don't want to manage SSL certificates (public keys signed by someone the whole internet trusts) yourself. If you use GKE, a simple solution can be built from a k8s Ingress, a CRD named ManagedCertificate, and a couple of annotations. All you need to do is follow the documentation. Which I did, and stumbled upon so many issues that it became a real pain. Google did a great job implementing the feature, but a really shitty one documenting it. In this post, I'll attempt to walk through the setup process.

tl;dr: here's a Helm chart with everything this post describes. (It can be used with Argo CD to visualize the entities in play.)

argocd applying the helm chart on a cluster

The static IP

The simple part: reserve an IP in GCP and assign it a DNS A record.

gcloud compute addresses create reserved-ip-name --global
gcloud compute addresses describe reserved-ip-name --global

gcloud dns record-sets transaction start --zone=<ZONE_NAME>
gcloud dns record-sets transaction add 79.179.75.123 \
  --name=banana.com. --ttl=300 --type=A --zone=<ZONE_NAME>
gcloud dns record-sets transaction execute --zone=<ZONE_NAME>

ZONE_NAME is the GCP DNS hosted-zone name. The only thing worth mentioning here is that the hosted zone must be served by a public DNS server, i.e., it should actually be resolvable from the internet.

The ManagedCertificate resource

If you are using GKE, Google installed a CRD called ManagedCertificate in your cluster without you noticing. The resource itself is really simple.

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: banana-cert
spec:
  domains:
    - banana.com

When you apply it to your cluster, it should immediately produce the following event:

kubectl apply -f managedCert.yaml
kubectl describe ManagedCertificate banana-cert

API Version:  networking.gke.io/v1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2020-12-16T10:13:12Z
  Generation:          2
  Resource Version:    57704800
  Self Link:           /apis/networking.gke.io/v1/namespaces/******/managedcertificates/adika-storefront-prod
  UID:                 *****
Spec:
  Domains:
    banana.com
Status:
  Certificate Name:    mcrt-*******************
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  banana.com
    Status:  Provisioning
Events:
  Type    Reason  Age  From                            Message
  ----    ------  ---  ----                            -------
  Normal  Create  32s  managed-certificate-controller  Create SslCertificate mcrt-some-uuid-here

Notice Status: Provisioning next to the "Create" event. It means that managed-certificate-controller, the k8s controller that manages ManagedCertificate resources (link to repo), has started working. If you don't see anything in the Events section of the kubectl describe output, something is wrong: either your cluster version is not up to date, or you are out of luck today and need to somehow wake this controller up, for example by upgrading your cluster version (what I had to do).
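While you wait for the status to change, you can poll it in a loop. Here's a rough sketch: cert_ready is a hypothetical helper, not part of GKE, and it only knows the two status strings shown in this post (Provisioning and Active); everything else is treated as not ready.

```shell
# Hypothetical helper: maps a ManagedCertificate certificateStatus
# string to yes/no. Only "Active" counts as ready.
cert_ready() {
  case "$1" in
    Active) echo yes ;;
    *)      echo no ;;
  esac
}

# Usage against a live cluster (resource name from the example above):
#   cert_ready "$(kubectl get managedcertificate banana-cert \
#     -o jsonpath='{.status.certificateStatus}')"
cert_ready Provisioning   # no
cert_ready Active         # yes
```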

Immediately after applying the k8s resource, a managed-cert entity should be created in GCP. You can get it by running

gcloud beta compute ssl-certificates list

This should print the certificate name mcrt-some-uuid-here that was listed in your kubectl describe output. Additionally, the domain must be a valid DNS name: no fancy characters besides [a-zA-Z], [0-9], "." and "-". If this didn't work, and no GCP load-balancer certificate was created after applying the ManagedCertificate, don't proceed.
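The allowed-character rule can be sketched as a tiny shell check (is_valid_domain is a hypothetical helper for illustration, not something GKE provides):

```shell
# Hypothetical helper: rejects domains containing anything outside
# letters, digits, dots, and hyphens (the charset mentioned above).
is_valid_domain() {
  case "$1" in
    *[!a-zA-Z0-9.-]*) echo invalid ;;
    *)                echo valid ;;
  esac
}

is_valid_domain "banana.com"   # valid
is_valid_domain "banana_com"   # invalid: underscore is not allowed
```

Note this only checks the charset; it won't catch other problems like empty labels or a trailing hyphen.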

The ingress

The next thing you need is a k8s Ingress, linked to the ManagedCertificate with an annotation, and to a NodePort service (don't try other service types...) that listens on port 80 and answers a GET request on the "/" path with a 200. (You wish they would write that in the docs...)

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.global-static-ip-name: reserved-ip-name
    networking.gke.io/managed-certificates: banana-cert
    # a bonus: disables http on the load balancer
    kubernetes.io/ingress.allow-http: "false"
spec:
  rules:
    - host: banana.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: banana-service
              servicePort: 80

Now, there are a couple of things going on here.

The networking.gke.io/managed-certificates annotation is the thing that links the managed cert to this ingress. Only after the ingress is created and connected to healthy backends can the certificate become active (remember Status: Provisioning?).

Additionally, the DNS name the managed cert was configured with should point to the kubernetes.io/ingress.global-static-ip-name. In our example, dig banana.com should resolve to the reserved static IP you created beforehand.

And finally, the service the ingress points to should be a NodePort service listening on port 80. A GCP load balancer will be created and will health-check this service by calling it on the "/" path, so make sure it returns 200.
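A minimal sketch of such a backend might look like this. The banana-service and banana-app names are assumptions matching the ingress example; any container that answers 200 on "/" will do (nginx does out of the box):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: banana-service   # must match serviceName in the ingress
spec:
  type: NodePort         # the only service type that works here
  selector:
    app: banana-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: banana-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: banana-app
  template:
    metadata:
      labels:
        app: banana-app
    spec:
      containers:
        - name: web
          image: nginx:1.19   # answers 200 on "/" by default
          ports:
            - containerPort: 80
```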

If everything worked fine, the ingress you created should receive two additional annotations by GKE magic.

ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-some-uuid-here
ingress.kubernetes.io/backends: '{"k8sbe-":"HEALTHY","k8sbe":"HEALTHY"}'

The first annotation links the ingress to the GCP load-balancer certificate, and the second describes the healthy backends connected to it.

If you survived this far, congrats! You have a GCP load balancer, connected to all the nodes in your k8s cluster, that does SSL termination with automatic certificate provisioning and rotation.

an ssl termination load balancer

Cheers!
