Amazon EKS Upgrade Journey From 1.18 to 1.19

Marcin Cuber
Feb 17, 2021

Process and considerations while upgrading EKS control-plane to version 1.19

Overview

AWS recently released support for Amazon Elastic Kubernetes Service (EKS) 1.19. This release introduces some new features and not too many deprecations. In this post I will go through the services that are a must to check, and upgrade if necessary, before even thinking of upgrading EKS. I have to say that these EKS upgrades are becoming nice and smooth, which is amazing.

If you are looking at

  • upgrading EKS from 1.15 to 1.16, check out this story
  • upgrading EKS from 1.16 to 1.17, check out this story
  • upgrading EKS from 1.17 to 1.18, check out this story

Kubernetes 1.19 features

Ingress 1.19

Until Kubernetes 1.19, the Ingress API had been in beta since version 1.1, which is rather long. Ingress handles external access to services in a cluster, exposing HTTP and HTTPS routes. It may also manage load balancing, terminate SSL/TLS, and provide name-based virtual hosting.

With this upgrade to EKS 1.19, Ingress is GA and has been added to the networking v1 APIs. As part of this milestone, there are some key differences in v1 Ingress objects, including schema and validation changes. For instance, the pathType field no longer has a default value and must be specified.
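As an illustration of the v1 changes, here is a minimal Ingress manifest with pathType stated explicitly (the resource, host, and service names are hypothetical):

```yaml
# Minimal networking.k8s.io/v1 Ingress; note the now-mandatory pathType field
# and the nested service backend structure introduced in v1.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix   # must be specified; no default value in v1
            backend:
              service:
                name: example-service
                port:
                  number: 80
```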

TLS 1.3 Support

Kubernetes 1.19 finally supports TLS 1.3 and includes support for the new TLS 1.3 cipher suites that can be used by Kubernetes components.

Node Debugging

Node debugging finally landed and is available as an alpha feature. Running the `kubectl alpha debug` command will create and run a new pod in the host OS namespaces, which can be used to troubleshoot nodes. This allows a user to inspect a running pod without restarting it and without having to enter the container itself.

Note that EKS doesn’t support alpha features; however, definitely check the AWS EKS documentation, as it may have landed there. I personally haven’t checked :)

Major Updates

  • Instrumentation updates include structured logging and a redesigned Event API.
  • New in Kubernetes networking are Ingress v1, the EndpointSlice API, AppProtocol in Services and Endpoints, and SCTP support for Services, Pods, Endpoints, and NetworkPolicy.
  • For nodes, 1.19 allows users to set a pod’s hostname to its fully qualified domain name, adds the DisableAcceleratorUsageMetrics feature, the Node Topology Manager, seccomp, and the ability to build the kubelet without Docker.

Deprecations

  • Hyperkube, the all-in-one binary for Kubernetes components, is now deprecated and will no longer be built by the Kubernetes project going forward.
  • Several older beta API versions are deprecated in 1.19 and will be removed in version 1.22. Expect a follow-on update, since this means 1.22 will likely end up being a breaking release for some end users.

Shortened notes on added features:

  • The API server will include a warning header when you use deprecated APIs (beta)
  • “.status.conditions” is standardised across objects (stable)
  • Standardise on the `node-role.kubernetes.io` label (beta)
  • Removed the beta APIs from the conformance tests (beta)
  • Support CRI-ContainerD on Windows (beta)
  • The process to obtain the kubelet certificate and rotate it as its expiration date approaches is now stable
  • NodeRestriction admission plugin prevents Kubelets from self-setting labels within core namespaces (stable)
  • The Certificates API includes a Registration Authority to provision certificates for non-core uses (stable)
  • A standard structure for Kubernetes log messages (alpha)
  • Redesigned Event API (stable)
  • Ingress graduates to v1 (stable)

Dashboard v2

This is something I noticed while reading the Kubernetes 1.19 release notes. SIG UI has released v2 of the Kubernetes Dashboard add-on. You can find the most recent release in the kubernetes/dashboard repository. Kubernetes Dashboard now includes CRD support, new translations, and an updated version of Angular.

Upgrade your EKS with terraform

This time, the upgrade of the control plane took around 46 minutes and didn’t cause any issues. I noticed that the control plane wasn’t available immediately after the upgrade, so worker nodes took around 2 minutes to join the upgraded EKS cluster.

aws_eks_cluster.cluster[0]: Modifications complete after 46m29s [id=eks-test-eu]

I personally use Terraform to deploy and upgrade my EKS clusters. Here is an example of the EKS cluster resource.

resource "aws_eks_cluster" "cluster" {
  enabled_cluster_log_types = ["audit"]
  name                      = local.name_prefix
  role_arn                  = aws_iam_role.cluster.arn
  version                   = "1.19"

  vpc_config {
    subnet_ids              = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
    security_group_ids      = []
    endpoint_private_access = "true"
    endpoint_public_access  = "true"
  }

  encryption_config {
    resources = ["secrets"]
    provider {
      key_arn = module.kms-eks.key_arn
    }
  }

  tags = var.tags
}

The template I use for creating EKS clusters with Terraform can be found in my GitHub repository at https://github.com/marcincuber/eks/tree/master/terraform-aws

After upgrading EKS control-plane

Remember to upgrade core deployments and daemon sets that are recommended for EKS 1.19.

The above is just a recommendation from AWS. You should also look at upgrading all your other components to match Kubernetes 1.19. These could include:

  1. calico-node
  2. cluster-autoscaler
  3. kube-state-metrics
  4. calico-typha and calico-typha-horizontal-autoscaler

CoreDNS

AWS decided to change the naming convention yet again. You now need to set the image in your yaml to the following:

602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/coredns:v1.8.0-eksbuild.1

The image tags now include the additional suffix `eksbuild.1`.
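As a sketch, the relevant fragment of the CoreDNS Deployment in the kube-system namespace would look like this (the container name `coredns` and the eu-west-1 registry are assumptions based on the image above; adjust the registry for your region):

```yaml
# Fragment of the kube-system/coredns Deployment spec; only the image changes.
spec:
  template:
    spec:
      containers:
        - name: coredns
          image: 602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/coredns:v1.8.0-eksbuild.1
```

Alternatively, you could update it in place with something like `kubectl set image deployment.apps/coredns -n kube-system coredns=602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/coredns:v1.8.0-eksbuild.1`.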

Kube-Proxy permissions issue

With the new version of kube-proxy, you will likely see the following error and similar ones:

User "system:serviceaccount:kube-system:kube-proxy" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope

The fix is to update the cluster role attached to the kube-proxy service account. It should look as follows:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:node-proxier
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
      - update
  - apiGroups:
      - "discovery.k8s.io"
    resources:
      - endpointslices
    verbs:
      - watch
      - list
      - get
---

Summary

I have to say that this was a nice, pleasant and fast upgrade. Yet again, no significant issues.

If you are interested in the entire terraform setup for EKS, you can find it on my GitHub -> https://github.com/marcincuber/eks/tree/master/terraform-aws

I hope this article nicely aggregates all the important information around upgrading EKS to version 1.19 and helps people speed up their task.

Long story short, you hate and/or you love Kubernetes but you still use it ;).

Enjoy Kubernetes!!!

Sponsor Me

As with any other story I write on Medium, I performed the tasks documented here. This is my own research and these are the issues I encountered.

Thanks for reading everybody. Marcin Cuber


Marcin Cuber

Technical Lead/Principal DevOps Engineer and AWS Community Builder