Upgrade EKS Self-Managed Node Groups¶
The following steps describe how to upgrade self-managed node groups to a newer Kubernetes version by directly modifying the underlying Auto Scaling group (ASG).
Note
Updates performed in other ways, such as via CloudFormation or by letting the Auto Scaling group scale down the cluster based on its termination policy, may cause data loss, since they are not guaranteed to properly drain the nodes.
What You’ll Need¶
- A configured management environment.
- An existing EKS cluster.
- An existing Rok deployment.
Check Your Environment¶
Before you start upgrading the EKS self-managed node group, ensure that you have enabled Scale-in protection.
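If you want to verify this from your management environment, the following is a minimal sketch that lists the scale-in protection setting for new instances on each Auto Scaling group; instance-level protection for existing instances appears under Instances[].ProtectedFromScaleIn:
root@rok-tools:~/ops/deployments# aws autoscaling describe-auto-scaling-groups \
>     --query 'AutoScalingGroups[].[AutoScalingGroupName,NewInstancesProtectedFromScaleIn]' \
>     --output text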
Procedure¶
Go to your GitOps repository, inside your rok-tools management environment:
root@rok-tools:~# cd ~/ops/deployments
Restore the required context:
root@rok-tools:~/ops/deployments# source <(cat deploy/env.eks-cluster)
root@rok-tools:~/ops/deployments# export EKS_CLUSTER
Ensure that Rok is up and running:
root@rok-tools:~/ops/deployments# kubectl get rokcluster \
>     -n rok rok \
>     -o jsonpath='{.status.health}{"\n"}'
OK
Ensure that the rest of the Pods are running. Verify field STATUS is Running and field READY is n/n for all Pods:
root@rok-tools:~/ops/deployments# kubectl get pods -A
NAMESPACE      NAME                      READY   STATUS    RESTARTS   AGE
auth           dex-7747dff999-xqxp       2/2     Running   0          1h
cert-manager   cert-manager-686bcc964d   1/1     Running   0          1h
...
Scale the Cluster Autoscaler deployment down to zero to avoid conflicting scaling actions:
root@rok-tools:~/ops/deployments# kubectl scale deploy \
>     -n kube-system cluster-autoscaler \
>     --replicas=0
deployment.apps/cluster-autoscaler scaled
List the Auto Scaling groups that correspond to the self-managed node groups of your EKS cluster:
root@rok-tools:~/ops/deployments# aws autoscaling describe-auto-scaling-groups | \
>     jq -r '.AutoScalingGroups[] | select(.Tags[] | .Key == "kubernetes.io/cluster/'${EKS_CLUSTER?}'") | .AutoScalingGroupName'
arrikto-cluster-workers-NodeGroup-1WW76ULXL3VOE
...
Pick an Auto Scaling group and inspect it.
Specify the Auto Scaling group to upgrade:
root@rok-tools:~/ops/deployments# export ASG=<ASG>
Replace <ASG> with the name of the Auto Scaling group running the old Kubernetes version. For example:
root@rok-tools:~/ops/deployments# export ASG=arrikto-cluster-workers-NodeGroup-1WW76ULXL3VOE
Find the CloudFormation stack that corresponds to your self-managed node group:
root@rok-tools:~/ops/deployments# export CF_STACK_NAME=$(aws autoscaling describe-auto-scaling-groups \
>     --auto-scaling-group-names ${ASG?} \
>     --query 'AutoScalingGroups[].Tags[?Key==`aws:cloudformation:stack-name`].Value' \
>     --output text) \
>     && echo ${CF_STACK_NAME?}
arrikto-cluster-workers
Inspect the CloudFormation stack parameters and note down the following configurations, as you are going to use them later:
the instance types
root@rok-tools:~/ops/deployments# aws cloudformation describe-stacks \
>     --stack-name ${CF_STACK_NAME?} \
>     --query 'Stacks[].Parameters[?ParameterKey==`NodeInstanceType`].ParameterValue' \
>     --output text
m5d.4xlarge
the subnets
root@rok-tools:~/ops/deployments# aws cloudformation describe-stacks \
>     --stack-name ${CF_STACK_NAME?} \
>     --query 'Stacks[].Parameters[?ParameterKey==`Subnets`].ParameterValue' \
>     --output text
subnet-0d631fff397d2eff1
Inspect the Auto Scaling group and note down the current size, as you are going to use it later:
root@rok-tools:~/ops/deployments# aws autoscaling describe-auto-scaling-groups \
>     --auto-scaling-group-names ${ASG?} \
>     --query 'AutoScalingGroups[].DesiredCapacity' \
>     --output text
1
Update the Auto Scaling group minSize to allow scaling down to zero:
root@rok-tools:~/ops/deployments# aws autoscaling update-auto-scaling-group \
>     --auto-scaling-group-name ${ASG?} \
>     --min-size 0
Follow the Create EKS Self-managed Node Group guide to create a new self-managed node group with a new name and the same scaling configuration, instance types, and subnets as your existing node group, so that existing workloads can safely fit on the new nodes. Then, come back to this guide and continue with this procedure.
Wait for all new nodes to be added. Choose one of the following options, based on the upgrade you need to make:
If you are upgrading to Kubernetes 1.21:
root@rok-tools:~/ops/deployments# kubectl get nodes
NAME                                             STATUS   ROLES    AGE   VERSION
ip-172-31-32-188.eu-central-1.compute.internal   Ready    <none>   1h    v1.20.11-eks-f17b81
ip-172-31-34-84.eu-central-1.compute.internal    Ready    <none>   1h    v1.20.11-eks-f17b81
ip-172-31-44-254.eu-central-1.compute.internal   Ready    <none>   1m    v1.21.5-eks-bc4871b
ip-172-31-47-215.eu-central-1.compute.internal   Ready    <none>   1m    v1.21.5-eks-bc4871b
If you are upgrading to Kubernetes 1.20:
root@rok-tools:~/ops/deployments# kubectl get nodes
NAME                                             STATUS   ROLES    AGE   VERSION
ip-172-31-32-188.eu-central-1.compute.internal   Ready    <none>   1h    v1.19.13-eks-8c579e
ip-172-31-34-84.eu-central-1.compute.internal    Ready    <none>   1h    v1.19.13-eks-8c579e
ip-172-31-44-254.eu-central-1.compute.internal   Ready    <none>   1m    v1.20.11-eks-f17b81
ip-172-31-47-215.eu-central-1.compute.internal   Ready    <none>   1m    v1.20.11-eks-f17b81
Wait for the Rok cluster to scale out itself:
root@rok-tools:~/ops/deployments# kubectl get rokcluster -n rok rok
NAME   VERSION      HEALTH   TOTAL MEMBERS   READY MEMBERS   PHASE     AGE
rok    release...   OK       4               4       4       Running   1h
Find the old nodes that you should drain. Self-managed node groups do not add explicit labels to Kubernetes nodes, so you have to find the nodes either via their kubelet version or via their provider IDs. Choose one of the following options, based on how you want to find the nodes of the old self-managed node group.
Option 1: Find the nodes via their kubelet version. Retrieve the Kubernetes versions currently running on your nodes:
If you are upgrading to Kubernetes 1.21:
root@rok-tools:~/ops/deployments# kubectl get nodes -o json | \
>     jq -r '.items[].status.nodeInfo.kubeletVersion' | sort -u
v1.20.11-eks-f17b81
v1.21.5-eks-bc4871b
If you are upgrading to Kubernetes 1.20:
root@rok-tools:~/ops/deployments# kubectl get nodes -o json | \
>     jq -r '.items[].status.nodeInfo.kubeletVersion' | sort -u
v1.19.13-eks-8c579e
v1.20.11-eks-f17b81
Specify the old version:
root@rok-tools:~/ops/deployments# K8S_VERSION=<VERSION>
Replace <VERSION> with the old Kubernetes version. For example:
root@rok-tools:~/ops/deployments# K8S_VERSION=v1.20.11-eks-f17b81
root@rok-tools:~/ops/deployments# K8S_VERSION=v1.19.13-eks-8c579e
Find the nodes that run with this version:
root@rok-tools:~/ops/deployments# nodes=$(kubectl get nodes \
>     -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==\"$K8S_VERSION\")].metadata.name}") \
>     && echo ${nodes?}
ip-172-31-32-188.eu-central-1.compute.internal ip-172-31-34-84.eu-central-1.compute.internal
Option 2: Find the nodes via their provider IDs. Retrieve the instances of your Auto Scaling group and construct the corresponding provider IDs:
root@rok-tools:~# ids=$(aws autoscaling describe-auto-scaling-groups \
>     --auto-scaling-group-names $ASG \
>     --query 'AutoScalingGroups[].Instances[].[AvailabilityZone,InstanceId]' \
>     --output text \
>     | xargs -n2 printf "aws:///%s/%s\n")
Find the nodes with these provider IDs:
root@rok-tools:~# nodes=$(for id in $ids; do kubectl get nodes \
>     -o jsonpath='{.items[?(@.spec.providerID=="'$id'")].metadata.name}'; echo; done) \
>     && echo ${nodes?}
ip-172-31-32-188.eu-central-1.compute.internal ip-172-31-34-84.eu-central-1.compute.internal
Cordon old nodes, that is, disable scheduling on them:
root@rok-tools:~/ops/deployments# for node in $nodes; do kubectl cordon $node; done
node/ip-172-31-32-188.eu-central-1.compute.internal cordoned
node/ip-172-31-34-84.eu-central-1.compute.internal cordoned
Drain the old nodes one by one. Repeat steps i-iv for each of the old nodes:
Pick a node from the old node group:
root@rok-tools:~/ops/deployments# export node=<node>
Replace <node> with the name of the node you want to drain. For example:
root@rok-tools:~/ops/deployments# export node=ip-172-31-32-188.eu-central-1.compute.internal
Drain the node:
root@rok-tools:~# kubectl drain --ignore-daemonsets --delete-local-data $node
node/ip-172-31-32-188.eu-central-1.compute.internal already cordoned
evicting pod "rok-redis-0"
evicting pod "ml-pipeline-scheduledworkflow-7bddd546b-4f4j5"
...
Note
This may take a while, since Rok is unpinning all volumes on this node, and evicts the rok-csi-guard Pods last.
Warning
Do not delete rok-csi-guard Pods manually, since this might cause data loss.
Troubleshooting
The command does not complete.
Most likely, the unpinning of a Rok PVC is failing. Inspect the logs of the Rok CSI controller to debug further.
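The following is a minimal sketch for pulling those logs; it assumes the CSI controller runs as the rok-csi-controller StatefulSet in the rok namespace, so adjust the name and namespace to match your deployment:
root@rok-tools:~/ops/deployments# kubectl logs -n rok sts/rok-csi-controller --all-containers --tail=100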
Wait for the drain command to finish successfully.
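To confirm that only DaemonSet-managed Pods remain on the drained node, you can list the Pods still scheduled on it. A sketch, using the node name exported above:
root@rok-tools:~/ops/deployments# kubectl get pods -A --field-selector spec.nodeName=${node?} -o wide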
Ensure that all Pods that got evicted have migrated correctly and are up and running again.
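One way to spot Pods that have not reached the Running phase is to filter on the Pod phase cluster-wide. A sketch; note that Succeeded Pods from completed Jobs also appear here, and rok-csi-guard Pods for drained nodes are expected:
root@rok-tools:~/ops/deployments# kubectl get pods -A --field-selector status.phase!=Running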
Ensure that Rok is up and running:
root@rok-tools:~/ops/deployments# kubectl get rokcluster \
>     -n rok rok \
>     -o jsonpath='{.status.health}{"\n"}'
OK
Ensure that the rest of the Pods are running. Verify field STATUS is Running and field READY is n/n for all Pods:
root@rok-tools:~/ops/deployments# kubectl get pods -A
NAMESPACE      NAME                      READY   STATUS    RESTARTS   AGE
auth           dex-7747dff999-xqxp       2/2     Running   0          1h
cert-manager   cert-manager-686bcc964d   1/1     Running   0          1h
...
Note
rok-csi-guard Pods are expected to be in Pending status.
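To see these guard Pods and their status, a simple filter on the Pod listing is enough (a sketch; rok-csi-guard Pods are named after the node they protect):
root@rok-tools:~/ops/deployments# kubectl get pods -A | grep rok-csi-guard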
Go back to step i, and repeat the steps for the remaining old nodes.
Start the Cluster Autoscaler so that it sees the drained nodes, marks them as unneeded, terminates them, and modifies the desiredCapacity of the old Auto Scaling group accordingly:
root@rok-tools:~# kubectl scale deploy -n kube-system cluster-autoscaler --replicas=1
deployment.apps/cluster-autoscaler scaled
Note
The Cluster Autoscaler will not start deleting instances immediately, since after startup it considers the cluster to be in cool down state. In that state, it will not perform any scale down operations. After the cool down period has passed (10 minutes by default, configurable with the scale-down-delay-after-add argument), it will remove all drained nodes at once.
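If you want to confirm or tune the cool down period, you can check whether the flag is set on the Cluster Autoscaler Deployment. A sketch; no output means the flag is unset and the 10-minute default applies:
root@rok-tools:~/ops/deployments# kubectl get deploy -n kube-system cluster-autoscaler -o yaml | \
>     grep scale-down-delay-after-add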
Ensure that the Autoscaler has scaled the old Auto Scaling group to zero:
root@rok-tools:~/ops/deployments# aws autoscaling describe-auto-scaling-groups \
>     --auto-scaling-group-names ${ASG?} \
>     --query 'AutoScalingGroups[].DesiredCapacity' \
>     --output text
0
Ensure that the Autoscaler has terminated the old instances:
root@rok-tools:~/ops/deployments# aws autoscaling describe-auto-scaling-groups \
>     --auto-scaling-group-names ${ASG?} \
>     --query 'AutoScalingGroups[].Instances[].[InstanceId]' \
>     --output text \
>     | wc -l
0
Delete the old Auto Scaling group:
root@rok-tools:~/ops/deployments# aws autoscaling delete-auto-scaling-group \
>     --auto-scaling-group-name ${ASG?}
Delete the CloudFormation stack for your old self-managed node group:
root@rok-tools:~/ops/deployments# aws cloudformation delete-stack \
>     --stack-name ${CF_STACK_NAME?}
Verify¶
Ensure that all nodes in the node group are ready and run the new Kubernetes version. Check that field STATUS is Ready and field VERSION shows the new Kubernetes version. Choose one of the following options, based on the upgrade you’ve made:
If you upgraded to Kubernetes 1.21:
root@rok-tools:~# kubectl get nodes
NAME                                             STATUS   ROLES    AGE   VERSION
ip-172-31-32-188.eu-central-1.compute.internal   Ready    <none>   1h    v1.21.5-eks-bc4871b
ip-172-31-34-84.eu-central-1.compute.internal    Ready    <none>   1h    v1.21.5-eks-bc4871b
If you upgraded to Kubernetes 1.20:
root@rok-tools:~# kubectl get nodes
NAME                                             STATUS   ROLES    AGE   VERSION
ip-172-31-32-188.eu-central-1.compute.internal   Ready    <none>   1h    v1.20.11-eks-f17b81
ip-172-31-34-84.eu-central-1.compute.internal    Ready    <none>   1h    v1.20.11-eks-f17b81
What’s Next¶
The next step is to configure the Rok Scheduler for the Kubernetes version of your EKS cluster.