Kubernetes multiple zones, autoscaling, and persistent volume

August 25, 2020

After a while using Kubernetes on AWS with persistent volumes backed by EBS, I ran into a problem with evicted pods once they were rescheduled to another node. The issue is that EBS volumes are tied to a single availability zone; that makes sense, since the volumes are attached over the network and are kept in one datacenter to keep latency low.
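One common mitigation, sketched below, is a topology-aware StorageClass with `volumeBindingMode: WaitForFirstConsumer`, which delays volume creation until the pod is scheduled so the EBS volume is provisioned in the pod's zone (the name `ebs-wait` and the `gp2` type are illustrative choices, not from the article):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-wait          # illustrative name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2               # illustrative EBS volume type
# Delay binding until a pod using the PVC is scheduled, so the
# volume is created in the same availability zone as that pod.
volumeBindingMode: WaitForFirstConsumer
```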

Read more

Kops: Kubernetes cluster with Autoscaling on AWS

November 13, 2019

The autoscaling groups created by Kops only keep a fixed number of nodes up, without any scaling policy. The idea of the following article is to create a dynamic Kubernetes cluster that measures the load of the nodes and scales up or down. To do that we need to add some tags to the worker nodes, give the worker nodes permission to reach the AWS autoscaling service, and deploy the Kubernetes autoscaler tool.
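The worker-node tags can be set through the Kops InstanceGroup spec via `cloudLabels`, which Kops applies as tags on the underlying autoscaling group; the cluster autoscaler's auto-discovery mode looks for these tags. A minimal sketch, assuming a cluster named `k8s.example.com`:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  # Tags applied to the autoscaling group; the cluster autoscaler's
  # auto-discovery mode uses them to find which ASGs it may scale.
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: "true"
    k8s.io/cluster-autoscaler/k8s.example.com: "owned"  # cluster name is an assumption
```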

The cluster will scale when:

  • There are pods that failed to run in the cluster due to insufficient resources.
  • There are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.

The environment:

The "Kubernetes autoscaler" version needs to match your Kubernetes cluster version.
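In practice that means pinning the autoscaler image tag to the cluster's minor version. A sketch of the relevant container spec (the exact patch version and registry path are assumptions; check the autoscaler releases for your cluster version):

```yaml
# Excerpt of a cluster-autoscaler Deployment's container spec.
containers:
- name: cluster-autoscaler
  # Image minor version should match the cluster, e.g. v1.16.x
  # for a Kubernetes 1.16 cluster (tag shown is illustrative).
  image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.16.7
```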

Read more