
Horizontal pod autoscaling in Kubernetes

I have a cluster that scales based on the CPU usage of my pods. The documentation states that I should prevent thrashing by not scaling too fast. I want to play around with the autoscaling speed, but I can't seem to find where to apply the following flags:

  • --horizontal-pod-autoscaler-downscale-delay
  • --horizontal-pod-autoscaler-upscale-delay

My goal is to set the cooldown timer lower than 5m or 3m. Does anyone know how this is done, or where I can find documentation on how to configure it? Also, if this has to be configured in the HPA YAML file, does anyone know what definition should be used for this, or where I can find documentation on how to configure the YAML? This is a link to the Kubernetes documentation about scaling cooldowns that I used.
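For reference, this is roughly the kind of HPA definition I'm working with (the names here are placeholders); I don't see any field in it for these delays:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50  # add pods when average CPU rises above 50%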

asked May 22 '18 by Dimitrih

People also ask

What is horizontal pod autoscaling?

The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster.

What is horizontal scaling in Kubernetes?

In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand. Horizontal scaling means that the response to increased load is to deploy more Pods.
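For example, such an autoscaler can also be created imperatively with kubectl (the deployment name here is hypothetical):

kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=10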

How does the Horizontal Pod Autoscaler work with the Cluster Autoscaler?

The Horizontal Pod Autoscaler (HPA) scales the number of pods available in a cluster in response to the present computational needs. You specify the metrics that will determine the number of pods needed, and set the thresholds at which pods should be created or removed.

Does Kubernetes use vertical or horizontal scaling?

Both; this is where Kubernetes autoscaling comes in. Kubernetes provides multiple layers of autoscaling functionality: pod-based scaling with the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler, as well as node-based scaling with the Cluster Autoscaler.


2 Answers

The HPA controller is part of the controller manager, and you'll need to pass the flags to it; see also the docs. It is not something you'd do via kubectl. The controller manager is part of the control plane (master), so how you set the flags depends on how you set up Kubernetes and/or which offering you're using. For example, in GKE the control plane is not accessible; in Minikube you'd ssh into the node; etc.
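For example, on Minikube you can pass controller manager flags at start time (a sketch; the exact --extra-config syntax may vary by Minikube version, and the flag names are the ones from the question):

minikube start \
  --extra-config=controller-manager.horizontal-pod-autoscaler-downscale-delay=2m0s \
  --extra-config=controller-manager.horizontal-pod-autoscaler-upscale-delay=1m0s

On a kubeadm-based setup you would edit the kube-controller-manager static pod manifest on the master instead, as the answer below demonstrates for a Kubespray cluster.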

answered Sep 25 '22 by Michael Hausenblas

Following the discussion here, this is my experience; it's working for me, so maybe it can help someone.

SSH to the master node and edit /etc/kubernetes/manifests/kube-controller-manager.manifest as below:

command:
- /hyperkube
- controller-manager
- --kubeconfig=/etc/kubernetes/kube-controller-manager-kubeconfig.yaml
- --leader-elect=true
- --service-account-private-key-file=/etc/kubernetes/ssl/service-account-key.pem
- --root-ca-file=/etc/kubernetes/ssl/ca.pem
- --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem
- --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem
- --enable-hostpath-provisioner=false
- --node-monitor-grace-period=40s
- --node-monitor-period=5s
- --pod-eviction-timeout=5m0s
- --profiling=false
- --terminated-pod-gc-threshold=12500
# the two flags below are the ones added for the HPA cooldowns
- --horizontal-pod-autoscaler-downscale-delay=2m0s
- --horizontal-pod-autoscaler-upscale-delay=2m0s
- --v=2
- --use-service-account-credentials=true
- --feature-gates=Initializers=False,PersistentLocalVolumes=False,VolumeScheduling=False,MountPropagation=False

The two horizontal-pod-autoscaler flags marked above are the parameters I added. The change is picked up without restarting the kubelet service.

If you don't see the values updated, you can restart the kubelet: systemctl restart kubelet
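To double-check that the controller manager is now running with the new flags, you can inspect the running process on the master node (a quick sketch; output formatting may differ):

# print only the horizontal-pod-autoscaler flags of the running controller manager
ps -ef | grep [c]ontroller-manager | tr ' ' '\n' | grep horizontal-pod-autoscaler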

Note: I created this HA cluster using Kubespray.

Hope this helps someone.

Thank you!

answered Sep 25 '22 by chintan thakar