 

Difference between API versions v2beta1 and v2beta2 in Horizontal Pod Autoscaler?

The Kubernetes Horizontal Pod Autoscaler walkthrough in https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ explains that we can perform autoscaling on custom metrics. What I didn't understand is when to use the two API versions: v2beta1 and v2beta2. If anybody can explain, I would really appreciate it.

Thanks in advance.

Ajay Maity asked Jan 31 '19

People also ask

What is the Horizontal Pod Autoscaler in Kubernetes?

The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster.
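
For reference, a minimal HPA manifest in the autoscaling/v2beta2 format could look like the following sketch (the Deployment name, replica bounds, and the 50% target are illustrative):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache              # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache            # the workload being scaled
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50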

Which controller in Kubernetes handles autoscaling of pods?

The horizontal pod autoscaling controller, running within the Kubernetes control plane, periodically adjusts the desired scale of its target (for example, a Deployment) to match observed metrics such as average CPU utilization, average memory utilization, or any other custom metric you specify.
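
A simplified worked example of that adjustment, using the documented proportional formula (ignoring tolerances and stabilization windows):

desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )

Example: 4 replicas averaging 80% CPU against a 50% target:
  ceil(4 * 80 / 50) = ceil(6.4) = 7 replicas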

What is horizontal and vertical scaling in Kubernetes?

Horizontal scaling means adding more instances to an existing cluster, for example by adding new nodes to it or by adding new Pods through a higher replica count (which is what the Horizontal Pod Autoscaler does). Vertical scaling means changing the resources (such as CPU or RAM) allocated to each node or Pod.
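
A rough sketch of the difference, with illustrative values (these are fragments of a Deployment spec, not a complete manifest):

# Horizontal scaling: more replicas of the same Pod
spec:
  replicas: 5                   # e.g. scaled out from 3 to 5

# Vertical scaling: same number of Pods, more resources for each one
resources:
  requests:
    cpu: "500m"                 # e.g. raised from 250m
    memory: "512Mi"             # e.g. raised from 256Mi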


1 Answer

The first version, autoscaling/v2beta1, lets you scale your Pods on resource metrics, i.e. the CPU and memory utilization of your application, and on custom metrics collected from inside the cluster.
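
For comparison, in v2beta1 a resource metric puts its target directly on the metric; a minimal sketch with the same 50% CPU target as the v2beta2 snippet further below:

metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50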

The second version, autoscaling/v2beta2, additionally allows autoscaling based on metrics coming from outside of Kubernetes: a new External metric source is added in this API, and each metric now carries its own target specification.

# autoscaling/v2beta2 style: each metric carries its own target block
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

An External metric is identified by a metric name and a label selector, so the values can come from anywhere outside the cluster, such as a Stackdriver or Prometheus monitoring setup; for example, you can scale your application based on the result of a Prometheus query.
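
As a sketch, an External metric in autoscaling/v2beta2 could look like this; the metric name and label selector are hypothetical and depend on what your metrics adapter actually exposes:

metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready        # hypothetical metric exposed by a metrics adapter
        selector:
          matchLabels:
            queue: worker_tasks           # hypothetical label selector
      target:
        type: AverageValue
        averageValue: "30"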

It is generally better to use the v2beta2 API, because it can scale on CPU and memory as well as on custom and external metrics, while the v2beta1 API cannot use metrics from outside the cluster.

The snippet above shows how you specify the target CPU utilization in the v2beta2 API.

Prafull Ladha answered Oct 12 '22