Prevent killing some pods when scaling down possible?

I need to scale a set of pods that run queue-based workers. Jobs for workers can run for a long time (hours) and should not get interrupted. The number of pods is based on the length of the worker queue. Scaling would be either using the horizontal autoscaler using custom metrics, or a simple controller that changes the number of replicas.

The problem with either solution is that, when scaling down, there is no control over which pod(s) get terminated. At any given time, most workers are likely working on short-running jobs or sitting idle, while a few (more rarely) are processing a long-running job. I'd like to avoid killing the workers handling long-running jobs; idle or short-running-job workers can be terminated without issue.

What would be a way to do this with low complexity? One thing I can think of is to do this based on CPU usage of the pods. Not ideal, but it could be good enough. Another method could be that workers somehow expose a priority indicating whether they are the preferred pod to be deleted. This priority could change every time a worker picks up a new job though.

Eventually all jobs will be short-running and this problem will go away, but that is a longer-term goal for now.

asked Apr 24 '19 by Stragulus

People also ask

How do you scale down a specific pod in Kubernetes?

When you scale down the number of replicas, the system chooses which pod to remove; there isn't a way to "hint" at which one you'd like removed. One thing you can do is change the labels on running pods, which affects their membership in the replication controller.
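As a concrete sketch of that label trick, assuming a hypothetical setup where the controller selects pods with a label such as app=worker and the pod name below is made up: removing the label detaches the pod from the controller, so a subsequent scale-down will not pick it (the controller will, however, start a replacement pod to restore the replica count).

```sh
# Remove the "app" label from one specific pod (the trailing "-" deletes the label).
# The pod keeps running, but it is no longer counted by the ReplicaSet/Deployment,
# so scaling the replicas down will not terminate it.
kubectl label pod my-worker-pod-abc123 app-
```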

How do you remove auto scaling in Kubernetes?

When you enable autoscaling, a HorizontalPodAutoscaler (HPA) object is created. You can delete it with: kubectl delete hpa NAME-OF-HPA .


1 Answer

I think running this type of workload using a Deployment or similar, and using a HorizontalPodAutoscaler for scaling, is the wrong way to go. One way you could go about this is to:

  1. Define a controller (this could perhaps be a Deployment) whose task is to periodically create a Kubernetes Job object.
  2. The spec of the Job should contain a value for .spec.parallelism equal to the maximum number of concurrent executions you will accept.
  3. The Pods spawned by the Job then run your processing logic. They should each pull a message from the queue, process it, and then delete it from the queue (in the case of success).
  4. Each worker process (i.e. each Pod) must exit with the correct status (success or failure). This is how the Job recognises when the processing has completed, and so it will not spin up additional Pods.

Using this method, .spec.parallelism controls how many workers run concurrently based on how much work there is to be done, and graceful scale-down is an automatic benefit of using a Job: each Pod exits on its own once the queue is drained, so nothing ever terminates a worker in the middle of a long-running job.
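A minimal sketch of such a Job, assuming a hypothetical worker image (queue-worker:latest) and queue URL, where each Pod pulls a message, processes it, deletes it from the queue, and exits once the queue is empty:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: queue-worker-    # the controller creates a fresh Job per batch of work
spec:
  parallelism: 5                 # at most 5 worker Pods run concurrently
  backoffLimit: 4                # retry failed Pods a few times before marking the Job failed
  template:
    spec:
      restartPolicy: OnFailure   # restart a worker in place if it crashes
      containers:
      - name: worker
        image: queue-worker:latest        # hypothetical image: pull, process, delete, exit when queue is empty
        env:
        - name: QUEUE_URL                 # hypothetical queue endpoint configuration
          value: "amqp://queue.example.internal"
```

For this work-queue pattern, .spec.completions is typically left unset; the Job is considered complete once at least one Pod has exited successfully and all Pods have terminated.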

answered Oct 21 '22 by Matt Dunn