Stop all Pods in a StatefulSet before scaling it up or down

My team is currently migrating a Discord chat bot to Kubernetes. We plan to use a StatefulSet for the main bot service, since each shard (pod) should hold exactly one connection to the Discord Gateway. Whenever a shard connects to the Gateway, it reports its shard ID (in our case the pod's ordinal index) and the total number of shards we are running (the number of replicas in the StatefulSet).
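For context, here is a minimal sketch of how a pod could derive those two values at startup; the StatefulSet name bot, the env var TOTAL_SHARDS, and the bot.js entrypoint are placeholders, not anything confirmed by the question:

    #!/bin/sh
    # Hypothetical entrypoint: derive the shard ID from the pod's ordinal index.
    # Pods of a StatefulSet named "bot" are created as bot-0, bot-1, ...
    POD_NAME=$(hostname)
    SHARD_ID=${POD_NAME##*-}
    # Total shard count injected via the pod spec; must match spec.replicas.
    : "${TOTAL_SHARDS:?set TOTAL_SHARDS in the pod spec}"
    exec node bot.js --shard-id "$SHARD_ID" --shard-count "$TOTAL_SHARDS"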

Because every shard has to report the total shard count, scaling the StatefulSet up or down means stopping all of its pods before starting new ones with the updated value.

How can I achieve that? Preferably through configuration, so I don't have to run a special command each time.

asked May 28 '20 by Pedro Fracassi


People also ask

Can we scale down StatefulSet?

You cannot scale down a StatefulSet when any of the stateful Pods it manages is unhealthy. Scaling down only takes place after those stateful Pods become running and ready. If spec.replicas > 1, Kubernetes cannot determine the reason for an unhealthy Pod.

How do you stop all the pods in Kubernetes?

Stopping the Kubernetes cluster: stop all worker nodes, simultaneously or individually. After all the worker nodes are shut down, shut down the Kubernetes master node. Note: if the NFS server is on a different host than the Kubernetes master, you can shut down the Kubernetes master when you shut down the worker nodes.
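As a sketch of preparing each worker for shutdown (node names are placeholders; the second flag assumes a recent kubectl, where it replaced --delete-local-data):

    # Evict workloads from each worker node before powering it off.
    kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data
    kubectl drain worker-node-2 --ignore-daemonsets --delete-emptydir-data
    # Shut down the master/control-plane host last.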

How do you stop a stateful set?

You can delete a StatefulSet in the same way you delete other resources in Kubernetes: use the kubectl delete command, and specify the StatefulSet either by file or by name. You may need to delete the associated headless service separately after the StatefulSet itself is deleted.
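For example, assuming a StatefulSet and headless service both named web (a common convention, not taken from the question):

    kubectl delete statefulset web
    kubectl delete service web
    # Or delete by file:
    kubectl delete -f statefulset.yaml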

Does deleting StatefulSet delete PVC?

The PVC is deleted only when the replica is no longer needed as signified by a scale-down or StatefulSet deletion. This use case is for when data does not need to live beyond the life of its replica.
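That behavior corresponds to the StatefulSet persistentVolumeClaimRetentionPolicy field. A sketch, assuming a cluster version where that (originally alpha/beta) field is enabled and a StatefulSet named web:

    # Delete PVCs on scale-down and on StatefulSet deletion (default is Retain).
    kubectl patch statefulset web -p '{"spec":{"persistentVolumeClaimRetentionPolicy":{"whenScaled":"Delete","whenDeleted":"Delete"}}}'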


1 Answer

Try the kubectl rollout restart sts <sts name> command. It restarts the pods one by one, following the RollingUpdate strategy. Note that a rolling restart never has all pods down at once, so when every shard must come back with a new total, scale to zero first:

Scale the StatefulSet down: kubectl scale --replicas=0 sts <sts name>

Scale it back up: kubectl scale --replicas=<number of replicas> sts <sts name>
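Put together, a minimal sketch of the stop-everything-then-restart sequence (the name bot, the label app=bot, and the replica count are placeholders for your StatefulSet):

    # Stop every shard and wait until the pods are actually gone.
    kubectl scale sts bot --replicas=0
    kubectl wait --for=delete pod -l app=bot --timeout=120s
    # Start the new shard count; each pod reconnects with the updated total.
    kubectl scale sts bot --replicas=5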

answered Nov 07 '22 by hariK