 

How can I distribute a deployment across nodes?

I have a Kubernetes deployment that looks something like this (replaced names and other things with '....'):

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "3"
    kubernetes.io/change-cause: kubectl replace deployment .... -f - --record
  creationTimestamp: 2016-08-20T03:46:28Z
  generation: 8
  labels:
    app: ....
  name: ....
  namespace: default
  resourceVersion: "369219"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/....
  uid: aceb2a9e-6688-11e6-b5fc-42010af000c1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ....
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ....
    spec:
      containers:
      - image: gcr.io/..../....:0.2.1
        imagePullPolicy: IfNotPresent
        name: ....
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          requests:
            cpu: "0"
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  observedGeneration: 8
  replicas: 2
  updatedReplicas: 2

The problem I'm observing is that Kubernetes places both replicas (in the deployment I've asked for two) on the same node. If that node goes down, I lose both containers and the service goes offline.

What I want Kubernetes to do is to ensure that it doesn't place identical containers on the same node - doubling up this way only consumes resources and doesn't provide any redundancy. I've looked through the documentation on deployments, replica sets, nodes, etc., but I couldn't find any option that would let me tell Kubernetes to do this.

Is there a way to tell Kubernetes how much redundancy across nodes I want for a container?

EDIT: I'm not sure labels will work; labels constrain where a pod will run so that it has access to local resources (SSDs), etc. All I want to do is ensure no downtime if a node goes offline.

asked Aug 23 '16 by June Rhodes

People also ask

How do you distribute pods evenly across nodes?

In order to distribute pods across all cluster worker nodes in an absolutely even manner, we can use the well-known node label kubernetes.io/hostname as a topology domain, which ensures each worker node is its own topology domain.
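For reference, a minimal pod-spec fragment along those lines (the app: web label here is only a hypothetical example, not taken from the question):

# Spread matching pods evenly; each worker node is its own topology domain
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: web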

Can a pod run across multiple nodes?

So, no, by definition: since all containers in a pod are scheduled on the same node, a pod cannot span nodes.

Do all containers in a pod run on the same node?

The key thing about pods is that when a pod contains multiple containers, all of them always run on a single worker node - a pod never spans multiple worker nodes.

How do you cordon node Kubernetes?

When a node is cordoned, no new pods can be scheduled on it. Node draining is the Kubernetes process that safely evicts pods from a node; for example, the 'kubectl drain minikube' command removes all pods from that node.
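For example (the <node-name> placeholder is hypothetical; substitute the name shown by kubectl get nodes):

kubectl cordon <node-name>                       # mark the node unschedulable; no new pods land on it
kubectl drain <node-name> --ignore-daemonsets    # evict existing pods, skipping DaemonSet-managed ones
kubectl uncordon <node-name>                     # make the node schedulable again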


2 Answers

There is now a proper way of doing this: topology spread constraints. You can use the kubernetes.io/hostname label as the topology key if you just want to spread the pods across all nodes. That means if you have two replicas of a pod and two nodes, each node should get one replica, as long as the node names differ.

Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-service
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
answered Sep 18 '22 by Anton Blomström


I think you're looking for the Affinity/Anti-Affinity Selectors.

Affinity is for co-locating pods: for example, "I want my website to try to schedule on the same host as my cache." Anti-affinity is the opposite: it keeps pods off a host according to a set of rules.

So for what you're doing, I would take a closer look at these two links: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#never-co-located-in-the-same-node

https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure
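For illustration, a minimal sketch of the anti-affinity approach, reusing the hypothetical my-service name from the first answer and the question's image placeholder. A hard (requiredDuringSchedulingIgnoredDuringExecution) rule with the kubernetes.io/hostname topology key prevents two matching pods from landing on the same node:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      affinity:
        podAntiAffinity:
          # hard rule: never schedule two pods with app=my-service on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-service
            topologyKey: kubernetes.io/hostname
      containers:
      - name: my-service
        image: gcr.io/..../....:0.2.1

With preferredDuringSchedulingIgnoredDuringExecution instead, the scheduler will still place a pod when no other node is available, which avoids unschedulable replicas on a small cluster at the cost of occasionally co-locating them.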

answered Sep 21 '22 by Kevin Nisbet