
Is it possible to add swap space on kubernetes nodes?

Tags:

kubernetes

I am trying to add swap space on a Kubernetes node to prevent it from running out of memory. Is it possible to add swap space on a node (previously known as a minion)? If so, what procedure should I follow, and how does it affect the pod acceptance test?

asked Apr 09 '16 by degendra

People also ask

Does Kubernetes support swap?

LimitedSwap (default): Kubernetes workloads are limited in how much swap they can use. Workloads on the node not managed by Kubernetes can still swap. UnlimitedSwap : Kubernetes workloads can use as much swap memory as they request, up to the system limit.
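For reference, these behaviors are selected in the kubelet configuration. A minimal sketch, assuming a Kubernetes version where the NodeSwap feature gate and the memorySwap field are available, and swap already enabled on the node:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false             # allow the kubelet to start with swap enabled
featureGates:
  NodeSwap: true              # needed while NodeSwap is not enabled by default
memorySwap:
  swapBehavior: LimitedSwap   # or UnlimitedSwap, as described above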

What happens if swap space is full?

Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM.
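For completeness, swap space itself is added at the OS level, not through Kubernetes. A minimal sketch for creating a swap file on a typical Linux node (size and path are examples):

fallocate -l 2G /swapfile                        # allocate a 2 GiB file
chmod 600 /swapfile                              # restrict permissions
mkswap /swapfile                                 # format it as swap
swapon /swapfile                                 # enable it immediately
swapon --show && free -h                         # verify
echo '/swapfile none swap sw 0 0' >> /etc/fstab  # persist across reboots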

Can multiple pods run on a node?

A Node can have multiple pods, and the Kubernetes control plane automatically handles scheduling the pods across the Nodes in the cluster.
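If you want to check which pods have landed on a particular node (for example when judging memory pressure), something like the following works; the node name is a placeholder:

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>
kubectl describe node <node-name>   # shows allocated resources, conditions and non-terminated pods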


2 Answers

Kubernetes doesn't support container memory swap. Even if you add swap space, kubelet will create the container with --memory-swappiness=0 (when using Docker). There have been discussions about adding support, but the proposal was not approved. https://github.com/kubernetes/kubernetes/issues/7294
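On a Docker-based node you can verify this yourself; a quick check, assuming you have a container ID from docker ps:

docker inspect --format '{{.HostConfig.MemorySwappiness}}' <container-id>
# should print 0 for containers created by the kubelet, i.e. the kernel is told not to swap them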

answered Oct 21 '22 by Yu-Ju Hong


Technically you can do it.
There is a broad discussion about whether to give K8S users the privilege to decide on enabling swap or not.

I'll first refer directly to your question and then continue with the discussion.

If you run K8S with kubeadm and you've already added swap to your nodes, follow the steps below:

1 ) Reset the current cluster setup and then add the --fail-swap-on=false flag to the kubelet configuration:

kubeadm reset 
echo 'Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"' >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

(*) If you're running on Ubuntu, replace the kubelet config path from /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to /etc/default/kubelet.

2 ) Reload the service:

systemctl daemon-reload
systemctl restart kubelet    

3 ) Initialize the cluster settings again and ignore the swap error:

kubeadm init --ignore-preflight-errors Swap

OR:

If you prefer working with kubeadm-config.yaml:

1 ) Add the failSwapOn flag:

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false # <---- Here

2 ) And run:

kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=Swap
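For context, the KubeletConfiguration document typically lives in the same kubeadm-config.yaml alongside the ClusterConfiguration. A minimal sketch (apiVersion and kubernetesVersion are placeholders and depend on your kubeadm release):

---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0   # placeholder
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false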

Returning to the discussion of whether to allow swapping or not.

On the one hand, K8S is very clear about this - the kubelet is not designed to support swap - you can see it mentioned in the kubeadm installation docs:

Swap disabled. You MUST disable swap in order for the kubelet to work properly

On the other hand, you can see users reporting that there are cases where their deployments require swap to be enabled.

I would suggest that you first try without enabling swap.
(Not because swap is something the kernel can't manage, but merely because it is not recommended by Kubernetes - probably related to the design of the kubelet).

Make sure that you are familiar with the features that K8S provides to prioritize memory of pods:

1 ) The 3 QoS classes - make sure that your high-priority workloads are running with the Guaranteed (or at least Burstable) class (see the manifest sketch after this list).

2 ) Pod Priority and Preemption.
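As a reference for (1), a Pod is assigned the Guaranteed QoS class only when every container specifies requests equal to limits for both CPU and memory. A minimal sketch (name, image, and sizes are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-example   # placeholder name
spec:
  containers:
  - name: app
    image: nginx             # placeholder image
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "500m"          # limits equal to requests -> Guaranteed QoS
        memory: "256Mi"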

I would recommend also reading Evicting end-user Pods:

If the kubelet is unable to reclaim sufficient resource on the node, kubelet begins evicting Pods.

The kubelet ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests, then by Priority, and then by the consumption of the starved compute resource relative to the Pods' scheduling requests.

As a result, kubelet ranks and evicts Pods in the following order:

  • BestEffort or Burstable Pods whose usage of a starved resource exceeds its request. Such pods are ranked by Priority, and then usage above request.

  • Guaranteed pods and Burstable pods whose usage is beneath requests are evicted last. Guaranteed Pods are guaranteed only when requests and limits are specified for all the containers and they are equal. Such pods are guaranteed to never be evicted because of another Pod's resource consumption. If a system daemon (such as kubelet, docker, and journald) is consuming more resources than were reserved via system-reserved or kube-reserved allocations, and the node only has Guaranteed or Burstable Pods using less than requests remaining, then the node must choose to evict such a Pod in order to preserve node stability and to limit the impact of the unexpected consumption to other Pods. In this case, it will choose to evict pods of Lowest Priority first.
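Related to the eviction behavior quoted above, the thresholds that trigger it can be tuned in the kubelet configuration. A minimal sketch with illustrative values (not recommendations):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"   # start evicting when free memory drops below this (example value)
  nodefs.available: "10%"     # example value for the node filesystem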

Good luck (:


A few relevant discussions:

Kubelet/Kubernetes should work with Swap Enabled

[ERROR Swap]: running with swap on is not supported. Please disable swap

Kubelet needs to allow configuration of container memory-swap

answered Oct 21 '22 by RtmY