I'd like to keep the number of cores in my GKE cluster below 3. This becomes much more feasible if the CPU limits of the K8s replication controllers and pods are reduced from 100m to at most 50m. Otherwise, the K8s pods alone take 70% of one core.
I decided against increasing the CPU power of a node; that would be conceptually wrong in my opinion, because the CPU limit is defined to be measured in cores. Instead, I manually lowered the CPU limits on the existing objects, including the ones in the kube-system namespace.
This is a lot of work and probably fragile: any further change in an upcoming version of K8s, or in the GKE configuration, may break it.
So, is there a better way?
Specify a CPU request and a CPU limit: to specify a CPU request for a container, include the resources:requests field in the container's resource manifest; to specify a CPU limit, include resources:limits.
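As a minimal sketch, a Pod manifest along those lines might look like this (the name and image are placeholders; the 50m value is the target from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo          # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx          # any image works; nginx is just an example
    resources:
      requests:
        cpu: 50m          # the scheduler reserves 1/20 of a core for this container
      limits:
        cpu: 50m          # the container is throttled above this amount
```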
CPU limits and throttling: Google, among others, highly recommends setting CPU limits. The danger of not setting a CPU limit is that the containers running on a node could exhaust all available CPU. This can trigger a cascade of unwanted events, such as key Kubernetes processes (like the kubelet) becoming unresponsive.
If a container exceeds its memory request and the node that it runs on becomes short of memory overall, it is likely that the Pod the container belongs to will be evicted. A container might or might not be allowed to exceed its CPU limit for extended periods of time.
New Kubernetes clusters have a single predefined LimitRange named "limits" in the default namespace, with the CPU limit set to 100m (that's 1/10 of a CPU core). Other namespaces don't have a default LimitRange, but you can create one yourself. There may be multiple LimitRanges per namespace.
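For reference, such a LimitRange looks roughly like this (a sketch; the exact fields and values on your cluster may differ, so check with kubectl describe limitrange limits --namespace=default):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: default
spec:
  limits:
  - type: Container
    default:              # default CPU *limit* applied to containers that don't set one
      cpu: 100m
    defaultRequest:       # default CPU *request* (the field discussed below)
      cpu: 100m
```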
Changing the default Namespace's LimitRange spec.limits.defaultRequest.cpu should be a legitimate way to change the default for new Pods. Note that LimitRange objects are namespaced, so if you use extra Namespaces, you probably want to think about what a sane default is for each of them.
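A minimal sketch of that change, using the 50m target from the question (save it as, say, limits.yaml and run kubectl apply -f limits.yaml, or make the equivalent edit with kubectl edit limitrange limits; the filename is arbitrary):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 50m            # new Pods without an explicit request now default to 50m
```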
As you point out, this will not affect existing objects or objects in the kube-system Namespace.
The objects in the kube-system Namespace were mostly sized empirically, based on observed values. Changing those might have detrimental effects, but maybe not if your cluster is very small.
We have an open issue (https://github.com/kubernetes/kubernetes/issues/13048) to adjust the kube-system requests based on total cluster size, but that is not implemented yet. We have another open issue (https://github.com/kubernetes/kubernetes/issues/13695) to perhaps use a lower QoS for some kube-system resources, but again, that is not implemented yet.
Of these, I think that #13048 is the right way to implement what you're asking for. For now, the answer to "is there a better way" is sadly "no". We chose defaults for medium-sized clusters; for very small clusters you probably need to do what you are doing.