I have a set of daemons I need to run. Generally they do not consume much memory or CPU, so I have set their limits to cpu: 150m and memory: 150Mi.
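For reference, the resources stanza described above would look roughly like this in each pod spec (a minimal sketch of the config as stated):

resources:
  limits:
    cpu: 150m      # CPU over the limit is throttled
    memory: 150Mi  # memory over the limit triggers an OOM kill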
Occasionally they spike well above these limits, and this seems to be causing evictions and an unstable node.
It is critical that the daemons keep running 24/7, even if they are throttled on CPU and/or memory when they spike. Is it possible to prevent their eviction while still capping their resources?
As I understand it, CPU usage over the limit is throttled, but going over the memory limit results in an OOM kill and eviction. Is there any way to prevent this eviction?
As of Kubernetes 1.11, you can set pod priorities (the scheduling.k8s.io/v1 API used below is GA as of 1.14). Under node resource pressure, the kubelet generally evicts lower-priority pods before higher-priority ones.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  priorityClassName: high-priority
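Since the question is about daemons, the same field goes in a DaemonSet's pod template. A minimal sketch, assuming the high-priority class above (the name and image are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemon            # placeholder name
spec:
  selector:
    matchLabels:
      app: my-daemon
  template:
    metadata:
      labels:
        app: my-daemon
    spec:
      priorityClassName: high-priority   # the class defined above
      containers:
      - name: my-daemon
        image: my-daemon:1.0             # placeholder image
        resources:
          requests:                      # requests == limits gives Guaranteed QoS
            cpu: 150m
            memory: 150Mi
          limits:
            cpu: 150m
            memory: 150Mi

Note that setting requests equal to limits gives the pods Guaranteed QoS, which makes them among the last candidates for node-pressure eviction. For truly critical daemons, the built-in system-node-critical and system-cluster-critical classes are also an option.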
Sounds like you need to track resource consumption trends with something like Prometheus + Grafana to see what sort of spikes to expect from your DaemonSets.
Then you can allocate more resources to these pods, or remove the limits config altogether (which, by default, leaves them unbounded). But of course you don't want to risk a full node/host crash, so you can consider tweaking your eviction thresholds:
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#eviction-thresholds
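For example, eviction thresholds can be set in the kubelet configuration file. A sketch, with illustrative threshold values:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Hard threshold: pods are evicted immediately once crossed.
evictionHard:
  memory.available: "200Mi"
# Soft threshold: pods are evicted only after the grace period elapses.
evictionSoft:
  memory.available: "500Mi"
evictionSoftGracePeriod:
  memory.available: "1m30s"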
More details: https://kubernetes-v1-4.github.io/docs/admin/limitrange/
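That page covers LimitRange objects, which let you set namespace-wide default requests and limits instead of repeating them in every pod spec. A minimal sketch with illustrative values:

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-defaults    # hypothetical name
spec:
  limits:
  - type: Container
    defaultRequest:          # applied when a container sets no request
      cpu: 100m
      memory: 150Mi
    default:                 # applied when a container sets no limit
      cpu: 200m
      memory: 256Mi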