I have created the following pod-definition.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: server
spec:
  containers:
  - name: nginx-container
    image: nginx
The linter is giving this warning:
One or more containers do not have resource limits - this could starve other processes
For example, take a Pod with two containers, where each container has a request of 0.25 CPU and 64MiB of memory, and a limit of 0.5 CPU and 128MiB of memory. You can then say the Pod as a whole has a request of 0.5 CPU and 128MiB of memory, and a limit of 1 CPU and 256MiB of memory.
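A minimal sketch of that arithmetic (the Pod and container names here are made up for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod      # hypothetical name
spec:
  containers:
  - name: app                  # hypothetical container
    image: nginx
    resources:
      requests:
        cpu: "0.25"
        memory: 64Mi
      limits:
        cpu: "0.5"
        memory: 128Mi
  - name: sidecar              # hypothetical container
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "0.25"
        memory: 64Mi
      limits:
        cpu: "0.5"
        memory: 128Mi

Summing the two containers gives the Pod totals: a request of 0.5 CPU / 128Mi and a limit of 1 CPU / 256Mi.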
Kubernetes doesn't provide default resource limits out-of-the-box. This means that unless you explicitly define limits, your containers can consume unlimited CPU and memory.
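If you do want defaults, Kubernetes can apply them per namespace through a LimitRange object. A minimal sketch, assuming the default namespace and made-up values:

apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults     # hypothetical name
  namespace: default           # assumed namespace
spec:
  limits:
  - type: Container
    default:                   # used as the limit when a container declares none
      cpu: 500m
      memory: 128Mi
    defaultRequest:            # used as the request when a container declares none
      cpu: 250m
      memory: 64Mi

With this in place, containers created in that namespace without explicit values get these requests and limits filled in automatically.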
Containers without Kubernetes resource limits can have serious consequences for your nodes. In the best case, the nodes will start evicting pods in order of their eviction score. Pods can also suffer performance issues due to CPU throttling.
Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value.
Including a resource request ensures that the container will receive at least what is requested. When a container includes a resource request, the scheduler can ensure that the node to which it assigns the pod will guarantee the availability of the necessary resources.
Similarly, if a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit. By configuring the CPU requests and limits of the Containers that run in your cluster, you can make efficient use of the CPU resources available on your cluster Nodes.
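A small sketch of that defaulting behavior (the Pod name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: limit-only-pod         # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        memory: 256Mi          # no request specified; Kubernetes sets the memory request to 256Mi too

You can check what was actually assigned with:

kubectl get pod limit-only-pod -o jsonpath='{.spec.containers[0].resources}'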
It is good practice to declare resource requests and limits for both memory and CPU for each container. This helps the scheduler place the container on a node that has the resources your Pod needs, and it also keeps your Pod from consuming resources that other Pods need - hence the "this could starve other processes" message.
For example, to add resource requests and limits to your Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: server
spec:
  containers:
  - name: nginx-container
    image: nginx
    resources:
      limits:
        memory: 512Mi
        cpu: "1"
      requests:
        memory: 256Mi
        cpu: "0.2"
As you know, that warning comes from the linter in the VS Code extension ms-kubernetes-tools.vscode-kubernetes-tools. If you want to disable the warning

One or more containers do not have resource limits - this could starve other processes

then edit VS Code's settings.json to look like this:
{
  "vs-kubernetes": {
    "disable-linters": ["resource-limits"],
    ...
  },
  ...
}
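The same "vs-kubernetes" block should also work in a project-level .vscode/settings.json if you only want to silence the warning for a single workspace.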
I was working with my YAML object files. Previously I had every single object in a separate file, and I recently noticed that for a "Deployment" object file I get the following linting warning:
One or more containers do not have resource limits - this could starve other processes
Before fixing that issue, I decided to refactor my object definitions a bit and define more than one object in a single file when they are related. So now I have the same Deployment as before, along with a Volume Claim and a Service, all in the same file.
But then I noticed that the linting warning doesn't show up for the Deployment, although it does show up if I delete the Service and Volume Claim from the file, leaving the Deployment alone.
So I suppose the linting code is not taking into account the possibility of having multiple object definitions per file.
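For reference, the layout that reproduces this is a single multi-document file along these lines (names and values are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx           # no resources block, so the linter should flag this container
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80

With all three documents present the warning disappears; with only the Deployment in the file it comes back.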
Thanks!