The configuration for the cgroup driver is correct in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
I also checked the Environment from the CLI:
$ systemctl show --property=Environment kubelet | cat
Environment=KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf\x20--require-kubeconfig=true KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests\x20--allow-privileged=true KUBELET_NETWORK_ARGS=--network-plugin=cni\x20--cni-conf-dir=/etc/cni/net.d\x20--cni-bin-dir=/opt/cni/bin KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10\x20--cluster-domain=cluster.local KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook\x20--client-ca-file=/etc/kubernetes/pki/ca.crt KUBELET_CADVISOR_ARGS=--cadvisor-port=0 KUBELET_CGROUP_ARGS=--cgroup-driver=systemd
KUBELET_CGROUP_ARGS=--cgroup-driver=systemd
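If you only want the cgroup-driver setting out of that long Environment line, you can split it on spaces and filter. This is just a convenience sketch based on the output format shown above:
systemctl show --property=Environment kubelet | tr ' ' '\n' | grep cgroup-driver
# prints: KUBELET_CGROUP_ARGS=--cgroup-driver=systemd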
How to reproduce it:
Environment:
Kubernetes version (kubectl version): 1.7.3
Kernel (uname -a): Linux 10-8-108-92 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Cgroup drivers. On Linux, control groups are used to constrain resources that are allocated to processes. Both the kubelet and the underlying container runtime need to interface with control groups to enforce resource management for pods and containers and to set resources such as CPU/memory requests and limits.
When you use the resource management capabilities in Kubernetes, such as configuring requests and limits for Pods and containers, Kubernetes uses cgroups to enforce your resource requests and limits. The Linux kernel offers two versions of cgroups: cgroup v1 and cgroup v2.
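If you are unsure which cgroup version your host is running, one quick check (a sketch, not part of the original report) is to look at the filesystem type mounted at /sys/fs/cgroup:
stat -fc %T /sys/fs/cgroup/
# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1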
In my environment it only worked the other way around: setting systemd always results in an error. Here is my current setup:
OS: CentOS 7.6.1810
Minikube Version v1.0.0
Docker Version 18.06.2-ce
The solution for me was:
Check /etc/docker/daemon.json
and change systemd to cgroupfs
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
Then reload systemd: systemctl daemon-reload
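Note that edits to /etc/docker/daemon.json only take effect once the Docker daemon itself is restarted, so you will typically also want to run:
sudo systemctl restart docker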
Delete the previous minikube config: minikube delete
and start minikube again: minikube start --vm-driver=none
Now check on the command line; cgroupfs should appear in both of the following outputs:
docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
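For reference, matching outputs would look roughly like this (illustrative only; your exact values may differ):
Cgroup Driver: cgroupfs
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"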
In the end you should see:
kubectl is now configured to use "minikube"
Done! Thank you for using minikube!
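As an optional sanity check (not part of the original steps), you can confirm the cluster is actually reachable and healthy:
kubectl get nodes
kubectl get pods -n kube-system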
A simpler solution: start minikube with the extra-config parameter
--extra-config=kubelet.cgroup-driver=systemd
The complete command to start minikube is:
minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd
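Since --vm-driver=none runs the kubelet directly on the host, you can verify that the extra-config flag was actually applied, for example with:
pgrep -a kubelet | tr ' ' '\n' | grep cgroup-driver
# should print: --cgroup-driver=systemd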
All the best and have fun
kubelet 1.7.3 not reading config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf #50748
Troubleshooting kubeadm
If you are using CentOS and encounter difficulty while setting up the master node, verify that your Docker cgroup driver matches the kubelet config:
docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
If the Docker cgroup driver and the kubelet config don’t match, change the kubelet config to match the Docker cgroup driver. The flag you need to change is --cgroup-driver. If it’s already set, you can update like so:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Alternatively, this can be replaced with a version that detects the Docker cgroup driver automatically:
CG=$(sudo docker info 2>/dev/null | sed -n 's/Cgroup Driver: \(.*\)/\1/p')
sed -i "s/cgroup-driver=systemd/cgroup-driver=$CG/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf