How do you get logs from kube-system pods? Running kubectl logs pod_name_of_system_pod
does not work:
λ kubectl logs kube-dns-1301475494-91vzs
Error from server (NotFound): pods "kube-dns-1301475494-91vzs" not found
Here is the output from kubectl get pods:
λ kubectl get pods --all-namespaces
NAMESPACE     NAME                                             READY     STATUS    RESTARTS   AGE
default       alternating-platypus-rabbitmq-3309937619-ddl6b   1/1       Running   1          1d
kube-system   kube-addon-manager-minikube                      1/1       Running   1          1d
kube-system   kube-dns-1301475494-91vzs                        3/3       Running   3          1d
kube-system   kubernetes-dashboard-rvm78                       1/1       Running   1          1d
kube-system   tiller-deploy-3703072393-x7xgb                   1/1       Running   1          1d
To get logs from a pod's earlier run, add the -p (--previous) flag. kubectl will then return the logs stored for the pod's previously terminated container instance, including lines emitted before the container restarted.
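For example, the kubernetes-dashboard pod in the output above has restarted once, so a previous instance exists; combined with the namespace flag covered below, something like this should show its old logs (the pod name is the one from the question, yours will differ):

λ kubectl --namespace kube-system logs -p kubernetes-dashboard-rvm78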
These logs are usually located in the /var/log/containers directory on the host. If a container restarts, the kubelet keeps its logs on the node. To prevent logs from filling up all the available space on the node, Kubernetes has a log rotation policy in place.
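On minikube you can also inspect those files directly on the node. The exact file names under /var/log/containers are an assumption here (they normally combine the pod name, namespace, and container id) and can vary with the container runtime and Kubernetes version:

λ minikube ssh
$ ls /var/log/containers/
$ sudo tail -n 20 /var/log/containers/kubernetes-dashboard-rvm78_kube-system_*.log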
Kubernetes components use the klog logging library. Master (control plane) component logs can be read from the containers running on the master nodes. But this whole approach to logging is only suitable for testing; in production you should stream all the logs (app logs, container logs, cluster-level logs, everything) to a central logging system such as ELK or EFK.
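If the control plane components run as static pods in kube-system (this depends on how the cluster was set up; the pod names below are assumptions, not taken from the question's output), they can be read with the same kubectl logs approach:

λ kubectl --namespace kube-system get pods
λ kubectl --namespace kube-system logs kube-apiserver-minikube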
Use the --namespace parameter to kubectl:

λ kubectl --namespace kube-system logs kubernetes-dashboard-rvm78
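Note that kube-dns-1301475494-91vzs runs three containers (READY 3/3), so kubectl logs will ask you to pick one with -c. You can list the container names first; the kubedns name below is an assumption based on typical kube-dns pods of that era, check your own output:

λ kubectl --namespace kube-system get pod kube-dns-1301475494-91vzs -o jsonpath='{.spec.containers[*].name}'
λ kubectl --namespace kube-system logs kube-dns-1301475494-91vzs -c kubedns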