I can see the logs for a particular pod by running 'kubectl logs podName'. I have also seen that 'logs' has a --log-dir flag, but it doesn't seem to work. Is there some kind of configuration I can change so that logs are saved to a particular file on my host machine?
On Windows nodes, the kubelet by default writes logs to files within the directory C:\var\logs (notice that this is not C:\var\log). Although C:\var\log is the Kubernetes default location for these logs, several cluster deployment tools set up Windows nodes to log to C:\var\log\kubelet instead.
On Linux nodes, these logs are usually located in the /var/log/containers directory on your host. If a container restarts, the kubelet keeps the logs of the terminated container on the node. To prevent logs from filling up all the available space on the node, Kubernetes has a log rotation policy in place.
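If you have shell access to the node, you can inspect those files directly. A rough sketch, assuming the default /var/log/containers layout (the exact filename pattern and paths can vary with your container runtime and deployment tool):

```shell
# List the container log files present on this node.
# Entries are typically symlinks into /var/log/pods/.
ls -l /var/log/containers/

# Tail the log file for a specific pod/container.
# <pod>, <namespace>, and <container> are placeholders you fill in;
# the files are named roughly <pod>_<namespace>_<container>-<hash>.log.
tail -n 50 /var/log/containers/<pod>_<namespace>_<container>-*.log
```

These commands must be run on the node itself (e.g. over SSH), not from your workstation, since the files live on the node's filesystem.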
You can redirect the output of kubectl logs to a file:

kubectl logs pod_name > app.log

For example, if you have a pod named app-6b8bdd458b-kskjh and you intend to save the logs from this pod to a file named app.log, then the command should be

kubectl logs app-6b8bdd458b-kskjh > app.log
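A few variations on the same idea, using the example pod name from above (these assume you have kubectl configured against a running cluster):

```shell
# Save the current logs to a file.
kubectl logs app-6b8bdd458b-kskjh > app.log

# Stream logs continuously, writing to the file and the terminal at once.
kubectl logs -f app-6b8bdd458b-kskjh | tee app.log

# Logs from the previous container instance (useful after a crash/restart,
# since the kubelet keeps the terminated container's logs on the node).
kubectl logs --previous app-6b8bdd458b-kskjh > app-previous.log

# Only the last hour of logs, from a specific container in the pod.
kubectl logs --since=1h -c my-container app-6b8bdd458b-kskjh > app.log
```

Note that redirection is a one-shot snapshot; use -f with tee if you want the file to keep growing as the pod logs.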
Note: this may not be a direct answer to how to make the logs go to the host machine, but it does provide a way to get logs onto the machine. It may not be the best answer, but it's what I know, so I'm sharing. Please answer if you have a more direct solution.
You can do this using Fluentd. Here's a tutorial about how to set it up. You can configure it to write to a file in a mounted host directory, or have it write to S3. It also allows you to aggregate the logs from all your containers, which may or may not be useful. Combined with Elasticsearch and Kibana, you can put together a pretty strong logging stack. It will depend on your use case, of course.
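As a rough sketch of the file-output part, a Fluentd match section using the out_file plugin might look like the following. The path /mnt/host-logs is an assumed hostPath volume mounted into the Fluentd pod, and the kubernetes.** tag pattern assumes the common Kubernetes metadata tagging; adjust both to your setup:

```
<match kubernetes.**>
  @type file
  path /mnt/host-logs/app
  append true
  <buffer>
    timekey 1h
    timekey_wait 5m
  </buffer>
</match>
```

Swapping @type file for @type s3 (with the fluent-plugin-s3 plugin installed and credentials configured) is how you would send the same stream to S3 instead.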