I am creating an InfluxDB deployment in a Kubernetes cluster (v1.15.2). This is my YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
And this is the deployment status:
$ kubectl get deployment -n kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
coredns                1/1     1            1           163d
kubernetes-dashboard   1/1     1            1           164d
monitoring-grafana     0/1     0            0           12m
monitoring-influxdb    0/1     0            0           11m
I have now been waiting 30 minutes and there is still no pod available. How do I check the deployment logs from the command line? I cannot access the Kubernetes dashboard at the moment, and I am looking for a command to get the pod logs, but no pod exists yet. I already tried adding a label to the node:
kubectl label nodes azshara-k8s03 k8s-app=influxdb
This is the kubectl describe output for the deployment:
$ kubectl describe deployments monitoring-influxdb -n kube-system
Name:                   monitoring-influxdb
Namespace:              kube-system
CreationTimestamp:      Wed, 04 Mar 2020 11:15:52 +0800
Labels:                 k8s-app=influxdb
                        task=monitoring
Annotations:            kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"monitoring-influxdb","namespace":"kube-system"...
Selector:               k8s-app=influxdb,task=monitoring
Replicas:               1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  k8s-app=influxdb
           task=monitoring
  Containers:
   influxdb:
    Image:        registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /data from influxdb-storage (rw)
  Volumes:
   influxdb-storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
OldReplicaSets:  <none>
NewReplicaSet:   <none>
Events:          <none>
I also tried getting the logs another way:
$ kubectl -n kube-system logs -f deployment/monitoring-influxdb
error: timed out waiting for the condition
There is no output for this command:
kubectl logs --selector k8s-app=influxdb
These are all my pods in the kube-system namespace:
$ kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-569fd64d84-5q5pj              1/1     Running   0          46h
kubernetes-dashboard-6466b68b-z6z78   1/1     Running   0          11h
traefik-ingress-controller-hx4xd      1/1     Running   0          11h
Checking the logs of a running pod: all you need to do is run kubectl logs <pod-name>, for example kubectl logs nginx-7d8b49557c-c2lx9.
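The pod name is not known here yet, so one way (a sketch, assuming the k8s-app=influxdb label from the manifest above and the kube-system namespace) is to list the pods matching the deployment's label selector first and then read their logs:

# List the pods created for the deployment via its label selector
kubectl get pods -n kube-system -l k8s-app=influxdb -o wide

# Read the logs of every matching pod (this only returns output once a pod exists)
kubectl logs -n kube-system -l k8s-app=influxdb --all-containers=true

Note that without -n kube-system the selector is evaluated against the default namespace, which is one reason the earlier kubectl logs --selector k8s-app=influxdb call can return nothing.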
Another place to look is the kubelet log on the node. Accessing it depends on your node OS: on some OSes it is a file, such as /var/log/kubelet.log, while other OSes use journalctl to access logs.
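As a sketch, assuming a systemd-based node OS (run this on the node itself, e.g. azshara-k8s03):

# View kubelet logs via journald on systemd-based nodes
sudo journalctl -u kubelet --since "30 min ago"

# Or, on nodes that write to a log file instead
sudo tail -n 200 /var/log/kubelet.log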
To see output from a container that has already terminated, add the -p (--previous) flag to kubectl logs. Kubectl will then print the logs stored for the previous instance of the container, which is useful when a pod is crash-looping.
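For example (a sketch; the pod name is a placeholder, since no influxdb pod has been created here yet):

# Logs from the previous, terminated instance of the pod's container
kubectl logs <influxdb-pod-name> -n kube-system --previous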
To get all of the logs, set --tail to -1. Add -f (or --follow) to keep following the logs as they are written. If you don't need all of the logs, change the value of the --tail option.
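For instance, a sketch combining these flags against a single pod (again, the pod name is a placeholder):

# Print every stored log line and keep following new output
kubectl logs <pod-name> -n kube-system --tail=-1 -f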
kubectl logs deployment/<name-of-deployment>      # logs of deployment
kubectl logs -f deployment/<name-of-deployment>   # follow logs
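In your case, though, the describe output shows 0 total replicas, NewReplicaSet: <none> and no events, so there may be no pod at all for any of these commands to read yet. As a sketch (not a guaranteed fix), you could first check whether a ReplicaSet was created and look at recent events in the namespace:

# Was a ReplicaSet created for the deployment?
kubectl get rs -n kube-system -l k8s-app=influxdb

# Recent events in the namespace, sorted with the newest last
kubectl get events -n kube-system --sort-by=.lastTimestamp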