
kubectl logs returns nothing (blank)

kubectl logs web-deployment-76789f7f64-s2b4r

returns nothing! The console prompt returns without error.

I have a pod which is in a CrashLoopBackOff cycle (but I am unable to diagnose it):

web-deployment-7f985968dc-rhx52       0/1       CrashLoopBackOff   6          7m

I am using Azure AKS with kubectl on Windows. I have been running this cluster for a few months without problems. The container runs fine on my workstation with docker-compose.

kubectl describe doesn't really help much - no useful information there.

kubectl describe pod web-deployment-76789f7f64-s2b4r

Name:           web-deployment-76789f7f64-j6z5h
Namespace:      default
Node:           aks-nodepool1-35657602-0/10.240.0.4
Start Time:     Thu, 10 Jan 2019 18:58:35 +0000
Labels:         app=stweb
                pod-template-hash=3234593920
Annotations:    <none>
Status:         Running
IP:             10.244.0.25
Controlled By:  ReplicaSet/web-deployment-76789f7f64
Containers:
  stweb:
    Container ID:   docker://d1e184a49931bd01804ace51cb44bb4e3479786ec0df6e406546bfb27ab84e31
    Image:          virasana/stwebapi:2.0.20190110.20
    Image ID:       docker-pullable://virasana/stwebapi@sha256:2a1405f30c358f1b2a2579c5f3cc19b7d3cc8e19e9e6dc0061bebb732a05d394
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 10 Jan 2019 18:59:27 +0000
      Finished:     Thu, 10 Jan 2019 18:59:27 +0000
    Ready:          False
    Restart Count:  3
    Environment:
      SUPPORT_TICKET_DEPLOY_DB_CONN_STRING_AUTH:  <set to the key 'SUPPORT_TICKET_DEPLOY_DB_CONN_STRING_AUTH' in secret 'mssql'>  Optional: false
      SUPPORT_TICKET_DEPLOY_DB_CONN_STRING:       <set to the key 'SUPPORT_TICKET_DEPLOY_DB_CONN_STRING' in secret 'mssql'>       Optional: false
      SUPPORT_TICKET_DEPLOY_JWT_SECRET:           <set to the key 'SUPPORT_TICKET_DEPLOY_JWT_SECRET' in secret 'mssql'>           Optional: false
      KUBERNETES_PORT_443_TCP_ADDR:               kscluster-rgksk8s-2cfe9c-8af10e3f.hcp.eastus.azmk8s.io
      KUBERNETES_PORT:                            tcp://kscluster-rgksk8s-2cfe9c-8af10e3f.hcp.eastus.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:                    tcp://kscluster-rgksk8s-2cfe9c-8af10e3f.hcp.eastus.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:                    kscluster-rgksk8s-2cfe9c-8af10e3f.hcp.eastus.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-98c7q (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  default-token-98c7q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-98c7q
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age               From                               Message
  ----     ------                 ----              ----                               -------
  Normal   Scheduled              1m                default-scheduler                  Successfully assigned web-deployment-76789f7f64-j6z5h to aks-nodepool1-35657602-0
  Normal   SuccessfulMountVolume  1m                kubelet, aks-nodepool1-35657602-0  MountVolume.SetUp succeeded for volume "default-token-98c7q"
  Normal   Pulled                 24s (x4 over 1m)  kubelet, aks-nodepool1-35657602-0  Container image "virasana/stwebapi:2.0.20190110.20" already present on machine
  Normal   Created                22s (x4 over 1m)  kubelet, aks-nodepool1-35657602-0  Created container
  Normal   Started                22s (x4 over 1m)  kubelet, aks-nodepool1-35657602-0  Started container
  Warning  BackOff                7s (x6 over 1m)   kubelet, aks-nodepool1-35657602-0  Back-off restarting failed container

Any ideas on how to proceed?

Many Thanks!

asked Jan 10 '19 by Banoona

2 Answers

I am using a multi-stage Docker build, and was building with the wrong target! I had cloned a previous Visual Studio Docker build task, which had the following argument:

--target=test

Because the "test" build stage has no ENTRYPOINT defined, the container was launching and then exiting immediately without logging anything. That's why kubectl logs returned blank.

I changed this to

--target=final

and all is working!
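For reference, a minimal sketch of the corrected build command, assuming the image is built directly with docker build rather than through the Visual Studio task (the tag is the one shown in the pod spec above):

docker build --target=final -t virasana/stwebapi:2.0.20190110.20 .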

My Dockerfile looks like this:

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build

WORKDIR /src

COPY . .

WORKDIR "/src"

RUN dotnet clean ./ST.Web/ST.Web.csproj
RUN dotnet build ./ST.Web/ST.Web.csproj -c Release -o /app

FROM build AS test
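# NB: this stage defines no ENTRYPOINT, so an image built with --target=test starts and exits immediately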
RUN dotnet tool install -g dotnet-reportgenerator-globaltool
RUN chmod 755 ./run-tests.sh && ./run-tests.sh

FROM build AS publish
RUN dotnet publish ./ST.Web/ST.Web.csproj -c Release -o /app

FROM base AS final
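# building with --target=final (or with no --target at all, since this is the last stage) yields the runnable image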
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ST.Web.dll"]
answered Nov 21 '22 by Banoona

That happens because the container that crashed has already been terminated and replaced. Try:

kubectl logs web-deployment-76789f7f64-s2b4r --previous

This will show the logs from the previous container instance.
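If --previous also comes back empty, the previous container most likely exited without writing anything (as turned out to be the case here). As a rough sketch, you can still inspect its recorded termination state, assuming the same pod name:

kubectl get pod web-deployment-76789f7f64-s2b4r -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'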

answered Nov 21 '22 by 4c74356b41