
Kubernetes pod still in "ContainerCreating" status

Tags:

kubernetes

I have a question about a Kubernetes environment. I have a K8s cluster, and after I assigned a PersistentVolume to one pod, the pod is stuck in "ContainerCreating" status. The PV is correctly bound to a PVC. The PVC is backed by two external GlusterFS servers with replica 2.

The PV looks like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    definitionVersion: "20170919"
  name: tarsier-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 50Gi
  glusterfs:
    endpoints: glusterfs-cluster
    path: tarsier-prep
  persistentVolumeReclaimPolicy: Recycle
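
Note that endpoints: glusterfs-cluster in the PV spec refers to a Kubernetes Endpoints object that must exist in the same namespace as the pod and list the GlusterFS servers. A minimal sketch of such an object, assuming two Gluster servers; the IP addresses below are placeholders, not values from the question:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.0.2.10   # placeholder: first GlusterFS server
  - ip: 192.0.2.11   # placeholder: second GlusterFS server
  ports:
  - port: 1          # a port value is required, but the GlusterFS plugin does not use it

The upstream Kubernetes GlusterFS example also creates a Service named glusterfs-cluster so that these endpoints persist.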

The PVC looks like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tarsier-pvc
  annotations:
    definitionVersion: "20170919"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  volumeName: tarsier-pv
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 50Gi
  phase: Bound
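
Because the claim pins the volume explicitly with volumeName: tarsier-pv, this is a statically bound claim rather than one satisfied by dynamic provisioning. A quick sanity check that both sides really report Bound:

kubectl get pv tarsier-pv
kubectl get pvc tarsier-pvc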

The pod's ReplicationController looks like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: xxx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: xxxxx
    spec:
      volumes:
      - name: tarsier-pv
        persistentVolumeClaim:
          claimName: tarsier-pvc
        ...
      containers:
      - name: xxx
        ...
        volumeMounts:
        - name: tarsier-pv
          mountPath: "/shared_data/storage"

kubectl describe pod xxx returns no errors.

kubectl logs xxx returns this:

Error from server (BadRequest): container "xxx" in pod "xxx" is waiting to start: ContainerCreating.

Do you have any idea what could be wrong, or where I can find more detailed logs? Thanks in advance.

Edit: The Gluster volume is mounted correctly on the master, and if I manually add a file there, it is correctly replicated to both Gluster servers.
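
For GlusterFS volumes, the mount itself is performed by the kubelet on the node where the pod is scheduled, so mount failures often surface in the kubelet log rather than in the container log. Assuming the nodes run the kubelet as a systemd unit, something like this can reveal the underlying error:

kubectl get pod xxx -o wide   # shows which node the pod was scheduled on
journalctl -u kubelet --no-pager | grep -i gluster   # run on that node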

Bendik asked Sep 19 '17


People also ask

Why is a pod in ContainerCreating state?

In vSphere 7.0 U3, after an HA failover or reboot of a TKGS Worker Node, pods can get stuck in the ContainerCreating state. This condition is specifically seen when the TKGS Guest Cluster has Worker Nodes configured to use /var/lib/containerd ephemeral volumes.

How do you know if a pod is running in Kubernetes?

To check the version, enter kubectl version. In this exercise you will use kubectl to fetch all of the Pods running in a cluster, and format the output to pull out the list of Containers for each.
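
A sketch of that kind of query, assuming kubectl is already configured against the cluster:

# list every pod in every namespace
kubectl get pods --all-namespaces
# pull out just the container names with a jsonpath template
kubectl get pods --all-namespaces -o jsonpath='{.items[*].spec.containers[*].name}'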

How do you get the status of a pod in Kubernetes?

If the output from a specific pod is desired, run the command kubectl describe pod pod_name --namespace kube-system. The Status field should be "Running" - any other status will indicate issues with the environment. In the Conditions section, the Ready field should indicate "True".

Why Kubernetes pod is not ready?

If a Pod is Running but not Ready, it means that the readiness probe is failing. When the readiness probe is failing, the Pod isn't attached to the Service, and no traffic is forwarded to that instance.
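
For reference, a minimal sketch of a readiness probe on a container spec; the image, path, and port are placeholders:

containers:
- name: app
  image: example/app:latest   # placeholder image
  readinessProbe:
    httpGet:
      path: /healthz          # placeholder health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10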


2 Answers

To see what is wrong, check the events:

kubectl get events --sort-by=.metadata.creationTimestamp
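
On reasonably recent kubectl versions, the output can also be narrowed to the affected pod with a field selector (using the pod name xxx from the question):

kubectl get events --field-selector involvedObject.name=xxx --sort-by=.metadata.creationTimestamp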

Sander answered Oct 21 '22


Thanks, all of you. The problem was a wrong configuration of the Endpoints (EP) object. Strangely, there was no information about it in any of the logs, nor in kubectl describe pod xxx.

Cheers
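
For anyone hitting the same issue: the Endpoints object the PV refers to can be inspected directly to verify that it lists the correct GlusterFS server addresses, e.g.:

kubectl get endpoints glusterfs-cluster -o yaml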

Bendik answered Oct 21 '22