I have a question about a Kubernetes environment. I have a K8s cloud, and after I assign a PersistentVolume to a pod, the pod stays in "ContainerCreating" status. The PV has a correctly bound PVC. The PVC is backed by two external GlusterFS servers with replica 2.
The PV looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    definitionVersion: "20170919"
  name: tarsier-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 50Gi
  glusterfs:
    endpoints: glusterfs-cluster
    path: tarsier-prep
  persistentVolumeReclaimPolicy: Recycle
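For reference, the glusterfs-cluster value above refers to a separate Kubernetes Endpoints object that lists the Gluster servers. A minimal sketch of such an object (the IP addresses and port below are placeholders, not the actual values from this cluster):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 10.0.0.1   # placeholder: first GlusterFS server
  ports:
  - port: 1        # any valid port number; the field is required
- addresses:
  - ip: 10.0.0.2   # placeholder: second GlusterFS server
  ports:
  - port: 1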
The PVC looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tarsier-pvc
  annotations:
    definitionVersion: "20170919"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  volumeName: tarsier-pv
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 50Gi
  phase: Bound
The Pod's ReplicationController looks like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: xxx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: xxxxx
    spec:
      volumes:
      - name: tarsier-pv
        persistentVolumeClaim:
          claimName: tarsier-pvc
      ...
      containers:
      - name: xxx
        ...
        volumeMounts:
        - name: tarsier-pv
          mountPath: "/shared_data/storage"
kubectl describe pod xxx returns no errors.
kubectl logs xxx returns this:
Error from server (BadRequest): container "xxx" in pod "xxx" is waiting to start: ContainerCreating.
Do you have any idea what could be wrong, or where I can find more detailed logs? Thanks in advance.
Edit: The Gluster volume mounts correctly on the master, and if I manually add a file there, it is correctly replicated to both Gluster servers.
1. In vSphere 7.0 U3, after an HA failover or a reboot of a TKGS worker node, pods can get stuck in the ContainerCreating state.
2. This condition is specifically seen when the TKGS guest cluster has worker nodes configured to use /var/lib/containerd ephemeral volumes.
To check the version, enter kubectl version. In this exercise you will use kubectl to fetch all of the Pods running in a cluster and format the output to pull out the list of Containers for each.
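One way to do that is with a jsonpath output template; a minimal sketch (adjust the template to whatever fields you need):

kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'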
If you want the output for a specific pod, run kubectl describe pod pod_name --namespace kube-system. The Status field should be "Running"; any other status indicates a problem with the environment. In the Conditions section, the Ready field should be "True".
If a Pod is Running but not Ready, it means the readiness probe is failing. When the readiness probe fails, the Pod is not attached to the Service, and no traffic is forwarded to that instance.
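For context, the readiness probe is declared on the container spec; a minimal sketch, assuming a hypothetical HTTP health endpoint and port (neither appears in the question):

containers:
- name: xxx
  readinessProbe:
    httpGet:
      path: /healthz   # hypothetical health endpoint
      port: 8080       # hypothetical container port
    initialDelaySeconds: 5
    periodSeconds: 10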
To see what is wrong, check the events:
kubectl get events --sort-by=.metadata.creationTimestamp
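On a busy cluster it can help to narrow the events down to the affected pod, for example (using the placeholder pod name xxx from the question):

kubectl get events --field-selector involvedObject.name=xxx --sort-by=.metadata.creationTimestamp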
Thanks, all of you. There was a wrong configuration on the Endpoints (EP). Oddly, there was no hint of it in any of the logs or in kubectl describe pod xxx.
Cheers