I have a deployment in which I have defined a Postgres StatefulSet, but without a PVC, so if the pod dies all data is gone. When I list all pods I see the following:
pod1 - Running - 10 min
pod2 - Running - 10 min
postgresPod - Running - 10 min
After some time I list the pods again and see:
pod1 - Running - 10 min
pod2 - Running - 10 min
postgresPod - Running - 5 min
As you can see, postgresPod has only been running for 5 minutes. I described the StatefulSet and see the following events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 5m (x2 over 10m) statefulset-controller create Pod postgresPod in StatefulSet x-postgres successful
Warning RecreatingFailedPod 5m statefulset-controller StatefulSet xx/x-postgres is recreating failed Pod postgresPod
Normal SuccessfulDelete 5m statefulset-controller delete Pod postgresPod in StatefulSet x-postgres successful
So my question is: how can I find out why the StatefulSet recreates the pods? Is there any additional log? Could it be related to the resources of the machines, or was the pod recreated on another node that had more resources available at that moment?
Any ideas?
A StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of those Pods. Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec.
Deployments are used for stateless applications, StatefulSets for stateful applications. The pods in a deployment are interchangeable, whereas the pods in a StatefulSet are not. Deployments require a service to enable interaction with pods, while a headless service handles the pods' network ID in StatefulSets.
StatefulSets enable us to deploy stateful and clustered applications. They save data to persistent storage, such as Compute Engine persistent disks. They are suitable for deploying Kafka, MySQL, Redis, ZooKeeper, and other applications that need unique, persistent identities and stable hostnames.
For a StatefulSet to work, it needs a Headless Service. A Headless Service has no cluster IP address; instead, it creates the endpoints needed to expose the pods under individual DNS names. The StatefulSet definition includes a reference to the Headless Service, but you have to create the Service separately.
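As a rough sketch of that wiring (names like x-postgres and the postgres image tag are placeholders, not taken from the question), the headless Service and the serviceName reference in the StatefulSet might look like:

```yaml
# Headless Service: clusterIP: None gives each pod a stable DNS name
apiVersion: v1
kind: Service
metadata:
  name: x-postgres-headless
spec:
  clusterIP: None            # this is what makes the Service headless
  selector:
    app: x-postgres
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: x-postgres
spec:
  serviceName: x-postgres-headless   # reference to the headless Service above
  replicas: 1
  selector:
    matchLabels:
      app: x-postgres
  template:
    metadata:
      labels:
        app: x-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
```

The pod then gets a stable DNS name of the form x-postgres-0.x-postgres-headless in its namespace.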
Another nifty little trick I came up with is to describe the pod as soon as it stops logging, by chaining:

kubectl logs -f mypod && kubectl describe pod mypod

When the pod fails and stops logging, kubectl logs -f mypod terminates, and the shell then immediately executes kubectl describe pod mypod, (hopefully) letting you catch the state of the failing pod before it is recreated.
In my case it was showing
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
which is in line with what Timothy is saying.
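To complement this, a few standard kubectl commands help when the pod has already been recreated (mypod is a placeholder name; add -n <namespace> as needed):

```shell
# Logs of the previous (terminated) container instance, if the pod restarted
kubectl logs mypod --previous

# Recent events in the namespace, sorted by time: OOMKilled, evictions,
# scheduling failures and StatefulSet recreations all show up here
kubectl get events --sort-by=.metadata.creationTimestamp

# Node-level resource usage that can explain evictions
# (requires metrics-server to be installed in the cluster)
kubectl top nodes
```

An exit code of 137 together with Reason: OOMKilled means the container exceeded its memory limit and was killed by the kernel, which matches the recreation events in the question.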