I am using the Kubernetes Java client to create Deployments on a Kubernetes cluster. This is the code:
Deployment deployment = new DeploymentBuilder()
    .withNewMetadata()
        .withName("first-deployment")
    .endMetadata()
    .withNewSpec()
        .withReplicas(3)
        .withNewTemplate()
            .withNewMetadata()
                .addToLabels(namespaceID, "hello-world-example")
            .endMetadata()
            .withNewSpec()
                .addNewContainer()
                    .withName("nginx-one")
                    .withImage("nginx")
                    .addNewPort()
                        .withContainerPort(80)
                    .endPort()
                    .withResources(resourceRequirements)
                .endContainer()
            .endSpec()
        .endTemplate()
    .endSpec()
    .build();
deployment = client.extensions().deployments().inNamespace(namespace).create(deployment);
I add a 3-minute wait time and then check the status of the pods:
PodList podList = client.pods().withLabel(namespaceID, "hello-world-example").list();
System.out.println("Number of pods " + podList.getItems().size());
for (Pod pod : podList.getItems()) {
System.out.println("Name " + pod.getMetadata().getName()
+ " Status " + pod.getStatus().getPhase()
+ " Reason " + pod.getStatus().getReason()
+ " Containers " + pod.getSpec().getContainers().get(0).getResources().getLimits());
}
This returns the following status:
Name first-deployment-2418943216-9915m Status Pending Reason null Containers null
Name first-deployment-2418943216-fnk21 Status Pending Reason null Containers null
Name first-deployment-2418943216-zb5hr Status Pending Reason null Containers null
However, from the command line, if I run kubectl get pods --all-namespaces, it returns the pod state as Running. Am I using the right API? What did I miss?
Debugging Services: First, verify that there are endpoints for the service. For every Service object, the apiserver makes an endpoints resource available. Make sure that the endpoints match up with the number of pods that you expect to be members of your service.
Pods are evicted according to the resource causing the node pressure, such as memory or disk space. The first pods to be evicted are those in a failed state, since they are not running but may still be using resources. After this, Kubernetes evaluates the remaining running pods.
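The ordering described above can be illustrated with a plain comparator over a hypothetical PodInfo type (this is an illustration only, not the client's Pod class): failed pods sort ahead of running ones, so they are considered for eviction first.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class EvictionOrder {
    // Minimal stand-in for a pod: a name plus whether it is in a failed state.
    record PodInfo(String name, boolean failed) {}

    public static void main(String[] args) {
        List<PodInfo> pods = new ArrayList<>(List.of(
            new PodInfo("web-1", false),
            new PodInfo("batch-1", true),
            new PodInfo("web-2", false)));

        // Failed pods are evicted first; running pods are considered afterwards.
        // Sorting on !failed puts failed==true (key false) at the front.
        pods.sort(Comparator.comparing((PodInfo p) -> !p.failed()));

        pods.forEach(p -> System.out.println(p.name()));
        // batch-1 is printed first, ahead of the running pods
    }
}
```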
Maybe a better way to check this is to have a loop with a sleep inside it that keeps checking the status until all pods are up and running. I had done something similar to check whether all the required pods were up by checking their status. But you might also want to consider adding liveness and readiness probes to the pods before you make such a check. There are additional details provided here:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
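The polling loop suggested above can be sketched as a generic wait helper. The WaitFor class and the simulated condition below are hypothetical; in real code the Supplier would list the pods via the client and check that every pod's getStatus().getPhase() equals "Running".

```java
import java.util.function.Supplier;

public class WaitFor {
    // Poll a condition until it returns true or the timeout expires.
    // Returns true if the condition became true within the timeout.
    public static boolean waitUntil(Supplier<Boolean> condition,
                                    long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            if (condition.get()) {
                return true;
            }
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }

    public static void main(String[] args) {
        // Simulated check that succeeds on the third poll. In real code this
        // Supplier would call client.pods().withLabel(...).list() and verify
        // that every pod's phase is "Running".
        final int[] polls = {0};
        boolean allRunning = waitUntil(() -> ++polls[0] >= 3, 5_000, 10);
        System.out.println("All pods running: " + allRunning);
        // prints "All pods running: true"
    }
}
```

This keeps the cluster-specific check separate from the retry logic, so the same helper can wait on pods, endpoints, or anything else.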