The answer is that Kubernetes pods are managed by a controller such as the replication controller. Even though you delete a pod manually, the controller still holds the desired replica count; as soon as it senses that a pod is down, it recreates another pod to bring the count back up.
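One way to see which controller owns a pod (POD_NAME here is a placeholder) is to look at the Controlled By field in its description:
kubectl describe pod POD_NAME | grep -i "controlled by"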
First, confirm the name of the node you want to remove using kubectl get nodes, and make sure that all of the pods on the node can be safely terminated without any special procedures. Next, use the kubectl drain command to evict all user pods from the node.
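For example (NODE_NAME is a placeholder, and the flags you need depend on your workloads):
kubectl get nodes
kubectl drain NODE_NAME --ignore-daemonsets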
If you manually deploy a single bare pod and then delete it, your service will go down and won't come back up. If the service is backed by a replica set with only one pod, the service will still be unavailable after you delete the pod, until the replacement pod becomes ready.
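As a quick illustration (the pod name and image are hypothetical; on recent kubectl versions, kubectl run creates a bare pod), a bare pod is not recreated once deleted:
kubectl run nginx-test --image=nginx
kubectl delete pod nginx-test
kubectl get pods   # nginx-test is gone and stays gone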
A restarting container can indicate problems with memory (see the Out of Memory section), CPU usage, or just an application exiting prematurely. If a container is being restarted because of CPU usage, try increasing the requested and limit amounts for CPU in the pod spec.
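As a sketch (the deployment name and values below are placeholders, not from the original answer), the CPU request and limit can be raised with kubectl set resources:
kubectl set resources deployment DEPLOYMENT_NAME --requests=cpu=250m --limits=cpu=500m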
You need to delete the deployment, which should in turn delete the pods and the replica sets (see https://github.com/kubernetes/kubernetes/issues/24137).
To list all deployments:
kubectl get deployments --all-namespaces
Then to delete the deployment:
kubectl delete -n NAMESPACE deployment DEPLOYMENT
Where NAMESPACE is the namespace it's in, and DEPLOYMENT is the name of the deployment. If NAMESPACE is default, leave off the -n option altogether.
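For example, if the deployment were named nginx-deployment (a hypothetical name) in the default namespace:
kubectl delete deployment nginx-deployment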
In some cases the pod could also be recreated by a job or a daemonset. Check the following and run the appropriate delete command for whatever turns up (sketched after the list).
kubectl get jobs
kubectl get daemonsets.apps --all-namespaces
kubectl get daemonsets.extensions --all-namespaces
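If one of these exists, deleting it by kind and name (JOB_NAME, DAEMONSET_NAME and NAMESPACE are placeholders) stops the pods from coming back:
kubectl delete job JOB_NAME -n NAMESPACE
kubectl delete daemonset DAEMONSET_NAME -n NAMESPACE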
Instead of trying to figure out whether it was a deployment, daemonset, statefulset, or something else (in my case it was a replication controller that kept spawning new pods), I listed all the resources to determine what kept spinning up the image:
kubectl get all
Of course you could also get all resources from all namespaces:
kubectl get all --all-namespaces
or define the namespace you would like to inspect:
kubectl get all -n NAMESPACE_NAME
Once I saw that the replication controller was responsible for my trouble, I deleted it:
kubectl delete replicationcontroller/CONTROLLER_NAME
If your pod has a name like name-xxx-yyy, it could be controlled by a ReplicaSet (replicasets.apps) named name-xxx; you should delete that ReplicaSet first, before deleting the pod:
kubectl delete replicasets.apps name-xxx
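Note that the ReplicaSet may itself be owned by a Deployment, in which case the Deployment will recreate it; one way to check (using the same name-xxx placeholder) is:
kubectl describe rs name-xxx | grep -i "controlled by"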
Look out for StatefulSets as well:
kubectl get sts --all-namespaces
To delete all the StatefulSets in a namespace:
kubectl --namespace <yournamespace> delete sts --all
To delete them one by one:
kubectl --namespace ag1 delete sts mssql1
kubectl --namespace ag1 delete sts mssql2
kubectl --namespace ag1 delete sts mssql3
Many answers here tell you to delete a specific k8s object, but you can delete multiple kinds of objects at once instead of one by one:
kubectl delete deployments,jobs,services,pods --all -n <namespace>
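A variation on the command above (not part of the original answer): daemonsets and statefulsets can be added to the same list if they might be involved in your cluster:
kubectl delete deployments,jobs,services,pods,daemonsets,statefulsets --all -n <namespace>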
In my case, I'm running an OpenShift cluster with OLM - Operator Lifecycle Manager. OLM is what controls the deployment, so when I deleted the deployment, that was not sufficient to stop the pods from restarting.
Only when I deleted OLM and its subscription did the deployment, services and pods finally go away.
First list all k8s objects in your namespace:
$ kubectl get all -n openshift-submariner
NAME                                       READY   STATUS    RESTARTS   AGE
pod/submariner-operator-847f545595-jwv27   1/1     Running   0          8d

NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/submariner-operator-metrics   ClusterIP   101.34.190.249   <none>        8383/TCP   8d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/submariner-operator   1/1     1            1           8d

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/submariner-operator-847f545595   1         1         1       8d
OLM is not listed with get all, so I search for it specifically:
$ kubectl get olm -n openshift-submariner
NAME                                                       AGE
operatorgroup.operators.coreos.com/openshift-submariner    8d

NAME                                                              DISPLAY      VERSION
clusterserviceversion.operators.coreos.com/submariner-operator   Submariner   0.0.1
Now delete all objects, including OLMs, subscriptions, deployments, replica-sets, etc:
$ kubectl delete olm,svc,rs,rc,subs,deploy,jobs,pods --all -n openshift-submariner
operatorgroup.operators.coreos.com "openshift-submariner" deleted
clusterserviceversion.operators.coreos.com "submariner-operator" deleted
deployment.extensions "submariner-operator" deleted
subscription.operators.coreos.com "submariner" deleted
service "submariner-operator-metrics" deleted
replicaset.extensions "submariner-operator-847f545595" deleted
pod "submariner-operator-847f545595-jwv27" deleted
List objects again - all gone:
$ kubectl get all -n openshift-submariner
No resources found.
$ kubectl get olm -n openshift-submariner
No resources found.
The following will list all of the pods, deployments, services and jobs in the namespace:
kubectl get pods,services,deployments,jobs
Pods can be created by either deployments or jobs:
kubectl delete job [job_name]
kubectl delete deployment [deployment_name]
If you delete the deployment or the job, the pods will stop being restarted.