I have a "stuck" namespace that I deleted, and it is now showing in this eternal "Terminating" status.
To force delete a Kubernetes namespace, remove the finalizer from the namespace's configuration. A finalizer is a marker on an object whose purpose is to prevent the object from being removed until the finalizer has been cleared.
To delete a namespace, Kubernetes must delete all the resources in the namespace and then check registered API services for the status. If the namespace contains resources that Kubernetes wasn't able to delete, or if an API service has a "False" status, then the namespace is stuck in the "Terminating" status.
If a resource guarded by a finalizer cannot be deleted for any reason, then the namespace is not deleted either. This leaves the namespace in a Terminating state, waiting for a removal that never happens.
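For example, you can inspect the namespace's status conditions to see what is blocking the deletion (a quick check; my-namespace is a placeholder for your stuck namespace):
kubectl get namespace my-namespace -o jsonpath='{.status.conditions}'
# Reasons such as SomeResourcesRemain, SomeFinalizersRemain or DiscoveryFailed indicate what is still holding the namespace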
Assuming you've already tried to force-delete resources like Pods stuck in Terminating status, and you're at your wits' end trying to recover the namespace...
You can force-delete the namespace (perhaps leaving dangling resources):
(
NAMESPACE=your-rogue-namespace
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
)
This is a refinement of the answer here, which is based on the comment here.
I'm using the jq utility to programmatically delete elements in the finalizers section. You could do that manually instead.
kubectl proxy creates the listener at 127.0.0.1:8001 by default. If you know the hostname/IP of your cluster master, you may be able to use that instead.
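For example, here is a rough sketch of calling the API server directly instead of the local proxy; the service account and the way the token is obtained are assumptions, so adapt them to your cluster's auth setup:
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(kubectl create token default)   # kubectl 1.24+; use any token that is allowed to finalize namespaces
curl -k -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -X PUT --data-binary @temp.json "$APISERVER/api/v1/namespaces/$NAMESPACE/finalize"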
The funny thing is that this approach works even though making the same change with kubectl edit has no effect (presumably because a namespace's spec.finalizers field is only honored through the /finalize subresource).
This is caused by resources still existing in the namespace that the namespace controller is unable to remove.
This command (with kubectl 1.11+) will show you what resources remain in the namespace:
kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>
Once you find and remove those resources, the namespace will be cleaned up.
As mentioned earlier in this thread, there is another way to terminate a namespace using an API not exposed by kubectl, provided you have a modern version of kubectl where kubectl replace --raw is available (I'm not sure from which version). This way you will not have to spawn a kubectl proxy process, and you avoid the dependency on curl (which is not available in some environments like busybox). In the hope that this will help someone else, I'll leave this here:
kubectl get namespace "stucked-namespace" -o json \
| tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" \
| kubectl replace --raw /api/v1/namespaces/stucked-namespace/finalize -f -
You need to remove the kubernetes finalizer.
Step 1:
kubectl get namespace <YOUR_NAMESPACE> -o json > <YOUR_NAMESPACE>.json
Step 2:
kubectl replace --raw "/api/v1/namespaces/<YOUR_NAMESPACE>/finalize" -f ./<YOUR_NAMESPACE>.json
Step 3:
kubectl get namespace
You can see that the annoying namespace is gone.
I loved this answer, extracted from here. It is just two commands.
In one terminal:
kubectl proxy
In another terminal:
kubectl get ns delete-me -o json | \
jq '.spec.finalizers=[]' | \
curl -X PUT http://localhost:8001/api/v1/namespaces/delete-me/finalize -H "Content-Type: application/json" --data @-
Solution:
Use the command below without any changes; it works like a charm.
NS=`kubectl get ns |grep Terminating | awk 'NR==1 {print $1}'` && kubectl get namespace "$NS" -o json | tr -d "\n" | sed "s/\"finalizers\": \[[^]]\+\]/\"finalizers\": []/" | kubectl replace --raw /api/v1/namespaces/$NS/finalize -f -
Enjoy
Single line command
kubectl patch ns <Namespace_to_delete> -p '{"metadata":{"finalizers":null}}'
Simple trick
You can edit the namespace directly on the console with kubectl edit namespace <namespace name>:
remove "kubernetes" from inside the finalizers section, then save/apply the changes.
This way you can do it in one step.
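For illustration, the in-editor change described above would look roughly like this (the namespace name is a placeholder):
kubectl edit namespace <namespace name>
# In the editor, change
#   spec:
#     finalizers:
#     - kubernetes
# to
#   spec:
#     finalizers: []
# then save and quit the editor.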
Trick : 1
kubectl get namespace annoying-namespace-to-delete -o json > tmp.json
then edit tmp.json and remove "kubernetes" from the finalizers
Open another terminal and run kubectl proxy
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://localhost:8001/api/v1/namespaces/<NAMESPACE NAME TO DELETE>/finalize
and it should delete your namespace.
Trick : 2
Check the kubectl cluster-info
1. kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
2. kubectl cluster-info dump
Now start the proxy using the command:
3. kubectl proxy
kubectl proxy &
Starting to serve on 127.0.0.1:8001
Find the namespace:
4. kubectl get ns
{Your namespace name} Terminating 1d
Put it in a file:
5. kubectl get namespace {Your namespace name} -o json > tmp.json
Edit the file tmp.json and remove the finalizers:
}, "spec": { "finalizers": [ "kubernetes" ] },
after editing it should look like this
}, "spec": { "finalizers": [ ] },
We're almost there; now simply run the command:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/{Your namespace name}/finalize
and it's gone
For us it was the metrics-server crashing.
To check whether this is relevant to your case, run: kubectl api-resources
If you get
error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Then it's probably the same issue.
Credit goes to @javierprovecho here.
I've written a one-liner Python 3 script based on the common answers here. This script removes the finalizers in the problematic namespace.
python3 -c "namespace='<my-namespace>';import atexit,subprocess,json,requests,sys;proxy_process = subprocess.Popen(['kubectl', 'proxy']);atexit.register(proxy_process.kill);p = subprocess.Popen(['kubectl', 'get', 'namespace', namespace, '-o', 'json'], stdout=subprocess.PIPE);p.wait();data = json.load(p.stdout);data['spec']['finalizers'] = [];requests.put('http://127.0.0.1:8001/api/v1/namespaces/{}/finalize'.format(namespace), json=data).raise_for_status()"
💡 Replace namespace='<my-namespace>' with your namespace, e.g. namespace='trust'.
Full script: https://gist.github.com/jossef/a563f8651ec52ad03a243dec539b333d
Run kubectl get apiservice
In the output you will find an apiservice whose Available flag is False.
Just delete that apiservice using kubectl delete apiservice <apiservice name>
After doing this, the namespace with terminating status will disappear.
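A small helper for the lookup step (a sketch; review the output before deleting anything):
# Print the names of APIServices whose AVAILABLE column reports False
kubectl get apiservice | awk '$3 ~ /False/ {print $1}'
# then: kubectl delete apiservice <name printed above>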
Please try the command below:
kubectl patch ns <your_namespace> -p '{"metadata":{"finalizers":null}}'
Forcefully deleting the namespace or removing its finalizers is definitely not the way to go, since it could leave resources registered to a non-existent namespace.
This is often fine, but then one day you won't be able to create a resource because it is still dangling somewhere.
The upcoming Kubernetes version 1.16 should give more insight into namespace finalizers; for now I would rely on identification strategies. A cool script which tries to automate these is: https://github.com/thyarles/knsk
However, it works across all namespaces and could be dangerous. The solution it is based on is: https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-524772920
tl;dr
kubectl get apiservice|grep False
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n $your-ns-to-delete
(credit: https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-524772920)
I wrote a simple script to delete your stuck namespace, based on @Shreyangi Saxena's solution.
cat > delete_stuck_ns.sh << "EOF"
#!/usr/bin/env bash
function delete_namespace () {
echo "Deleting namespace $1"
kubectl get namespace $1 -o json > tmp.json
sed -i 's/"kubernetes"//g' tmp.json
kubectl replace --raw "/api/v1/namespaces/$1/finalize" -f ./tmp.json
rm ./tmp.json
}
TERMINATING_NS=$(kubectl get ns | awk '$2=="Terminating" {print $1}')
for ns in $TERMINATING_NS
do
delete_namespace $ns
done
EOF
chmod +x delete_stuck_ns.sh
This script detects all namespaces in the Terminating state and deletes them.
PS:
This may not work on macOS, because the native sed on macOS is not compatible with GNU sed.
You may need to install GNU sed on macOS; refer to this answer.
Please confirm that you can access your Kubernetes cluster through the kubectl command.
This has been tested on Kubernetes version v1.15.3.
I found an easier solution:
kubectl patch RESOURCE NAME -p '{"metadata":{"finalizers":[]}}' --type=merge
In my case the problem was caused by a custom metrics API service.
To find out what is causing the pain, just run this command:
kubectl api-resources | grep -i false
That should show you which API resources cause the problem; once identified, just delete them:
kubectl delete apiservice v1beta1.custom.metrics.k8s.io
Once it is deleted, the namespace should disappear.
Run the following command to view the namespaces that are stuck in the Terminating state:
kubectl get namespaces
Select a terminating namespace and view the contents of the namespace to find out the finalizer. Run the following command:
kubectl get namespace <terminating-namespace> -o yaml
Your YAML contents might resemble the following output:
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: 2019-12-25T17:38:32Z
  deletionTimestamp: 2019-12-25T17:51:34Z
  name: <terminating-namespace>
  resourceVersion: "4779875"
  selfLink: /api/v1/namespaces/<terminating-namespace>
  uid: ******-****-****-****-fa1dfgerz5
spec:
  finalizers:
  - kubernetes
status:
  phase: Terminating
Run the following command to create a temporary JSON file:
kubectl get namespace <terminating-namespace> -o json > tmp.json
Edit your tmp.json file. Remove the kubernetes value from the finalizers field and save the file. The file should then look like this:
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "creationTimestamp": "2018-11-19T18:48:30Z",
    "deletionTimestamp": "2018-11-19T18:59:36Z",
    "name": "<terminating-namespace>",
    "resourceVersion": "1385077",
    "selfLink": "/api/v1/namespaces/<terminating-namespace>",
    "uid": "b50c9ea4-ec2b-11e8-a0be-fa163eeb47a5"
  },
  "spec": {
  },
  "status": {
    "phase": "Terminating"
  }
}
To set a temporary proxy IP and port, run the following command. Be sure to keep your terminal window open until you delete the stuck namespace:
kubectl proxy
Your proxy IP and port might resemble the following output:
Starting to serve on 127.0.0.1:8001
From a new terminal window, make an API call with your temporary proxy IP and port:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/your_terminating_namespace/finalize
Your output would be like:
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "<terminating-namespace>",
    "selfLink": "/api/v1/namespaces/<terminating-namespace>/finalize",
    "uid": "b50c9ea4-ec2b-11e8-a0be-fa163eeb47a5",
    "resourceVersion": "1602981",
    "creationTimestamp": "2018-11-19T18:48:30Z",
    "deletionTimestamp": "2018-11-19T18:59:36Z"
  },
  "spec": {
  },
  "status": {
    "phase": "Terminating"
  }
}
The finalizer parameter is removed. Now verify that the terminating namespace is gone by running the following command:
kubectl get namespaces
Replace ambassador with your namespace
Check if the namespace is stuck
kubectl get ns ambassador
NAME STATUS AGE
ambassador Terminating 110d
This has been stuck for a long time.
Open an admin terminal/command prompt or PowerShell and run
kubectl proxy
This will start a local web server: Starting to serve on 127.0.0.1:8001
Open another terminal and run
kubectl get ns ambassador -o json >tmp.json
edit the tmp.json using vi or nano
from this
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"ambassador\"}}\n"
},
"creationTimestamp": "2021-01-07T18:23:28Z",
"deletionTimestamp": "2021-04-28T06:43:41Z",
"name": "ambassador",
"resourceVersion": "14572382",
"selfLink": "/api/v1/namespaces/ambassador",
"uid": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"spec": {
"finalizers": [
"kubernetes"
]
},
"status": {
"conditions": [
{
"lastTransitionTime": "2021-04-28T06:43:46Z",
"message": "Discovery failed for some groups, 3 failing: unable to retrieve the complete list of server APIs: compose.docker.com/v1alpha3: an error on the server (\"Internal Server Error: \\\"/apis/compose.docker.com/v1alpha3?timeout=32s\\\": Post https://0.0.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: write tcp 0.0.0.0:53284-\u0026gt;0.0.0.0:443: write: broken pipe\") has prevented the request from succeeding, compose.docker.com/v1beta1: an error on the server (\"Internal Server Error: \\\"/apis/compose.docker.com/v1beta1?timeout=32s\\\": Post https://10.96.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: write tcp 0.0.0.0:5284-\u0026gt;10.96.0.1:443: write: broken pipe\") has prevented the request from succeeding, compose.docker.com/v1beta2: an error on the server (\"Internal Server Error: \\\"/apis/compose.docker.com/v1beta2?timeout=32s\\\": Post https://0.0.0.0:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews: write tcp 1.1.1.1:2284-\u0026gt;0.0.0.0:443: write: broken pipe\") has prevented the request from succeeding",
"reason": "DiscoveryFailed",
"status": "True",
"type": "NamespaceDeletionDiscoveryFailure"
},
{
"lastTransitionTime": "2021-04-28T06:43:49Z",
"message": "All legacy kube types successfully parsed",
"reason": "ParsedGroupVersions",
"status": "False",
"type": "NamespaceDeletionGroupVersionParsingFailure"
},
{
"lastTransitionTime": "2021-04-28T06:43:49Z",
"message": "All content successfully deleted",
"reason": "ContentDeleted",
"status": "False",
"type": "NamespaceDeletionContentFailure"
}
],
"phase": "Terminating"
}
}
to
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"ambassador\"}}\n"
},
"creationTimestamp": "2021-01-07T18:23:28Z",
"deletionTimestamp": "2021-04-28T06:43:41Z",
"name": "ambassador",
"resourceVersion": "14572382",
"selfLink": "/api/v1/namespaces/ambassador",
"uid": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"spec": {
"finalizers": []
}
}
by deleting the status section and "kubernetes" inside finalizers.
Now use the command and replace ambassador with your namespace
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/ambassador/finalize
You will see another JSON response like the one before. Then run the command:
kubectl get ns ambassador
Error from server (NotFound): namespaces "ambassador" not found
If it still says Terminating, or you get any other error, make sure your JSON is formatted properly and try the steps again.
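A quick way to check that the edited file is still valid JSON before retrying (assuming jq is available):
jq . tmp.json   # prints the parsed document, or an error pointing at the broken spot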
Here is (yet another) solution. This uses jq to remove the finalizers block from the JSON, and does not require kubectl proxy:
namespaceToDelete=blah
kubectl get namespace "$namespaceToDelete" -o json \
| jq 'del(.spec.finalizers)' \
| kubectl replace --raw /api/v1/namespaces/$namespaceToDelete/finalize -f -
There are a couple of things you can run. But what this usually means is that the automatic deletion of the namespace was not able to finish, and there is a process running that has to be manually deleted. To find it you can do these things:
Get everything attached to the namespace. If this does not return anything, move on to the next suggestion:
$ kubectl get all -n your-namespace
Some namespaces have apiservices attached to them, and those can be troublesome to delete. For that matter, this can be whatever resource you want. Delete that resource if the following finds anything:
$ kubectl get apiservice|grep False
But the main takeaway is that there might be some things that were not completely removed. So look at what you initially had in that namespace, and then check which of the things spun up by your YAMLs are still running. Or you can start googling why service X won't be removed properly, and you will find things.
If the namespace is stuck in Terminating while the resources in that namespace have already been deleted, you can patch the finalizers of the namespace before deleting it:
kubectl patch ns ns_to_be_deleted -p '{"metadata":{"finalizers":null}}';
then
kubectl delete ns ns_to_be_deleted;
Edit:
Please check @Antonio Gomez Alvarado's answer first. The root cause could be the metrics server mentioned in that answer.