 

kubectl get nodes returns "the server doesn't have a resource type "nodes""

I installed Kubernetes, ran kubeadm init on the master, and ran kubeadm join from the worker. But when I run kubectl get nodes, it gives the following response:

the server doesn't have a resource type "nodes"

What might be the problem here? I could not see anything in /var/log/messages.

Any hints here?

asked Sep 17 '17 by Prashant



3 Answers

In my case, I wanted to see the description of my pods.

When I ran kubectl describe postgres-deployment-866647ff76-72kwf, it failed with error: the server doesn't have a resource type "postgres-deployment-866647ff76-72kwf".

I corrected it by adding pod before the pod name, as follows:

kubectl describe pod postgres-deployment-866647ff76-72kwf
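
More generally, kubectl describe (and kubectl get) needs an explicit resource type before the object name. If you're not sure which types the server supports, a quick way to check (assuming a reasonably recent kubectl):

# List every resource type the API server knows about
kubectl api-resources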
answered Nov 11 '22 by vagdevi k


It looks to me like the authentication credentials were not set correctly. Did you copy the kubeconfig file /etc/kubernetes/admin.conf to ~/.kube/config? If you used kubeadm, the API server should be listening on port 6443, not 8080. Could you also check that the KUBECONFIG variable is not set?
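
If the kubeconfig was never copied, the commands that kubeadm init itself prints at the end will fix it; a minimal sketch, assuming you run them as your regular (non-root) user on the master:

# Copy the admin kubeconfig to the location kubectl reads by default
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify nothing overrides it; this should print nothing unless you set it on purpose
echo $KUBECONFIG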

It would also help to increase the verbosity with the flag --v=99, as shown below. Also: are you accessing from the same machine where the Kubernetes master components are installed, or from outside?
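
For example, to see the full detail of the failing call:

# Maximum verbosity; shows the API server address and the HTTP traffic kubectl sends
kubectl get nodes --v=99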

answered Nov 11 '22 by Javier Salmeron


I got this message when I was trying to play around with Docker Desktop. I had previously been experimenting with Google Cloud and had run some kubectl commands for that. As a result, my ~/.kube/config file still held stale config for a now non-existent GCP cluster, and my default k8s context was set to it.

Try the following:

# Find what current contexts you have
kubectl config view

I get:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

So only one context now. If you have more than one context here, check that the one you expect is set as current-context. If not, change it with:

# Get rid of old contexts that you don't use 
kubectl config delete-context some-old-context

# Selecting the context that I have auth for
kubectl config use-context docker-desktop
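
If you just want to confirm which context is active without dumping the whole file, this one-liner does it (a convenience check, same information as kubectl config view):

# Print only the name of the active context
kubectl config current-context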
answered Nov 11 '22 by Sez