kubelet does not have ClusterDNS IP configured in Microk8s

I'm using MicroK8s on Ubuntu.

I'm trying to run a simple hello-world program, but I get the following error when the pod is created:

kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy

Here is the deployment.yaml file I'm trying to apply:

apiVersion: v1
kind: Service
metadata:
  name: grpc-hello
spec:
  ports:
  - port: 80
    targetPort: 9000
    protocol: TCP
    name: http
  selector:
    app: grpc-hello
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-hello
  template:
    metadata:
      labels:
        app: grpc-hello
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=hellogrpc.endpoints.octa-test-123.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
          - containerPort: 9000
      - name: python-grpc-hello
        image: gcr.io/octa-test-123/python-grpc-hello:1.0
        ports:
          - containerPort: 50051
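
For reference, the manifest is applied with MicroK8s's bundled kubectl (assuming a default snap install; a standalone kubectl pointed at the MicroK8s kubeconfig works the same way):

microk8s kubectl apply -f deployment.yaml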

Here is what I get when I describe the pod:

Events:
  Type     Reason             Age                From                   Message
  ----     ------             ----               ----                   -------
  Normal   Scheduled          31s                default-scheduler      Successfully assigned default/grpc-hello-66869cf9fb-kpr69 to azeem-ubuntu
  Normal   Started            30s                kubelet, azeem-ubuntu  Started container python-grpc-hello
  Normal   Pulled             30s                kubelet, azeem-ubuntu  Container image "gcr.io/octa-test-123/python-grpc-hello:1.0" already present on machine
  Normal   Created            30s                kubelet, azeem-ubuntu  Created container python-grpc-hello
  Normal   Pulled             12s (x3 over 31s)  kubelet, azeem-ubuntu  Container image "gcr.io/endpoints-release/endpoints-runtime:1" already present on machine
  Normal   Created            12s (x3 over 31s)  kubelet, azeem-ubuntu  Created container esp
  Normal   Started            12s (x3 over 30s)  kubelet, azeem-ubuntu  Started container esp
  Warning  MissingClusterDNS  8s (x10 over 31s)  kubelet, azeem-ubuntu  pod: "grpc-hello-66869cf9fb-kpr69_default(19c5a870-fcf5-415c-bcb6-dedfc11f936c)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
  Warning  BackOff            8s (x2 over 23s)   kubelet, azeem-ubuntu  Back-off restarting failed container
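The events above come from describing the pod by name (the generated pod-name suffix will differ between installs):

microk8s kubectl describe pod grpc-hello-66869cf9fb-kpr69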

I searched a lot about this and found some answers, but none of them worked for me. I also created kube-dns myself, but I don't know why it still isn't working. The kube-dns pod is running in the kube-system namespace:

NAME                       READY   STATUS    RESTARTS   AGE
kube-dns-6dbd676f7-dfbjq   3/3     Running   0          22m
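
That listing can be reproduced by filtering on the kube-dns label used in the manifest below:

microk8s kubectl get pods -n kube-system -l k8s-app=kube-dns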

And here is what I applied to create kube-dns:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.152.183.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
# Why set upstream ns: https://github.com/kubernetes/minikube/issues/2027
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. So that the Addon Manager does not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google-containers/k8s-dns-kube-dns:1.15.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google-containers/k8s-dns-dnsmasq-nanny:1.15.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google-containers/k8s-dns-sidecar:1.15.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

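A quick way to check whether the kubelet itself has a cluster DNS configured is to inspect its arguments (assuming MicroK8s's default snap layout; creating the Service above does not affect this, since the kubelet reads ClusterDNS from its own flags):

grep cluster-dns /var/snap/microk8s/current/args/kubelet

If that prints nothing, the kubelet has no --cluster-dns flag, which matches the MissingClusterDNS warning.
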
Please let me know what I'm missing.

asked Jan 01 '20 by Azeem Haider



1 Answer

You have not specified how you deployed kube-dns, but with MicroK8s it's recommended to use CoreDNS. You should not deploy kube-dns or CoreDNS on your own; instead, enable DNS with the command microk8s enable dns, which deploys CoreDNS and configures the kubelet's cluster DNS settings.
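
A minimal sequence, assuming a default MicroK8s snap install (the grpc-hello deployment name is taken from the question):

# Enable the DNS addon; this deploys CoreDNS and points the kubelet at it
microk8s enable dns

# Verify the CoreDNS pods are up
microk8s kubectl get pods -n kube-system

# Restart the workload so new pods pick up the ClusterFirst DNS settings
microk8s kubectl rollout restart deployment grpc-hello

Enabling the addon also updates the kubelet's --cluster-dns and --cluster-domain arguments and restarts it, which is what clears the MissingClusterDNS warning.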

answered Oct 23 '22 by Arghya Sadhu