Pods are not getting created in Kubernetes, but the deployment exists?

I have a cluster running on Azure cloud. I have a deployment of a peer service on that cluster, but pods for that deployment are not getting created. I have also scaled up the ReplicaSet for that deployment.

Even when I try to create a simple deployment of the Docker busybox image, it is not able to create the pods.

Please guide me: what could be the issue?

EDIT

Output of kubectl describe deployment:

Name:               peer0-org-myorg
Namespace:          internal
CreationTimestamp:  Tue, 28 May 2019 06:12:21 +0000
Labels:             cattle.io/creator=norman
                    workload.user.cattle.io/workloadselector=deployment-internal-peer0-org-myorg
Annotations:        deployment.kubernetes.io/revision=1
                    field.cattle.io/creatorId=user-b29mj
                    field.cattle.io/publicEndpoints=null
Selector:           workload.user.cattle.io/workloadselector=deployment-internal-peer0-org-myorg
Replicas:           1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:       workload.user.cattle.io/workloadselector=deployment-internal-peer0-org-myorg
  Annotations:  cattle.io/timestamp=2019-06-11T08:19:40Z
                field.cattle.io/ports=[[{"containerPort":7051,"dnsName":"peer0-org-myorg-hostport","hostPort":7051,"kind":"HostPort","name":"7051tcp70510","protocol":"TCP","sourcePort":7051},{"containerPo...
  Containers:
   peer0-org-myorg:
    Image:       hyperledger/fabric-peer:1.4.0
    Ports:       7051/TCP, 7053/TCP
    Host Ports:  7051/TCP, 7053/TCP
    Environment:
      CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS:  couchdb0:5984
      CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD:        root
      CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME:        root
      CORE_LEDGER_STATE_STATEDATABASE:                 CouchDB
      CORE_LOGGING_CAUTHDSL:                           INFO
      CORE_LOGGING_GOSSIP:                             WARNING
      CORE_LOGGING_GRPC:                               WARNING
      CORE_LOGGING_MSP:                                WARNING
      CORE_PEER_ADDRESS:                               peer0-org-myorg-com:7051
      CORE_PEER_ADDRESSAUTODETECT:                     true
      CORE_PEER_FILESYSTEMPATH:                        /var/hyperledger/peers/peer0/production
      CORE_PEER_GOSSIP_EXTERNALENDPOINT:               peer0-org-myorg-com:7051
      CORE_PEER_GOSSIP_ORGLEADER:                      false
      CORE_PEER_GOSSIP_USELEADERELECTION:              true
      CORE_PEER_ID:                                    peer0.org.myorg.com
      CORE_PEER_LOCALMSPID:                            orgMSP
      CORE_PEER_MSPCONFIGPATH:                         /mnt/crypto/crypto-config/peerOrganizations/org.myorg.com/peers/peer0.org.myorg.com/msp
      CORE_PEER_PROFILE_ENABLED:                       true
      CORE_PEER_TLS_CERT_FILE:                         /mnt/crypto/crypto-config/peerOrganizations/org.myorg.com/peers/peer0.org.myorg.com/tls/server.crt
      CORE_PEER_TLS_ENABLED:                           false
      CORE_PEER_TLS_KEY_FILE:                          /mnt/crypto/crypto-config/peerOrganizations/org.myorg.com/peers/peer0.org.myorg.com/tls/server.key
      CORE_PEER_TLS_ROOTCERT_FILE:                     /mnt/crypto/crypto-config/peerOrganizations/org.myorg.com/peers/peer0.org.myorg.com/tls/ca.crt
      CORE_PEER_TLS_SERVERHOSTOVERRIDE:                peer0.org.myorg.com
      CORE_VM_ENDPOINT:                                unix:///host/var/run/docker.sock
      FABRIC_LOGGING_SPEC:                             DEBUG
    Mounts:
      /host/var/run from worker1-dockersock (ro)
      /mnt/crypto from crypto (ro)
      /var/hyperledger/peers from vol2 (rw)
  Volumes:
   crypto:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  worker1-crypto-pvc
    ReadOnly:   false
   vol2:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  worker1-pvc
    ReadOnly:   false
   worker1-dockersock:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  worker1-dockersock
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  peer0-org-myorg-6d6645ddd7 (1/1 replicas created)
NewReplicaSet:   <none>
Events:          <none>
asked Jun 11 '19 by Pankaj Cheema

2 Answers

There are a million reasons why your pods could be broken, and there is plenty of information you can gather that would tell you more about why the pods are not being created. I would start with:

What are the pods saying:

kubectl get pods --all-namespaces -o wide
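
If the list is crowded, you can filter for just this deployment's pods using the namespace and label selector shown in the question's describe output:

kubectl get pods -n internal -l workload.user.cattle.io/workloadselector=deployment-internal-peer0-org-myorg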

If you can see the pods but they have errors, what do the errors say? Next, describe the broken pods.

kubectl describe pod <pod-name>
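
The Events section at the bottom of that output is usually the most telling part. For the deployment in the question, it would look something like this (the pod name suffix here is illustrative; use whatever kubectl get pods printed):

kubectl describe pod peer0-org-myorg-6d6645ddd7-xxxxx -n internal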

Or grab logs

kubectl logs <pod-name>
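
If a container started and then crashed, the current container's logs may be empty; in that case the previous container's logs are often more useful (--previous is a standard kubectl logs flag):

kubectl logs <pod-name> --previous -n <namespace>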

Maybe something went wrong with your deployment. Check the deployments.

kubectl get deployments

Describe the deployments (like pods above), look for errors.
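
A Deployment creates its pods through a ReplicaSet, so if the Deployment itself looks healthy, the ReplicaSet's events may show why pod creation is failing. The ReplicaSet name below is taken from the describe output in the question:

kubectl get replicasets -n internal
kubectl describe replicaset peer0-org-myorg-6d6645ddd7 -n internal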

We can't really help you until you provide way more information. What debugging attempts have you made so far? What errors are displayed, and where are you seeing them? What is actually happening when there's an attempt to create the pods?

kubectl Get/Describe/Log everything and let us know what's actually happening.
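
Cluster events are another good catch-all; scheduling failures, image pull errors, and quota problems all show up there (sorting by creation timestamp is optional but handy):

kubectl get events -n internal --sort-by=.metadata.creationTimestamp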

Here's a good place to start:

  • Troubleshoot clusters
  • Kubectl command cheatsheet

EDIT: Added a screenshot of troubleshooting in the Azure Portal (mentioned in the comments below).

[Screenshot: troubleshooting in the Azure Portal]

answered Oct 22 '22 by Old Schooled


It is the responsibility of the kube-apiserver (a Kubernetes control plane component) to serve your API requests, for example kubectl create ... or kubectl scale .... Actually maintaining those Kubernetes resources at the desired state is the job of the kube-controller-manager (another control plane component), and scheduling those resources onto nodes is the job of the kube-scheduler (another control plane component).
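
As a quick first check of these components, Kubernetes versions from this era expose a component status endpoint (it has since been deprecated, so treat it as a rough health signal only):

kubectl get componentstatuses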

Given the above, and assuming (I think) you are using managed Kubernetes, these components are managed by your cloud provider. But from my (on-premises Kubernetes) experience I can say that if your deployment commands are being executed correctly, then the kube-apiserver is working but the kube-controller-manager is not functioning correctly. Also, if the pods show up but are stuck in a creating status, then the problem is the kube-scheduler, which is not doing its job.
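
To see which of those two cases applies, you can list pods that the scheduler has not yet placed; such pods stay in the Pending phase (--field-selector is a standard kubectl option):

kubectl get pods --all-namespaces --field-selector=status.phase=Pending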

All in all, it is worth checking the logs of the kube-controller-manager and the kube-scheduler.
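
On a self-managed control plane (for example, one set up with kubeadm) these components typically run as static pods in the kube-system namespace, so their logs are reachable with kubectl; on a managed offering such as AKS the control plane is hidden, and you would use the provider's diagnostics instead. A sketch for the self-managed case, where the node-name suffixes are illustrative:

kubectl get pods -n kube-system
kubectl logs -n kube-system kube-controller-manager-master-0
kubectl logs -n kube-system kube-scheduler-master-0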

answered Oct 22 '22 by garlicFrancium