
Connection Refused to Kubernetes Service

I have been testing minikube to create a demo application with three services. The idea is to have a web UI that communicates with the other services. Each service is written in a different language: Node.js, Python and Go.

I created three Docker images, one for each app, and tested the code; they basically provide very simple REST endpoints. After that, I deployed them using minikube. Below is my current deployment YAML file:

---
apiVersion: v1
kind: Namespace
metadata:
  name: ngci

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-gateway
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-gateway
    spec:
      containers:
      - env:
        - name: VCSA_MANAGER
          value: http://vcsa-manager-service:7070
        name: web-gateway
        image: silvam11/web-gateway
        imagePullPolicy: Never
        ports:
          - containerPort: 8080 
        readinessProbe:
          httpGet:
            path: /status
            port: 8080
          periodSeconds: 5           

---
apiVersion: v1
kind: Service
metadata:
  name: web-gateway-service
  namespace: ngci
spec:
  selector:
    app: web-gateway
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 8080
      # Port forward to inside the pod
      #targetPort did not work with nodePort, why?
      #targetPort: 9090
      # Port accessible outside cluster
      nodePort: 30001
      #name: grpc
  type: LoadBalancer

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vcsa-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vcsa-manager
    spec:
      containers:
        - name: vcsa-manager
          image: silvam11/vcsa-manager
          imagePullPolicy: Never
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: vcsa-manager-service
  namespace: ngci
spec:
  selector:
    app: vcsa-manager
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 7070
      # Port forward to inside the pod
      #targetPort did not work with nodePort, why?
      targetPort: 9090
      # Port accessible outside cluster
      #nodePort: 30001
      #name: grpc

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: repo-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: repo-manager
    spec:
      containers:
        - name: repo-manager
          image: silvam11/repo-manager
          imagePullPolicy: Never
          ports:
            - containerPort: 8000

---
apiVersion: v1
kind: Service
metadata:
  name: repo-manager-service
  namespace: ngci
spec:
  selector:
    app: repo-manager
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 9090
      # Port forward to inside the pod
      #targetPort did not work with nodePort, why?
      #targetPort: 9090
      # Port accessible outside cluster
      #nodePort: 30001
      #name: grpc
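
For reference, manifests like the ones above are applied with kubectl (the file name here is only illustrative):

kubectl apply -f ngci.yaml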

As you can see, I created three services, but only web-gateway is defined with the LoadBalancer type. It provides two endpoints. One, named /status, lets me verify that the service is up, running and reachable.
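
Since web-gateway-service exposes NodePort 30001, the /status endpoint can be checked from outside the cluster roughly like this (a sketch for the usual single-node minikube setup):

# hit the gateway's /status endpoint through the NodePort on the minikube VM
curl http://$(minikube ip):30001/status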

The second endpoint, named /user, communicates with another k8s service. The code is very simple:

const request = require('request'); // HTTP client used below

app.post('/user', (req, res) => {
  console.log("/user called.");
  console.log("/user req.body : " + req.body);

  if(!req || !req.body)
  {
     var errorMsg = "Invalid argument sent";
     console.log(errorMsg);
     return res.status(500).send(errorMsg);
  }

  console.log("calling " + process.env.VCSA_MANAGER);
  const options = {
    url: process.env.VCSA_MANAGER,
    method: 'GET',
    headers: {
      'Accept': 'application/json'
    }
  };

  request(options, function(err, resDoc, body) {
    console.log("callback : " + body);
    if(err)
    {
      console.log("ERROR: " + err);
      return res.send(err);
    }

    console.log("statusCode : " + resDoc.statusCode);
    if(resDoc.statusCode != 200)
    {
      console.log("ERROR code: " + res.statusCode);
      return res.status(500).send(resDoc.statusCode);
    }

    return res.send({"ok" : body});
  });

});
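
For reference, the failing path is triggered from outside the cluster with a POST to that endpoint (a sketch, again going through NodePort 30001):

# /user in turn calls vcsa-manager-service via VCSA_MANAGER
curl -X POST -H "Content-Type: application/json" -d '{}' http://$(minikube ip):30001/user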

The main idea of this snippet is to use the environment variable process.env.VCSA_MANAGER to send a request to the other service. This variable is defined in my k8s deployment YAML file as http://vcsa-manager-service:7070.

The issue is that this request returns a connection error. Initially I thought it was a DNS issue, but it seems the web-gateway pod can resolve the name:

kubectl exec -it web-gateway-7b4689bff9-rvbbn -n ngci -- ping vcsa-manager-service

PING vcsa-manager-service.ngci.svc.cluster.local (10.99.242.121): 56 data bytes

The ping command from the web-gateway pod resolved the DNS name correctly. The IP is correct, as can be seen below:

kubectl get svc -n ngci

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
repo-manager-service   ClusterIP      10.102.194.179   <none>        9090/TCP         35m
vcsa-manager-service   ClusterIP      10.99.242.121    <none>        7070/TCP         35m
web-gateway-service    LoadBalancer   10.98.128.210    <pending>     8080:30001/TCP   35m
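
Note that ping only shows that the name resolves; a ClusterIP is a virtual IP and does not answer ICMP, so a more direct check is to request the service port from inside the pod (a sketch; it assumes wget is available in the web-gateway image):

kubectl exec -it web-gateway-7b4689bff9-rvbbn -n ngci -- wget -qO- http://vcsa-manager-service:7070/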

Also, as suggested, here is the kubectl describe output for the pods:

kubectl describe pods -n ngci
Name:           repo-manager-6cf98f5b54-pd2ht
Namespace:      ngci
Node:           minikube/10.0.2.15
Start Time:     Wed, 09 May 2018 17:53:54 +0100
Labels:         app=repo-manager
                pod-template-hash=2795491610
Annotations:    <none>
Status:         Running
IP:             172.17.0.10
Controlled By:  ReplicaSet/repo-manager-6cf98f5b54
Containers:
  repo-manager:
    Container ID:   docker://d2d54e42604323c8a6552b3de6e173e5c71eeba80598bfc126fbc03cae93d261
    Image:          silvam11/repo-manager
    Image ID:       docker://sha256:dc6dcbb1562cdd5f434f86696ce09db46c7ff5907b991d23dae08b2d9ed53a8f
    Port:           8000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 May 2018 10:32:49 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 09 May 2018 17:53:56 +0100
      Finished:     Wed, 09 May 2018 18:31:24 +0100
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-tbkms:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbkms
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From               Message
  ----    ------                 ----  ----               -------
  Normal  Scheduled              16h   default-scheduler  Successfully assigned repo-manager-6cf98f5b54-pd2ht to minikube
  Normal  SuccessfulMountVolume  16h   kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  Pulled                 16h   kubelet, minikube  Container image "silvam11/repo-manager" already present on machine
  Normal  Created                16h   kubelet, minikube  Created container
  Normal  Started                16h   kubelet, minikube  Started container
  Normal  SuccessfulMountVolume  3m    kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  SandboxChanged         3m    kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled                 3m    kubelet, minikube  Container image "silvam11/repo-manager" already present on machine
  Normal  Created                3m    kubelet, minikube  Created container
  Normal  Started                3m    kubelet, minikube  Started container


Name:           vcsa-manager-8696b44dff-mzq5q
Namespace:      ngci
Node:           minikube/10.0.2.15
Start Time:     Wed, 09 May 2018 17:53:54 +0100
Labels:         app=vcsa-manager
                pod-template-hash=4252600899
Annotations:    <none>
Status:         Running
IP:             172.17.0.14
Controlled By:  ReplicaSet/vcsa-manager-8696b44dff
Containers:
  vcsa-manager:
    Container ID:   docker://3e19fd8ca21a678e18eda3cb246708d10e3f1929a31859f0bb347b3461761b53
    Image:          silvam11/vcsa-manager
    Image ID:       docker://sha256:1a9cd03166dafceaee22586385ecda1c6ad3ed095b498eeb96771500092b526e
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 May 2018 10:32:54 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 09 May 2018 17:53:56 +0100
      Finished:     Wed, 09 May 2018 18:31:15 +0100
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-tbkms:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbkms
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From               Message
  ----    ------                 ----  ----               -------
  Normal  Scheduled              16h   default-scheduler  Successfully assigned vcsa-manager-8696b44dff-mzq5q to minikube
  Normal  SuccessfulMountVolume  16h   kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  Pulled                 16h   kubelet, minikube  Container image "silvam11/vcsa-manager" already present on machine
  Normal  Created                16h   kubelet, minikube  Created container
  Normal  Started                16h   kubelet, minikube  Started container
  Normal  SuccessfulMountVolume  3m    kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal  SandboxChanged         3m    kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled                 3m    kubelet, minikube  Container image "silvam11/vcsa-manager" already present on machine
  Normal  Created                3m    kubelet, minikube  Created container
  Normal  Started                3m    kubelet, minikube  Started container


Name:           web-gateway-7b4689bff9-rvbbn
Namespace:      ngci
Node:           minikube/10.0.2.15
Start Time:     Wed, 09 May 2018 17:53:55 +0100
Labels:         app=web-gateway
                pod-template-hash=3602456995
Annotations:    <none>
Status:         Running
IP:             172.17.0.12
Controlled By:  ReplicaSet/web-gateway-7b4689bff9
Containers:
  web-gateway:
    Container ID:   docker://677fbcbc053c57e4aa24c66d7f27d3e9910bc3dbb5fda4c1cdf5f99a67dfbcc3
    Image:          silvam11/web-gateway
    Image ID:       docker://sha256:b80fb05c087934447c93c958ccef5edb08b7c046fea81430819823cc382337dd
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 10 May 2018 10:32:54 +0100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 09 May 2018 17:53:57 +0100
      Finished:     Wed, 09 May 2018 18:31:16 +0100
    Ready:          True
    Restart Count:  1
    Readiness:      http-get http://:8080/status delay=0s timeout=1s period=5s #success=1 #failure=3
    Environment:
      VCSA_MANAGER:  http://vcsa-manager-service:7070
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbkms (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-tbkms:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbkms
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age              From               Message
  ----     ------                 ----             ----               -------
  Normal   Scheduled              16h              default-scheduler  Successfully assigned web-gateway-7b4689bff9-rvbbn to minikube
  Normal   SuccessfulMountVolume  16h              kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal   Pulled                 16h              kubelet, minikube  Container image "silvam11/web-gateway" already present on machine
  Normal   Created                16h              kubelet, minikube  Created container
  Normal   Started                16h              kubelet, minikube  Started container
  Warning  Unhealthy              16h              kubelet, minikube  Readiness probe failed: Get http://172.17.0.13:8080/status: dial tcp 172.17.0.13:8080: getsockopt: connection refused
  Normal   SuccessfulMountVolume  3m               kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-tbkms"
  Normal   SandboxChanged         3m               kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled                 3m               kubelet, minikube  Container image "silvam11/web-gateway" already present on machine
  Normal   Created                3m               kubelet, minikube  Created container
  Normal   Started                3m               kubelet, minikube  Started container
  Warning  Unhealthy              3m (x3 over 3m)  kubelet, minikube  Readiness probe failed: Get http://172.17.0.12:8080/status: dial tcp 172.17.0.12:8080: getsockopt: connection refused

Here are the pods in the ngci namespace:

silvam11@ubuntu:~$ kubectl get pods -n ngci
NAME                            READY     STATUS    RESTARTS   AGE
repo-manager-6cf98f5b54-pd2ht   1/1       Running   1          16h
vcsa-manager-8696b44dff-mzq5q   1/1       Running   1          16h
web-gateway-7b4689bff9-rvbbn    1/1       Running   1          16h

What am I missing here? Is it a firewall?

Mauro





2 Answers

You misconfigured the port numbers.

First, the vcsa-manager container listens on port 8080, but vcsa-manager-service points targetPort at 9090, so the service forwards traffic to a port nothing is listening on. Likewise, the repo-manager container listens on port 8000, but in repo-manager-service you commented out targetPort, so it defaults to the service port (9090) and again misses the container.

You should map each service's targetPort to the port its container actually listens on.
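
As a reminder of how the three port fields relate (an illustrative sketch using the vcsa-manager numbers; the nodePort line only applies to NodePort/LoadBalancer services):

ports:
  - protocol: "TCP"
    port: 7070        # the port the Service itself listens on (ClusterIP:7070)
    targetPort: 8080  # must match the containerPort the pod actually listens on
    #nodePort: 30001  # port exposed on every node, NodePort/LoadBalancer only
# if targetPort is omitted, it defaults to the same value as port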

The fixed config will look like this:

---
apiVersion: v1
kind: Namespace
metadata:
  name: ngci

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-gateway
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-gateway
    spec:
      containers:
      - env:
        - name: VCSA_MANAGER
          value: http://vcsa-manager-service:7070
        name: web-gateway
        image: silvam11/web-gateway
        imagePullPolicy: Never
        ports:
          - containerPort: 8080 
        readinessProbe:
          httpGet:
            path: /status
            port: 8080
          periodSeconds: 5           

---
apiVersion: v1
kind: Service
metadata:
  name: web-gateway-service
  namespace: ngci
spec:
  selector:
    app: web-gateway
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
      nodePort: 30001
      
  type: LoadBalancer

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vcsa-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vcsa-manager
    spec:
      containers:
        - name: vcsa-manager
          image: silvam11/vcsa-manager
          imagePullPolicy: Never
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: vcsa-manager-service
  namespace: ngci
spec:
  selector:
    app: vcsa-manager
  ports:
    - protocol: "TCP"
      port: 7070
      targetPort: 8080

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: repo-manager
  namespace: ngci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: repo-manager
    spec:
      containers:
        - name: repo-manager
          image: silvam11/repo-manager
          imagePullPolicy: Never
          ports:
            - containerPort: 8000

---
apiVersion: v1
kind: Service
metadata:
  name: repo-manager-service
  namespace: ngci
spec:
  selector:
    app: repo-manager
  ports:
    - protocol: "TCP"
      port: 9090
      targetPort: 8000

I’ve just fixed all ports in your config.
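
After re-applying the manifests you can verify that every Service now points at a real pod port, and repeat the request the gateway itself makes (a sketch; the pod name will differ in your cluster, and wget is assumed to exist in the image):

# each Service should list a pod IP:port pair under ENDPOINTS
kubectl get endpoints -n ngci
# call vcsa-manager-service the same way the gateway does
kubectl exec -it <web-gateway-pod> -n ngci -- wget -qO- http://vcsa-manager-service:7070/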


Nick Rak


For anyone looking for an answer as to why it is not working (even if it is not exactly the same problem as above):

I had 'none' as my VM driver, running everything on the host; with that setup, whatever you do, it will not work. When using the default driver, you will be able to access the exposed ports.
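
If that is your situation, recreating the cluster with a VM-based driver is one way out (a sketch; it assumes VirtualBox is installed):

minikube delete
minikube start --vm-driver=virtualbox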


SomeOne_1