 

Kubernetes LoadBalancer Service returning empty response

  1. Node.js Express server bound to port 8080
const port = 8080
server.listen(port, () => {
  logger.log({
    level: 'info',
    message: 'Listening on port ' + port
  })
})
  2. Docker image with the Node.js code and npm modules, with port 8080 exposed
FROM node:10-alpine

...

# Expose port
EXPOSE 8080
  3. Kubernetes Deployment of the Docker image with containerPort 8080 configured
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  selector:
    matchLabels:
      app: deployment
  replicas: 2
  template:
    metadata:
      labels:
        app: deployment
    spec:
      containers:
      - name: job-id-20
        image: redacted/redacted
        command: ["node", "backend/services.js"]
        ports:
        - name: http-port
          containerPort: 8080
      imagePullSecrets:
      - name: docker-hub-credentials
      dnsConfig:
        options:
          - name: ndots
            value: "0"
  4. Kubernetes Service with a selector matching the app, a targetPort of 8080, and type LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  ports:
    - protocol: TCP
      targetPort: 8080
      port: 8080
  selector:
    app: deployment
  type: LoadBalancer
  5. Verify the load balancer has an external IP (I scrubbed it)
$ kubectl --kubeconfig="k8s-1-13-4-do-0-nyc1-1552761372568-kubeconfig.yaml" get service/service
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
service   LoadBalancer   10.245.239.60   1x4.2x9.1x8.x2   8080:30626/TCP   113s
  6. curl fails with an empty response
$ curl --verbose http://1x4.2x9.1x8.x2:8080/
*   Trying 1x4.2x9.1x8.x2...
* TCP_NODELAY set
* Connected to 1x4.2x9.1x8.x2 (1x4.2x9.1x8.x2) port 8080 (#0)
> GET / HTTP/1.1
> Host: 1x4.2x9.1x8.x2:8080
> User-Agent: curl/7.54.0
> Accept: */*
> 
* Empty reply from server
* Connection #0 to host 1x4.2x9.1x8.x2 left intact
curl: (52) Empty reply from server

I'd expect the traffic to route through the service to one of the pods/replicas in the deployment. What am I doing wrong?

asked Mar 17 '19 by Brandon Ros



1 Answer

There are several potential sources of error here.

The first potential problem is that your Docker image does not work as expected. To rule that out, use nginx:latest as the image and see whether that works. If it does, the Kubernetes side is working correctly, and you can investigate your Docker image further.
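One way to run that isolation test, sketched with kubectl (the deployment, container, and service names are the ones from the manifests in the question; the --kubeconfig flag is omitted for brevity):

```shell
# Swap the app container's image for stock nginx.
kubectl set image deployment/deployment job-id-20=nginx:latest

# nginx listens on port 80, not 8080, so point the Service's
# targetPort at 80 while testing.
kubectl patch service service \
  -p '{"spec":{"ports":[{"protocol":"TCP","port":8080,"targetPort":80}]}}'

# If this now returns the nginx welcome page, routing through the
# LoadBalancer and Service is fine and the app image is the suspect.
curl --verbose http://<external-ip>:8080/
```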

As far as I can see, your code snippet does not register any request handler, so the server never sends any data back.

You can experiment with your image locally by using the docker run command, as indicated in the comments above.

If it still does not work with the nginx image, then you have to investigate further on the Kubernetes side.

Although LoadBalancer is a standard Kubernetes service type, its implementation differs between cloud providers and on-premises installations.

Consult your Kubernetes distribution's or cloud provider's documentation to find out whether the LoadBalancer is configured correctly.

To see whether the service can reach the pods, use the command kubectl get endpoints.

For further debugging, you can use the kubectl port-forward command to create a tunnel to either one of the pods or to the service, and then run the curl command through the established tunnel.

You can also use the kubectl logs command to see any log output from your pods.
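Put together, the debugging steps above might look like this (service and label names are taken from the question's manifests; the --kubeconfig flag is omitted for brevity):

```shell
# 1. Does the Service actually select any pods? An empty ENDPOINTS
#    column means the selector does not match the pod labels.
kubectl get endpoints service

# 2. Tunnel past the cloud LoadBalancer and talk to the Service
#    (or a single pod) directly.
kubectl port-forward service/service 8080:8080 &
curl --verbose http://localhost:8080/

# 3. Check the pods' log output for startup errors or crashes.
kubectl logs -l app=deployment
```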

answered Oct 12 '22 by Guido Müller