const port = 8080

server.listen(port, () => {
  logger.log({
    level: 'info',
    message: 'Listening on port ' + port
  })
})
FROM node:10-alpine
...
# Expose port
EXPOSE 8080
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  selector:
    matchLabels:
      app: deployment
  replicas: 2
  template:
    metadata:
      labels:
        app: deployment
    spec:
      containers:
        - name: job-id-20
          image: redacted/redacted
          command: ["node", "backend/services.js"]
          ports:
            - name: http-port
              containerPort: 8080
      imagePullSecrets:
        - name: docker-hub-credentials
      dnsConfig:
        options:
          - name: ndots
            value: "0"
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  ports:
    - protocol: TCP
      targetPort: 8080
      port: 8080
  selector:
    app: deployment
  type: LoadBalancer
$ kubectl --kubeconfig="k8s-1-13-4-do-0-nyc1-1552761372568-kubeconfig.yaml" get service/service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service LoadBalancer 10.245.239.60 1x4.2x9.1x8.x2 8080:30626/TCP 113s
$ curl --verbose http://1x4.2x9.1x8.x2:8080/
* Trying 1x4.2x9.1x8.x2...
* TCP_NODELAY set
* Connected to 1x4.2x9.1x8.x2 (1x4.2x9.1x8.x2) port 8080 (#0)
> GET / HTTP/1.1
> Host: 1x4.2x9.1x8.x2:8080
> User-Agent: curl/7.54.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 1x4.2x9.1x8.x2 left intact
curl: (52) Empty reply from server
I'd expect the traffic to route through the service to one of the pods/replicas in the deployment. What am I doing wrong?
With a Service of type LoadBalancer, the cloud provider's load balancer forwards incoming connections to the cluster's nodes, and kube-proxy then spreads them across the Pods that match the Service's selector.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
A Kubernetes Service sits in front of a group of Pods and load-balances traffic among them. It provides a stable endpoint for service discovery, so other Pods, and depending on the Service type also external clients, can reach the group through a single address.
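For example, once the Service exists, another Pod in the cluster can reach the backend by the Service's DNS name. A quick way to check this, assuming the cluster can pull the curlimages/curl image (the service name and port are taken from the manifests above):

# Start a throwaway pod and curl the Service by its DNS name
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl --verbose http://service:8080/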
There are several potential sources of error here.
The first potential problem is that your Docker image does not work as expected. You can test this by using nginx:latest as your image: if that works, the Kubernetes side is configured correctly and you can investigate your Docker image further.
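One way to run that test, as a sketch against the manifests above (note that nginx listens on port 80, so the Deployment's containerPort and the Service's targetPort would have to be changed to 80 for the duration of the test):

# Temporarily swap the container image on the existing Deployment
# (container name taken from the Deployment manifest)
kubectl set image deployment/deployment job-id-20=nginx:latest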
As far as I can see, your code snippet does not contain any code that actually sends a response back to the client.
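As a sketch only, assuming your server comes from Node's built-in http module (your actual setup may use Express or another framework), a handler that returns a response could look like this:

const http = require('http')

const port = 8080

// The request handler is what actually sends a response back to the client
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' })
  res.end('Hello from backend/services.js\n')
})

server.listen(port, () => {
  console.log('Listening on port ' + port)
})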
You can experiment with your image by using the docker run command, as indicated in the comments above.
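For instance (the image name is the redacted one from your manifest; the port mapping is an assumption based on EXPOSE 8080):

# Run the image locally, mapping container port 8080 to the host
docker run --rm -p 8080:8080 redacted/redacted
# In a second terminal, check whether the container answers
curl --verbose http://localhost:8080/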
If it still does not work with the nginx image, then you have to investigate further on the Kubernetes side.
Although LoadBalancer is a standard Kubernetes Service type, its implementation differs between cloud providers and on-premises installations. Consult your Kubernetes distribution's or cloud provider's documentation to find out whether the LoadBalancer is configured correctly.
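Independent of the provider, one generic check is to describe the Service and look at its endpoints and events:

# Shows the external IP, node port, endpoints and any provisioning events
kubectl describe service service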
To see whether the Service can reach the Pods, you can use the kubectl get endpoints command.
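With the manifests above that would be the following; an empty ENDPOINTS column means the selector does not match any ready Pods:

# Lists the pod IP:port pairs the Service currently routes to
kubectl get endpoints service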
For further debugging you can use the kubectl port-forward command to create a tunnel either to one of the Pods or to the Service, and then run the curl command through the established tunnel.
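For example (forwarding to local port 8080 is just a convention; any free local port works):

# Tunnel to the Service, bypassing the external load balancer
kubectl port-forward service/service 8080:8080
# Or tunnel to one pod of the Deployment
kubectl port-forward deployment/deployment 8080:8080
# Then, in another terminal:
curl --verbose http://localhost:8080/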
You can also use the kubectl logs command to see any log output from your Pods.
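For example:

# List the pods created by the Deployment (label taken from the manifest above)
kubectl get pods -l app=deployment
# Show the logs of one pod picked from the Deployment
kubectl logs deployment/deployment
# Or follow the logs of a specific pod
kubectl logs -f <pod-name>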