If I run through the HTTP load balancer example, it works fine in my Google Container Engine project. When I run "kubectl describe ing", the backend is listed as "HEALTHY". If I then swap the service out for one that points to my app, as shown here:
apiVersion: v1
kind: Service
metadata:
  name: app
  labels:
    name: app
spec:
  ports:
  - port: 8000
    name: http
    targetPort: 8000
  selector:
    name: app
  type: NodePort
The app I'm running is Django behind Gunicorn, and it works just fine if I make the service type LoadBalancer instead of NodePort. The ingress is defined as:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
spec:
  backend:
    serviceName: app
    servicePort: 8000
Now when I run "kubectl describe ing" the backend is listed as "UNHEALTHY" and all requests to the ingress IP give a 502.
Check whether the pod and its containers are running. If the pod or one of its containers did not start, this can result in a 502 error for clients accessing the application running in the pod. You can verify this with "kubectl get pods" and "kubectl describe pod <pod-name>". If the entire pod or a required container is not running, restart the pod or force Kubernetes to reschedule it.
A 502 Bad Gateway message indicates that one server received an invalid response from another. In essence, you've connected to some kind of intermediary (like an edge server or load balancer) that is supposed to fetch everything needed to load the page. Something in that process went wrong, and the 502 reports the failure.
After a lot of digging I found the answer. According to the prerequisites here: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites the application must return a 200 status code at '/'. Because my application was returning a 302 (a redirect to the login page), the health check was failing. When the health check fails, the ingress resource returns a 502.
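Per the same GLBC prerequisites, the controller can also derive the load balancer health check from an HTTP readinessProbe on the pods behind the service, so an alternative to changing what '/' does is to point a probe at an unauthenticated endpoint that returns 200. Below is a minimal sketch; the Deployment name, image, and the /healthz path are assumptions for illustration, not part of the original question:

# Minimal sketch (assumed names/image): a readinessProbe against an
# unauthenticated endpoint returning 200, which the GCE ingress
# controller can use as the health check path.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: app          # must match the Service selector above
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/app:latest   # hypothetical image
        ports:
        - containerPort: 8000
        readinessProbe:
          httpGet:
            path: /healthz   # hypothetical endpoint that returns 200
            port: 8000

Whatever path the probe uses, it must return a 200 without redirecting, or the backend will stay UNHEALTHY.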