I'm having some trouble getting the Nginx ingress controller working in my Kubernetes cluster. I have created the nginx-ingress deployments, services, roles, etc., according to https://kubernetes.github.io/ingress-nginx/deploy/
I also deployed a simple hello-world app which listens on port 8080:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hello-world
  namespace: default
spec:
  selector:
    matchLabels:
      name: hello-world
  template:
    metadata:
      labels:
        name: hello-world
    spec:
      containers:
      - name: hello-world
        image: myrepo/hello-world
        resources:
          requests:
            memory: 200Mi
            cpu: 150m
          limits:
            cpu: 300m
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
And created a service for it:
kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - name: server
    port: 8080
Finally, I created a TLS secret (my-tls-secret) and deployed the nginx ingress per the instructions. For example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-world
  namespace: default
spec:
  rules:
  - host: hello-world.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: server
  tls:
  - hosts:
    - hello-world.mydomain.com
    secretName: my-tls-cert
However, I am unable to reach my application, and in the controller logs I see:
W0103 19:11:15.712062 6 controller.go:826] Service "default/hello-world" does not have any active Endpoint.
I0103 19:11:15.712254 6 controller.go:172] Configuration changes detected, backend reload required.
I0103 19:11:15.864774 6 controller.go:190] Backend successfully reloaded.
I am not sure why it says Service "default/hello-world" does not have any active Endpoint. I have used a similar service definition with the Traefik ingress controller without any issues.
I'm hoping I'm missing something obvious with the nginx ingress. Any help you can provide would be appreciated!
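For reference, inspecting the Endpoints object directly confirms what the controller is complaining about (a diagnostic sketch, assuming kubectl access to the cluster):

```shell
# The Endpoints object carries the Pod IPs the Service selector matched;
# the ENDPOINTS column shows <none> when no Pods match the selector
kubectl get endpoints hello-world -n default
```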
I discovered what I was doing wrong. In my application definition I was using name as my selector:
selector:
  matchLabels:
    name: hello-world
template:
  metadata:
    labels:
      name: hello-world
Whereas in my service I was using app
selector:
  app: hello-world
After updating my application definition to use app as well, it worked:
selector:
  matchLabels:
    app: hello-world
template:
  metadata:
    labels:
      app: hello-world
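A quick way to verify that the Service selector now matches running Pods (a sketch using the label from the manifests above):

```shell
# Pods carrying the label the Service selects on; an empty list
# is exactly the "does not have any active Endpoint" situation
kubectl get pods -n default -l app=hello-world

# After the fix, the Endpoints object should list the Pod IPs
kubectl get endpoints hello-world -n default
```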
Another situation where this can happen is when the ingress class of the ingress controller does not match the ingress class in the Ingress resource manifest used for your services.
Nginx ingress controller installation command, short example:
helm install stable/nginx-ingress \
  --name ${INGRESS_RELEASE_NAME} \
  --namespace ${K8S_NAMESPACE} \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=${K8S_NAMESPACE} \
  --set controller.ingressClass=${NGINX_INGRESS_CLASS}
Ingress resource spec, excerpt:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
  annotations:
    # following line is not valid for K8s or Helm,
    # but reflects that the values must be the same
    kubernetes.io/ingress.class: ${NGINX_INGRESS_CLASS}
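One way to check which class the deployed controller is actually watching is to look for the --ingress-class argument on the controller Pods (a sketch; the exact labels depend on your chart release, here assuming the stable/nginx-ingress chart's default app label):

```shell
# Print the controller's launch arguments and look for --ingress-class;
# it must match the kubernetes.io/ingress.class annotation on the Ingress
kubectl -n ${K8S_NAMESPACE} get pods -l app=nginx-ingress \
  -o jsonpath='{.items[*].spec.containers[*].args}'
```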
In our case, this was caused by having the Ingress resource definition in a different namespace than the services.
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: nginx-ingress-rules
  namespace: default # <= make sure this is the same namespace as the services you are trying to reach
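To spot this quickly, listing Ingresses and Services with their namespaces side by side helps (illustrative):

```shell
# Compare the NAMESPACE columns: an Ingress can only reference
# Services in its own namespace
kubectl get ingress --all-namespaces
kubectl get svc --all-namespaces
```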
In my case, I included an "id" directive in my Service selector that was missing from the Deployment's Pod template labels, and this prevented the endpoints controller from finding the correct Pods. I expect the reverse case would also fail:
---
apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  ports:
  - name: port-name
    port: 1234
    protocol: TCP
  selector:
    app: some-app
    id: "0" ## include in both or neither
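For completeness, a matching Deployment excerpt would need the same pair of labels on its Pod template (a sketch; the names and image here are illustrative, mirroring the Service above):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-app
spec:
  selector:
    matchLabels:
      app: some-app
      id: "0"         ## must mirror the Service selector
  template:
    metadata:
      labels:
        app: some-app
        id: "0"       ## include in both or neither
    spec:
      containers:
      - name: some-app
        image: example/some-app   # illustrative image
        ports:
        - containerPort: 1234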