I have set up an Ingress for an application but want to whitelist my IP address. So I created this Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: ${MY_IP}/32
  name: ${INGRESS_NAME}
spec:
  rules:
  - host: ${DNS_NAME}
    http:
      paths:
      - backend:
          serviceName: ${SVC_NAME}
          servicePort: ${SVC_PORT}
  tls:
  - hosts:
    - ${DNS_NAME}
    secretName: tls-secret
But when I try to access it I get a 403 Forbidden, and in the nginx logs the client IP is one of the cluster nodes rather than my home IP.
I also created a ConfigMap with this configuration:
data:
  use-forwarded-headers: "true"
In the nginx.conf inside the container I can see this has been picked up correctly, but I still get a 403 Forbidden and the logged client IP is still a cluster node.
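(For reference, a complete ConfigMap along these lines would look roughly as follows; the name and namespace here are placeholders and must match whatever the controller was started with via its --configmap flag:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller    # placeholder; must match the controller's --configmap flag
  namespace: ingress-nginx          # placeholder namespace
data:
  # Trust X-Forwarded-* headers set by an upstream proxy or load balancer
  use-forwarded-headers: "true"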
I am running on an AKS cluster and the nginx ingress controller sits behind an Azure load balancer. The ingress controller Service is exposed as type LoadBalancer, and the load balancer forwards to the NodePort opened by that Service.
Do I need to configure something else within Nginx?
If you've installed nginx-ingress with the Helm chart, you can simply set controller.service.externalTrafficPolicy: Local in your values.yaml file, which I believe will apply to all of your Services. Otherwise, you can set spec.externalTrafficPolicy: Local on specific Services to achieve the same effect on those Services.
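For example, with the community ingress-nginx Helm chart the values.yaml override would look roughly like this (a sketch; the key layout assumes that chart):

controller:
  service:
    # Route external traffic only to nodes running a controller pod,
    # which preserves the original client source IP
    externalTrafficPolicy: Local

With externalTrafficPolicy: Local, kube-proxy no longer SNATs incoming connections to a node IP, so the controller (and the whitelist-source-range annotation) sees your real client IP instead of a cluster node.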
It sounds like you have your Nginx Ingress Controller behind a NodePort (or LoadBalancer) Service, i.e. behind kube-proxy. Generally, to get your controller to see the raw connecting IP you would need to deploy it with hostNetwork so it listens directly for incoming traffic (see the sketch below).
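A minimal sketch of that approach, assuming a DaemonSet-style deployment (the name, namespace, labels, and image below are placeholders; controller args, RBAC, and probes are omitted):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller   # placeholder name
  namespace: ingress-nginx         # placeholder namespace
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true                  # bind directly to the node's network interface
      dnsPolicy: ClusterFirstWithHostNet # keep cluster DNS working with hostNetwork
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # placeholder image/tag
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443

Note that with hostNetwork the controller binds ports 80/443 on every node it runs on, so it is usually run as a DaemonSet and traffic is pointed at the nodes directly rather than at a LoadBalancer Service.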