We are running an API server on GKE (Google Kubernetes Engine). We handle our authorization using Google Cloud Endpoints and API keys, and we whitelist certain IP addresses on every API key. To make this work we had to switch from a LoadBalancer service to an Ingress for exposing our API server, because the IP whitelisting does not work with the LoadBalancer service. Our Ingress setup now looks similar to this:
apiVersion: v1
kind: Service
metadata:
  name: echo-app-nodeport
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: esp-echo
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "RESERVED_IP"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: SECRET_NAME
  backend:
    serviceName: echo-app-nodeport
    servicePort: 80
This setup functions fine and the IP whitelisting works. My concern now lies primarily with the NodePort service that seems to be needed to make the ingress controller work. I have read multiple sources [1][2] that strongly advise against using NodePorts to expose your application, yet most examples I find use this NodePort + Ingress combo. Can we safely use this setup, or should we migrate to another ingress controller (NGINX, Traefik, ...)?
nodePort: The port on the node which is used to access the web server externally. These ports can only be in the range 30000 to 32767. The field is not mandatory; if it is not provided, a free port from that range is selected.
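For illustration, a minimal sketch of a Service that pins the node port explicitly (the port number 30080 is an arbitrary choice here; omit the nodePort field entirely to let Kubernetes pick a free one):

apiVersion: v1
kind: Service
metadata:
  name: echo-app-nodeport-pinned
spec:
  type: NodePort
  selector:
    app: esp-echo
  ports:
  - port: 80          # port the Service exposes inside the cluster
    targetPort: 8080  # port the pod actually listens on
    nodePort: 30080   # must be within 30000-32767; auto-assigned if omitted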
An ingress controller in Kubernetes is the application deployed to implement the Ingress rules. Ingress is not a Service type like NodePort, ClusterIP, or LoadBalancer; it acts as a proxy that brings traffic into the cluster and then uses internal service routing to get the traffic where it needs to go.
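As a sketch of that internal routing, here is a minimal fan-out Ingress (the static-assets backend is a hypothetical second service, purely for illustration) using the same extensions/v1beta1 API version as the manifest above:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-example
spec:
  rules:
  - http:
      paths:
      - path: /api        # traffic for /api is proxied to the echo service
        backend:
          serviceName: echo-app-nodeport
          servicePort: 80
      - path: /static     # traffic for /static goes to a second (hypothetical) service
        backend:
          serviceName: static-assets
          servicePort: 80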
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType. NodePort: Exposes the Service on each Node's IP at a static port (the NodePort).
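A ClusterIP Service for the same pods would look like this sketch (since ClusterIP is the default, the type line could be omitted):

apiVersion: v1
kind: Service
metadata:
  name: echo-app-clusterip
spec:
  type: ClusterIP      # default type; reachable only from inside the cluster
  selector:
    app: esp-echo
  ports:
  - port: 80
    targetPort: 8080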
You can have only ClusterIP-type Services for all your workload pods and a single LoadBalancer Service to expose the ingress controller itself outside the cluster. That way you can avoid NodePort Services completely.
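A sketch of that pattern, assuming an NGINX ingress controller whose pods carry the label app: nginx-ingress (the label and namespace here are assumptions; the official manifests and Helm charts use their own names):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer    # the only Service exposed outside the cluster
  selector:
    app: nginx-ingress  # assumed label on the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443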
My suspicion is that the GKE ingress actually lives outside of your GKE cluster and forwards the traffic into your cluster over the NodePort. This is probably why the combination of GKE ingress and services exposed over ClusterIP doesn't work.
If you deploy an NGINX Ingress Controller on your GKE cluster, it creates an ingress gateway from within your cluster (instead of forwarding into it) and can therefore communicate with services exposed over ClusterIP.
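To have the in-cluster NGINX controller (rather than GKE's external load balancer) handle an Ingress, the classic annotation-based class selection looks like this sketch, with the backend switched to the ClusterIP Service from the example above:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-app-ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"  # handled by the NGINX controller, not GCLB
spec:
  tls:
  - secretName: SECRET_NAME
  backend:
    serviceName: echo-app-clusterip       # ClusterIP Service from the sketch above
    servicePort: 80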