The problem is that I can only route HTTP or HTTPS requests via my nginx ingress controller. How can I send non-HTTP requests (e.g. database or CORBA traffic) via ingress to my containers?
The Ingress resource only supports rules for directing HTTP(S) traffic. When you use an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster, but the rules themselves are defined in HTTP terms.
The Ingress itself has no power. It is a configuration request for the ingress controller that allows the user to define how external clients are routed to a cluster's internal Services. The ingress controller hears this request and adjusts its configuration to do what the user asks.
Kubernetes Ingress is an API object that provides routing rules to manage access to the services within a Kubernetes cluster. This typically uses HTTPS and HTTP protocols to facilitate the routing. Ingress is the ideal choice for a production environment.
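To see why raw TCP/UDP cannot be expressed here, consider a minimal Ingress manifest (names are placeholders): every rule is described in terms of HTTP concepts such as host and path, so there is simply no field for an arbitrary TCP or UDP port.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com        # HTTP host matching
    http:
      paths:
      - path: /              # HTTP path matching
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 8080
```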
This is not well supported via the Ingress mechanism and is an open issue.
There is a workaround for TCP or UDP traffic using nginx-ingress, which maps an exposed port to a Kubernetes Service using a ConfigMap.
See this doc.
Start the ingress controller with the tcp-services-configmap (and/or udp-services-configmap) argument.
args:
- "/nginx-ingress-controller"
- "--tcp-services-configmap=default/nginx-tcp-configmap"
- "--v=2"
Deploy the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-tcp-configmap
data:
  "9000": "default/example-service:8080"
where 9000 is the port exposed on the ingress controller and 8080 is the target Service's port. (Note the key must be quoted: ConfigMap data keys are strings.)
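For completeness, a sketch of the backing Service the ConfigMap entry points at; the name, namespace, and Pod label are assumptions matching the "default/example-service:8080" value above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: default
spec:
  selector:
    app: example        # assumed Pod label
  ports:
  - port: 8080          # the port referenced by the ConfigMap entry
    targetPort: 8080
```

Remember that port 9000 must also be reachable on the controller itself (via its Service, hostPort, or host network), since the ConfigMap only tells nginx what to do with traffic that arrives there.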
I'm using an nginx-ingress-controller on a bare-metal server. In order to reach the hosted sites from all nodes, I created it as a DaemonSet rather than a Deployment (see "Bare-metal considerations").
The solution works well and updates on the Ingress specifications are perfectly integrated.
To make a TS server available, I changed the args for the Pods in nginx-ingress-controller.yml, as mentioned by stacksonstacks:
/nginx-ingress-controller
--configmap=$(POD_NAMESPACE)/nginx-configuration
--publish-service=$(POD_NAMESPACE)/ingress-nginx
--annotations-prefix=nginx.ingress.kubernetes.io
--tcp-services-configmap=default/tcp-ingress-configmap
--udp-services-configmap=default/udp-ingress-configmap
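The two ConfigMaps those arguments reference would look roughly like this; the mapped services and ports are placeholders, not values from the original setup:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-ingress-configmap
  namespace: default
data:
  "9000": "default/example-tcp-service:9000"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-ingress-configmap
  namespace: default
data:
  "5353": "default/example-udp-service:5353"
```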
Unfortunately, when I applied the changed specification, the DaemonSet did not automatically recreate the Pods, so inspecting them still showed the old args:
/nginx-ingress-controller
--configmap=$(POD_NAMESPACE)/nginx-configuration
--publish-service=$(POD_NAMESPACE)/ingress-nginx
--annotations-prefix=nginx.ingress.kubernetes.io
Deleting the Pods inside the ingress-nginx namespace with kubectl --namespace ingress-nginx delete pod --all made the controller create new Pods, and finally the ports were available on the host network.
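On newer kubectl versions there is a gentler alternative to deleting the Pods by hand: a rolling restart of the DaemonSet. The DaemonSet name below is an assumption; check yours with kubectl -n ingress-nginx get daemonset.

```shell
# Roll the DaemonSet so its Pods are recreated with the updated args
kubectl --namespace ingress-nginx rollout restart daemonset nginx-ingress-controller
```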
I know the circumstances might be a bit different, but hopefully someone can save a few minutes with this.