So I'm working on a project that involves managing many Postgres instances inside a k8s cluster. Each instance is managed using a StatefulSet with a Service for network communication. I need to expose each Service to the public internet via DNS on port 5432.
The most natural approach here is to use a Service of type LoadBalancer and something like external-dns to dynamically map a DNS name to a load balancer endpoint. This is great for many types of services, but for databases there is one massive limitation: the idle connection timeout. AWS ELBs have a maximum idle timeout limit of 4000 seconds. Many long-running analytical queries/transactions easily exceed that amount of time, not to mention potentially long-running operations like pg_restore.
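For concreteness, the setup I'm describing looks roughly like this (a sketch only; the hostname and idle-timeout annotation names are my assumptions from the external-dns and AWS cloud-provider docs, and postgres-one is a placeholder):

```yaml
# Sketch of the LoadBalancer + external-dns approach, assuming external-dns
# is running in the cluster and the AWS cloud provider is in use.
apiVersion: v1
kind: Service
metadata:
  name: postgres-one
  annotations:
    # external-dns watches for this annotation and creates the DNS record
    external-dns.alpha.kubernetes.io/hostname: postgres-one.example.com
    # 4000 seconds is the ELB maximum -- the limitation described above
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"
spec:
  type: LoadBalancer
  selector:
    app: postgres-one
  ports:
    - port: 5432
      targetPort: 5432
```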
So I need some kind of solution that allows me to work around the limitations of load balancers. Node IPs are out of the question, since I will need port 5432 exposed for every single Postgres instance in the cluster. Ingress also seems less than ideal, since it's a layer 7 proxy that only supports HTTP/HTTPS. I've seen workarounds with nginx-ingress involving some ConfigMap chicanery, but I'm a little worried about committing to hacks like that for a large project. ExternalName is intriguing, but even if I can find better documentation on it, I think it may end up having similar limitations as NodeIP.
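For what it's worth, my (possibly wrong) understanding of ExternalName is that it only creates a DNS alias inside the cluster, e.g.:

```yaml
# An ExternalName Service just returns a CNAME from the cluster DNS;
# it does not open any port or proxy any traffic, so it wouldn't help
# with exposing 5432 to the internet. Names here are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: postgres-upstream
spec:
  type: ExternalName
  externalName: postgres-one.example.com
```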
Any suggestions would be greatly appreciated.
Kubernetes handles load balancing through load balancers, which can be internal or external. An internal load balancer routes traffic across containers within the cluster; in essence, internal load balancers help you optimize in-cluster load balancing.
Ways to connect: you have several options for connecting to nodes, Pods and Services from outside the cluster. You can access Services through public IPs by using a Service of type NodePort or LoadBalancer to make the Service reachable outside the cluster. See the Services and kubectl expose documentation.
Cluster networking provides communication between different Pods. The Service resource lets you expose an application running in Pods to be reachable from outside your cluster.
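As a sketch of the NodePort option mentioned above: Kubernetes allocates node ports from a fixed range (30000-32767 by default), which is why you cannot simply expose port 5432 on the nodes for every instance. Names below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-one
spec:
  type: NodePort
  selector:
    app: postgres-one
  ports:
    - port: 5432
      targetPort: 5432
      # nodePort must fall within the cluster's node port range
      # (30000-32767 by default); a value of 5432 would be rejected
      nodePort: 30432
```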
The Kubernetes ingress controller implementation Contour from Heptio can proxy TCP streams when they are encapsulated in TLS. The TLS encapsulation is required so that the SNI field of the handshake message can be used to direct the connection to the correct backend service.
Contour can handle standard Ingress resources, but it additionally introduces a new ingress API, IngressRoute, which is implemented via a CRD. The TLS connection can be terminated at your backend service. An IngressRoute might look like this:
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: postgres
  namespace: postgres-one
spec:
  virtualhost:
    fqdn: postgres-one.example.com
    tls:
      passthrough: true
  tcpproxy:
    services:
      - name: postgres
        port: 5432
  routes:
    - match: /
      services:
        - name: dummy
          port: 80
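Because routing is based on the SNI server name, each Postgres instance can get its own IngressRoute behind the same load balancer and port. A second instance might look like this (the postgres-two namespace and hostname are assumptions for illustration):

```yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: postgres
  namespace: postgres-two
spec:
  virtualhost:
    # a different fqdn per instance is what lets SNI pick the backend
    fqdn: postgres-two.example.com
    tls:
      passthrough: true
  tcpproxy:
    services:
      - name: postgres
        port: 5432
  routes:
    - match: /
      services:
        - name: dummy
          port: 80
```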