
Kubernetes nodes behind NAT service exposure

I'm trying to get a Kubernetes cluster working with some nodes behind NAT and without a public IP address. (Why I need this is a different story.)

There are 3 nodes:

  1. Kubernetes cluster master (with public IP address)
  2. Node1 (with public IP address)
  3. Node2 (works behind NAT on my laptop as a VM, no public IP address)

All three nodes are running Ubuntu 18.04 with Kubernetes v1.10.2 (v1.10.3 on node1) and Docker 17.12.

The Kubernetes cluster was created like this:

kubeadm init --pod-network-cidr=10.244.0.0/16

The Flannel network is used:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
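
To confirm the overlay is actually up on every node, one quick check (assuming the default kube-flannel DaemonSet running in kube-system, as created by the manifest above) is:

# One flannel pod should be running per node
kubectl get pods -n kube-system -o wide | grep flannel

# Each node should have been assigned a pod CIDR out of 10.244.0.0/16
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'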

Node1 and Node2 joined the cluster:

NAME          STATUS    ROLES     AGE    VERSION
master-node   Ready     master    3h     v1.10.2
node1         Ready     <none>    2h     v1.10.3
node2         Ready     <none>    2h     v1.10.2
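
(For completeness: both workers were presumably joined with the standard command printed by kubeadm init; the token and hash below are placeholders, not real values.)

kubeadm join MASTER_NODE_PUBLIC_IP:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>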

An Nginx deployment + service (type=NodePort) was created and scheduled on Node1 (the one with a public IP):

https://pastebin.com/6CrugunB
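
Roughly, that manifest is a standard nginx Deployment pinned to node1 via a nodeSelector plus a NodePort Service; the sketch below uses assumed labels and names, so it is not necessarily the exact pastebin contents:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: node1   # pin the pod to the node with a public IP
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80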

kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3h
my-nginx     NodePort    10.110.202.32   <none>        80:31742/TCP   16m

This deployment is accessible through http://MASTER_NODE_PUBLIC_IP:31742 and http://NODE1_PUBLIC_IP:31742 as expected.

Another Nginx deployment + service (type=NodePort) was created and scheduled on Node2 (the one without a public IP):

https://pastebin.com/AFK42UNW
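
That second manifest presumably mirrors the first one, with only the names and the nodeSelector changed (again an assumption about the pastebin contents):

# Deployment pod template: schedule onto the NAT'd node instead of node1
      nodeSelector:
        kubernetes.io/hostname: node2
# Service: same NodePort pattern, different name
metadata:
  name: nginx-behind-nat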

kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP        3h
my-nginx           NodePort    10.110.202.32    <none>        80:31742/TCP   22m
nginx-behind-nat   NodePort    10.105.242.178   <none>        80:32350/TCP   22m

However, this service is not accessible through http://MASTER_NODE_PUBLIC_IP:32350 or http://NODE1_PUBLIC_IP:32350.

It is only accessible through http://MY_VM_IP:32350 from my laptop.

Moreover, I cannot get inside the nginx-behind-nat pods via kubectl exec either.
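
To illustrate the symptoms (the pod name is a placeholder, and the exact failure mode may be a timeout or a refused connection):

# Fails: the endpoint pod sits on the NAT'd node
curl -m 5 http://MASTER_NODE_PUBLIC_IP:32350
curl -m 5 http://NODE1_PUBLIC_IP:32350

# Works, but only from the laptop hosting the VM
curl http://MY_VM_IP:32350

# Hangs/fails too: exec and logs require the apiserver to reach the kubelet on node2
kubectl exec -it <nginx-behind-nat-pod> -- sh
kubectl logs <nginx-behind-nat-pod>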

Is there any way to achieve this?



1 Answer

As mentioned in the Kubernetes documentation:

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

  • all containers can communicate with all other containers without NAT
  • all nodes can communicate with all containers (and vice-versa) without NAT
  • the IP that a container sees itself as is the same IP that others see it as

What this means in practice is that you cannot just take two computers running Docker and expect Kubernetes to work. You must ensure that these fundamental requirements are met.
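
One way to check whether those requirements actually hold in this cluster is to test pod-to-pod connectivity across the node boundary; a rough sketch (the pod name nettest and the target pod IP are placeholders):

# One-off busybox pod; check which node it landed on with -o wide
kubectl run nettest --image=busybox --restart=Never -- sleep 3600
kubectl get pods -o wide

# From that pod, try to reach the nginx pod running on node2 over the overlay network
kubectl exec nettest -- wget -T 5 -q -O - http://<pod-ip-of-nginx-behind-nat>

# Note: if nettest itself lands on node2, the exec above already fails for the same reason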

By default, the connections from the apiserver to a node, pod, or service are plain HTTP, with neither authentication nor encryption.
They can work over HTTPS, but by default the apiserver will not validate the HTTPS endpoint's certificate, so the connection provides no guarantee of integrity and is subject to man-in-the-middle attacks.
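
In this setup that apiserver-to-kubelet path is exactly what breaks for node2: kubectl exec and kubectl logs need the apiserver to open a connection to the kubelet (port 10250 by default) at the node's reported address, which is not routable from the master while the node sits behind NAT. A quick reachability check from the master (the address value is whatever Kubernetes reports for node2):

# Which address does the cluster advertise for node2?
kubectl get node node2 -o jsonpath='{.status.addresses}'

# From the master: can the kubelet on that address be reached at all?
nc -zv -w 5 <node2-internal-ip> 10250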

For details about securing these connections inside the cluster, please check the Kubernetes documentation on master-node communication.
