I have two independent Kubernetes clusters, one set up as a GKE/GCE cluster and the other set up in an AWS environment created using the kube-up.sh script. Both clusters are working properly: I can start and stop pods, services, and everything in between.
I want pods located in these clusters to communicate with each other, but without exposing them as services. To achieve this, I have set up a VPN connection between the two clusters, along with a couple of routing/firewall rules to make sure the VMs/pods can see each other.
I can confirm that the following scenarios are working properly:
VM in GCE -> VM in AWS (OK)
Pod in GCE -> VM in AWS (OK)
VM in AWS -> VM in GCE (OK)
VM in AWS -> Pod in GCE (OK)
Pod in AWS -> Pod in GCE (OK)
However, I can't make a VM or Pod in GCE communicate with a Pod in AWS.
I was wondering if there is any way of making this work with current AWS VPC capabilities. It seems that when the AWS end of the VPN tunnel receives packets addressed to a pod, it doesn't know what to do with them. On the other hand, GCE networking is automatically configured with routes that associate pod IP ranges with the nodes of a GKE cluster, so when a packet addressed to a pod reaches the GCE end of the VPN tunnel, it is correctly forwarded to its destination.
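For example, the per-node pod routes that GKE creates automatically can be inspected on the GCE side with something like this (the network name and route-name pattern are assumptions):

```shell
# List the routes GKE added automatically: each node gets a route mapping
# its pod CIDR (e.g. 10.52.4.0/24) to that node's instance as next hop.
gcloud compute routes list \
  --filter="network=default AND name~'gke-'" \
  --format="table(name, destRange, nextHopInstance)"
```

It is exactly this kind of per-node route that has no automatic equivalent on the AWS side.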
This is my configuration:
GKE/GCE in us-east1
Network: 10.142.0.0/20
VM1 IP: 10.142.0.2
Pod range (for VM1): 10.52.4.0/24
Pod1 IP: 10.52.4.4 (running busybox)
Firewall rule: Allows any traffic from 172.16.0.0/12
Route: Sends everything with destination 172.16.0.0/12 to the VPN tunnel (automatically added when the VPN is created)
AWS in ap-northeast-1
VPC: 172.24.0.0/16
Subnet1: 172.24.1.0/24
VM3 IP (in Subnet1): 172.24.1.5
Kubernetes cluster network (NON_MASQUERADE_CIDR): 172.16.0.0/16
Pod range (CLUSTER_IP_RANGE): 172.16.128.0/17
Pod range (for VM3): 172.16.129.0/24
Pod3 IP: 172.16.129.5
Security Group: Allows any traffic from 10.0.0.0/8
Routes:
10.0.0.0/8 to the VPN tunnel
172.16.129.0/24 to VM3

Has anyone tried to do something similar? Is there any way to configure the AWS VPC VPN Gateway so that packets destined for Pods are correctly sent to the VMs that host them? Any suggestions?
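For reference, the static route above that points the pod range at VM3 could be created along these lines; note also that EC2 normally drops forwarded traffic unless the instance's source/destination check is disabled, which is easy to overlook in a setup like this (the route-table and instance IDs below are placeholders):

```shell
# Route the pod CIDR hosted on VM3 to that instance
# (rtb-xxxxxxxx and i-xxxxxxxx are hypothetical IDs).
aws ec2 create-route \
  --route-table-id rtb-xxxxxxxx \
  --destination-cidr-block 172.16.129.0/24 \
  --instance-id i-xxxxxxxx

# EC2 discards packets whose destination is not the instance's own IP
# unless the source/destination check is turned off on the node.
aws ec2 modify-instance-attribute \
  --instance-id i-xxxxxxxx \
  --no-source-dest-check
```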
A Pod can communicate with another Pod by directly addressing its IP address, but the recommended way is to use Services. A Service is a set of Pods, which can be reached by a single, fixed DNS name or IP address. In reality, most applications on Kubernetes use Services as a way to communicate with each other.
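As a sketch, a Deployment can be put behind a single stable Service name like this (the deployment name `my-app` and the ports are assumptions):

```shell
# Expose a deployment behind a cluster-internal Service with a stable
# DNS name (my-app.default.svc.cluster.local) and a virtual IP.
kubectl expose deployment my-app --port=80 --target-port=8080

# Other pods can now reach it by name instead of by pod IP:
kubectl run curl-test --rm -it --image=busybox --restart=Never \
  -- wget -qO- http://my-app
```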
Kubernetes specifies a network model, but the actual implementation relies on network plugins that conform to the Container Network Interface (CNI). The network plugin is responsible for allocating IP addresses to pods and enabling pods to communicate with each other within the Kubernetes cluster.
Containers in a Pod share the same IPC namespace, which means they can also communicate with each other using standard inter-process communications such as SystemV semaphores or POSIX shared memory.
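To illustrate the shared-memory case, here is a minimal Python sketch of two handles on one named POSIX shared-memory segment, analogous to what two containers sharing a Pod's namespaces can do (the segment name `pod_demo` is arbitrary):

```python
from multiprocessing import shared_memory

# Writer side: create a named POSIX shared-memory segment
# (backed by /dev/shm/pod_demo on Linux).
writer = shared_memory.SharedMemory(create=True, size=16, name="pod_demo")
writer.buf[:5] = b"hello"

# Reader side: a separate process (e.g. a sidecar container in the same
# Pod) attaches to the same segment purely by name.
reader = shared_memory.SharedMemory(name="pod_demo")
print(bytes(reader.buf[:5]).decode())  # hello

reader.close()
writer.close()
writer.unlink()  # remove the segment once both sides are done
```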
What you are asking about is Kubernetes Federation.
Federation makes it easy to manage multiple clusters. It does so by providing two major building blocks:
Sync resources across clusters: Federation provides the ability to keep resources in multiple clusters in sync. This can be used, for example, to ensure that the same deployment exists in multiple clusters.
Cross cluster discovery: It provides the ability to auto-configure DNS servers and load balancers with backends from all clusters. This can be used, for example, to ensure that a global VIP or DNS record can be used to access backends from multiple clusters.
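With the Federation v1 tooling, wiring two clusters together looked roughly like this (the federation name, context names, and DNS zone are placeholders):

```shell
# Initialize the federation control plane in a host cluster.
kubefed init myfed --host-cluster-context=gke-cluster \
  --dns-zone-name="example.com."

# Join both clusters to the federation.
kubefed --context=myfed join gke-cluster --host-cluster-context=gke-cluster
kubefed --context=myfed join aws-cluster --host-cluster-context=gke-cluster

# Resources created against the federation context are kept in sync
# across all member clusters.
kubectl --context=myfed create -f deployment.yaml
```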
Also, this one might help you: https://kubernetes.io/docs/admin/multiple-zones/