If I'm running processes in two pods that communicate with each other over TCP (addressing each other through Kubernetes Services), and the pods are scheduled to the same node, will the communication take place over the network or will Kubernetes know to use the loopback device?
Kubernetes defines a network model, but the actual implementation is delegated to network plugins that conform to the Container Network Interface (CNI) specification. The network plugin is responsible for allocating IP addresses to pods and enabling pods to communicate with each other within the Kubernetes cluster.
Containers in a Pod share the same IPC namespace, which means they can communicate with each other using standard inter-process communication mechanisms such as SystemV semaphores or POSIX shared memory. They also share the same network namespace, so containers within a pod can reach each other over localhost.
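As a minimal sketch of that intra-pod path: assume one container in the pod runs the server function below and a sibling container in the same pod runs the client; port 9000 is an arbitrary choice for illustration.

```python
import socket

def serve() -> None:
    # Bind to localhost only; because all containers in the pod share the
    # same network namespace, a sibling container can still reach this socket.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 9000))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"hello from the other container\n")

def client() -> None:
    # Run in a different container of the same pod; localhost works because
    # the network namespace (and therefore the loopback device) is shared.
    with socket.create_connection(("127.0.0.1", 9000)) as c:
        print(c.recv(1024).decode())
```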
There are two primary communication paths from the control plane (the API server) to the nodes. The first is from the API server to the kubelet process which runs on each node in the cluster. The second is from the API server to any node, pod, or service through the API server's proxy functionality.
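As a hedged sketch of the second path (Python standard library only), the snippet below asks the API server to proxy a request to a Service. It assumes it runs inside a pod whose service account is allowed to use the proxy subresource; the namespace "namespace1", Service "svc1", and port 80 are hypothetical placeholders.

```python
import ssl
import urllib.request

API = "https://kubernetes.default.svc"
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

with open(TOKEN_PATH) as f:
    token = f.read().strip()

# The API server forwards this request to port 80 of Service "svc1" in
# namespace "namespace1" -- the proxy functionality described above.
url = f"{API}/api/v1/namespaces/namespace1/services/svc1:80/proxy/"
req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
ctx = ssl.create_default_context(cafile=CA_PATH)

with urllib.request.urlopen(req, context=ctx) as resp:
    print(resp.status)
```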
- Every pod should have a unique IP address.
- Every pod should be able to communicate with every other pod on the same node.
- Every pod should be able to communicate with every other pod on other nodes without NAT (Network Address Translation).
In a Kubernetes cluster, a pod can be scheduled on any node. Another pod that wants to reach it ideally should not need to know which node that pod is running on or what its pod IP address is. Kubernetes provides a basic service discovery mechanism by assigning DNS names to Kubernetes Services (which are associated with pods). When a pod wants to talk to another pod, it should use the Service's DNS name (e.g. svc1.namespace1.svc.cluster.local).
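As a rough illustration, assuming a Service named svc1 in namespace namespace1 that exposes TCP port 8080 (both hypothetical), a client pod might connect like this:

```python
import socket

service_host = "svc1.namespace1.svc.cluster.local"
service_port = 8080  # hypothetical port exposed by the Service

# The cluster DNS resolves the Service name to its ClusterIP; kube-proxy then
# forwards the connection to one of the backing pods, wherever it is scheduled.
ip = socket.gethostbyname(service_host)
print(f"{service_host} resolves to {ip}")

with socket.create_connection((service_host, service_port), timeout=5) as conn:
    conn.sendall(b"ping\n")
    print(conn.recv(1024))
```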
The loopback device is not mentioned in the design proposal "community/contributors/design-proposals/network/networking".
Because every pod gets a "real" (not machine-private) IP address, pods can communicate without proxies or translations. The pod can use well-known port numbers and can avoid the use of higher-level service discovery systems like DNS-SD, Consul, or Etcd.
When any container calls ioctl(SIOCGIFADDR) (get the address of an interface), it sees the same IP that any peer container would see its traffic coming from: each pod has its own IP address that other pods can know.
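To make that concrete, here is a small Linux-only Python sketch of the same ioctl; the interface name "eth0" is an assumption (the pod's primary interface is commonly, but not necessarily, named eth0).

```python
import fcntl
import socket
import struct

SIOCGIFADDR = 0x8915  # ioctl number from <linux/sockios.h>

def interface_ip(ifname: str) -> str:
    """Return the IPv4 address bound to a network interface (Linux only)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        packed = struct.pack("256s", ifname.encode()[:15])
        # The ioctl fills in a struct ifreq; bytes 20..24 hold sin_addr.
        addr = fcntl.ioctl(s.fileno(), SIOCGIFADDR, packed)[20:24]
    return socket.inet_ntoa(addr)

if __name__ == "__main__":
    # Inside a pod this prints the pod IP, the same address peer pods see.
    print(interface_ip("eth0"))
```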
By making IP addresses and ports the same both inside and outside the pods, we create a NAT-less, flat address space. Running "ip addr show" should work as expected. This would enable all existing naming/discovery mechanisms to work out of the box, including self-registration mechanisms and applications that distribute IP addresses.
We should be optimizing for inter-pod network communication.
Using IP was already mentioned last year in "Kubernetes - container communication within a pod using names instead of 'localhost'?"