I am new to Kubernetes networking.
We have separated a Kubernetes cluster into a set of namespaces (e.g. namespace-a, namespace-b). Every namespace has a set of Kubernetes pods. Every pod has a service that is available at my-svc.namespace-x.svc.cluster.local.
Now, we want to prevent pods of namespace-a from talking with services or pods that are part of namespace-b, and vice versa. Communication within a namespace should be unrestricted.
This is what I found as an example in the network policies documentation: https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
As far as I understand, this denies all network communication for every pod in the namespace where the policy is created.
You can limit communication to Pods using the Network Policy API of Kubernetes. The Kubernetes Network Policy functionality is implemented by different network providers, like Calico, Cilium, Kube-router, etc. Most of these providers have some added functionality that extends the main Kubernetes Network Policy API.
Namespaces are used to isolate resources within the control plane. For example, if we were to deploy a pod in each of two different namespaces, an administrator running a "get pods" command would only see the pods in the current namespace. By default, however, the pods can still communicate with each other across namespaces.
When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector. Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
Ingress and egress
From the point of view of a Kubernetes pod, ingress is incoming traffic to the pod, and egress is outgoing traffic from the pod. In Kubernetes network policy, you create ingress and egress "allow" rules independently (egress, ingress, or both).
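As an illustration, a single NetworkPolicy can carry independent ingress and egress allow rules. This is only a sketch; the policy name and the app, role labels below are hypothetical, not from your setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-traffic   # hypothetical name
  namespace: namespace-a
spec:
  podSelector:
    matchLabels:
      app: web              # hypothetical pod label
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Incoming traffic: only from pods labeled role=frontend
    - from:
        - podSelector:
            matchLabels:
              role: frontend
  egress:
    # Outgoing traffic: only to pods labeled role=database, on TCP 5432
    - to:
        - podSelector:
            matchLabels:
              role: database
      ports:
        - protocol: TCP
          port: 5432
```

Because the policy lists both Ingress and Egress in policyTypes, any traffic not matched by one of these allow rules is denied for the selected pods.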
Note that Kubernetes namespaces are a different concept from Linux kernel namespaces. Regardless of the Kubernetes namespace, the container runtime creates a new set of kernel namespaces for each new container; some resources may be shared among those kernel namespaces, such as process IDs, hostnames, and file names, while others may be hidden and separated, like privileges and user identifications.
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
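To sketch what layer 3/4 control looks like in practice, a NetworkPolicy can also allow traffic from an IP range while excluding a sub-range, restricted to a single port. The CIDR values and policy name here are hypothetical examples, not values from your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-internal-cidr  # hypothetical name
  namespace: namespace-a
spec:
  podSelector: {}                 # applies to all pods in namespace-a
  policyTypes:
    - Ingress
  ingress:
    # Allow traffic from 10.0.0.0/16, except the 10.0.5.0/24 sub-range,
    # and only on TCP port 80 (hypothetical example values)
    - from:
        - ipBlock:
            cidr: 10.0.0.0/16
            except:
              - 10.0.5.0/24
      ports:
        - protocol: TCP
          port: 80
```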
This is one of the simplest ways of addressing services, but it requires cluster DNS to be set up and working properly. Most Kubernetes deployment tools, like kubeadm or minikube, come with CoreDNS installed. Also, for CoreDNS to function correctly, you need a CNI plugin like Flannel, Cilium, Weave Net, etc.
While Kubernetes was originally designed without security tenancy and segmentation in mind, it does have two important security mechanisms. The first one is RBAC (Role-Based Access Control). This mechanism matches users and service accounts to their allowed (or forbidden) actions on components, and essentially manages the permissions on the cluster.
Do I need a networking plugin, such as Calico, Flannel or Weave?
Yes, you need a networking plugin no matter what, but not all plugins support the NetworkPolicy API object. The Declare Network Policy walkthrough includes a (probably non-exhaustive) list of plugins that do support NetworkPolicy.
Without a plugin that supports NetworkPolicy, creating the resource would have no effect.
Which one should I choose?
As for which one you should choose, Stack Overflow is not the place for soliciting that kind of advice. What I can recommend is reading the overview/features documentation for the various options available. Maybe try one or two different plugins in a local development cluster to get a feel for how difficult or easy they are to install, maintain, and update.
How can I allow all network traffic, but only within a particular namespace?
Given your example setup, I think the following NetworkPolicy resources would address your need:
For pods in namespace-a, only allow ingress from namespace-a pods, denying ingress from any other source. Egress is unrestricted:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: namespace-a
spec:
  policyTypes:
    - Ingress
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: namespace-a
For pods in namespace-b, only allow ingress from namespace-b pods, denying ingress from any other source. Egress is unrestricted:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: namespace-b
spec:
  policyTypes:
    - Ingress
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: namespace-b
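If you also wanted to restrict egress to the same namespace, a companion policy might look like the following sketch (shown for namespace-a; the policy name is hypothetical). Be aware that locking down egress this way would also block DNS lookups to the cluster DNS service in kube-system unless you add a further egress rule permitting them:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress   # hypothetical name
  namespace: namespace-a
spec:
  policyTypes:
    - Egress
  podSelector: {}         # applies to all pods in namespace-a
  egress:
    # Only allow outgoing traffic to pods in namespaces labeled
    # name=namespace-a (i.e., this namespace)
    - to:
        - namespaceSelector:
            matchLabels:
              name: namespace-a
```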
Note that this assumes you have set the name: namespace-a and name: namespace-b labels on your namespaces, similar to this:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-a
  labels:
    name: namespace-a
    other: labelname
I only point this out to avoid confusing you with regard to the fact that the labels I showed above happen to match up with your hypothetical namespace names. The labels can be arbitrary and potentially inclusive of multiple namespaces. For example, you might have namespace-a and namespace-c both with a label called other: labelname, which would allow you to select multiple namespaces using a single namespaceSelector in your NetworkPolicy resource.
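To make that last point concrete, a policy using the shared other: labelname label might look like this sketch (the policy name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-labeled-namespaces  # hypothetical name
  namespace: namespace-a
spec:
  policyTypes:
    - Ingress
  podSelector: {}
  ingress:
    # Allows ingress from pods in ANY namespace carrying the
    # other=labelname label, e.g. both namespace-a and namespace-c
    - from:
        - namespaceSelector:
            matchLabels:
              other: labelname
```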