Can someone please let me know why the Kubernetes pod uses the none network instead of the bridge network on the worker node?
I set up a Kubernetes cluster using Kubo.
By default, the worker node has three Docker networks:
NETWORK ID NAME DRIVER
30bbbc954768 bridge bridge
c8cb510d1646 host host
5e6b770e7aa6 none null
The default Docker network is bridge: $> docker network inspect bridge
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
But if I use the kubectl run command to start a pod
kubectl run -it --image nginx bash
two containers start on the worker node:
7cf6061fe0b8 40960efd7b8f "nginx -g 'daemon off" 33 minutes ago
Up 33 minutes k8s_bash_bash-325157647-ns4xj_default_9d5ea60e-cf74-11e7-9ae8-00505686d000_2
37c51d605b16 gcr.io/google_containers/pause-amd64:3.0 "/pause"
35 minutes ago Up 35 minutes k8s_POD_bash-325157647-ns4xj_default_9d5ea60e-cf74-11e7-9ae8-00505686d000_0
If we run docker inspect 37c51d605b16, we can see it uses "none":
"Networks": {
"none": {
"IPAMConfig": null,
"Links": null,
So why does Kubernetes use the none network for communication?
Kubernetes uses an overlay network to manage pod-to-pod communication on the same or different hosts. Each pod gets a single IP address for all containers in that pod. A pause
container is created to hold the network namespace and thus reserve the IP address, which is useful when containers restart, as they get the same IP.
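A quick way to see this on the worker node (a hedged check, reusing the container IDs from the docker ps output above; your IDs will differ) is to inspect the nginx container's NetworkMode, which points at the pause container rather than at a Docker network:
$> docker inspect -f '{{ .HostConfig.NetworkMode }}' 7cf6061fe0b8
# expected output: container:<full ID of the pause container 37c51d605b16>,
# i.e. the app container joins the pause container's network namespace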
The pod has its own ethernet adapter, say eth0, which is mapped to a virtual ethernet adapter on the host, say veth0xx, in the root network namespace, which in turn is connected to a network bridge, docker0 or cbr0.
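On a plain Docker-bridge setup you can see this pairing with standard iproute2 commands (a sketch; interface and bridge names vary by environment):
$> ip link show type veth    # host-side veth endpoints, one per container/pod
$> ip addr show docker0      # the Linux bridge the veth endpoints attach to (cbr0 on some setups)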
In my Kubernetes setup, with Project Calico as the overlay network CNI plugin, Calico creates an ethernet adapter in each pod and maps it to a virtual adapter on the host (name format calic[0-9a-z]). This virtual adapter is connected to a Linux ethernet bridge. iptables rules filter packets to this bridge and then on to the CNI plugin provider, in my case Calico, which is able to redirect the packet to the correct pod.
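On a Calico-based node you can check this wiring with something like the following (hedged; the exact interface and chain names depend on your Calico version and configuration):
$> ip link show | grep cali     # host-side adapters Calico created for pods
$> iptables -L -n | grep cali   # rules/chains Calico programs for those adapters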
So your containers are in the none Docker network because Docker networking is disabled in your Kubernetes setup; it uses the overlay network via a CNI plugin instead. Kubernetes doesn't handle networking itself but delegates it to the underlying CNI plugin.
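For reference, on kubelets of that era the delegation is typically configured with the CNI flags below (a sketch with the common default paths; your Kubo deployment may set these differently):
$> kubelet --network-plugin=cni \
           --cni-conf-dir=/etc/cni/net.d \
           --cni-bin-dir=/opt/cni/bin ...
# with these flags the pod sandbox (pause) container is created with Docker's "none" network,
# and the CNI plugin (here Calico) wires up the pod's eth0 instead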