I know that, by default, Docker creates a virtual bridge, docker0, and that all container networks are linked to docker0.
As illustrated above:

- eth0 (inside the container) is paired with vethXXX
- vethXXX is linked to docker0, the same as a machine linked to a switch

But what is the relation between docker0 and the host's eth0? More specifically:

1. When a packet flows from the container to docker0, how does it know it will be forwarded to eth0, and then to the outside world?
2. When an external packet arrives at eth0, why is it forwarded to docker0 and then to the container, instead of being processed or dropped?
Question 2 can be a little confusing; I will keep it there and explain a little more: it concerns the reply to a packet that the container itself initiated (as in question 1), which arrives back at the host's eth0. How is it forwarded to the container? I mean, there must be some place where that information is stored; how can I check it?

Thanks in advance!
After reading the answer and the official network articles, I find the following diagram more accurate: docker0 and eth0 have no direct link; instead, they can forward packets to each other:
http://dockerone.com/uploads/article/20150527/e84946a8e9df0ac6d109c35786ac4833.png
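The forwarding between them is ordinary IP routing in the host kernel; it only works because IP forwarding is enabled on the host (Docker enables it by default). A quick check:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1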
There is no direct link between the default docker0 bridge and the host's ethernet devices. If you use the --net=host option for a container, then the host's network stack will be available in the container.
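You can see this for yourself; a quick check (using the busybox image purely for illustration):

$ docker run --rm --net=host busybox ip address show    # lists the host's own interfaces, including eth0 and docker0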
When a packet flows from the container to docker0, how does it know it will be forwarded to eth0, and then to the outside world?
The docker0 bridge has the .1 address of the Docker network assigned to it; this is usually something in the 172.17.0.0/16 or 172.18.0.0/16 range.
$ ip address show dev docker0
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:03:47:33:c1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
Containers are assigned a veth interface, which is attached to the docker0 bridge.
$ bridge link
10: vethcece7e5 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
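If you want to find which host-side veth belongs to a given container, one common trick (the container name below is a placeholder) is to compare interface indexes, since the container's eth0 reports the index of its host-side peer:

$ docker exec <container> cat /sys/class/net/eth0/iflink    # prints the peer's interface index, e.g. 10
$ ip -o link | grep '^10:'                                  # the matching vethXXX entry on the host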
Containers created on the default Docker network receive the .1 address as their default route.
$ docker run busybox ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0  src 172.17.0.3
Docker uses NAT MASQUERADE for outbound traffic from there, and it will follow the standard outbound routing on the host, which may or may not involve eth0.
$ iptables -t nat -vnL POSTROUTING
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target      prot opt in    out       source           destination
    0     0 MASQUERADE  all  --  *     !docker0  172.17.0.0/16    0.0.0.0/0
iptables handles the connection tracking and return traffic.
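This conntrack state is also the "place where the information is stored" that the question asks about, and you can inspect it directly. A sketch (conntrack requires the conntrack-tools package; the container address is illustrative):

$ conntrack -L --src 172.17.0.3       # live NAT/connection entries for one container
$ cat /proc/net/nf_conntrack          # raw view of the same kernel table, where the kernel exposes it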
When an external packet arrives at eth0, why is it forwarded to docker0 and then to the container, instead of being processed or dropped?
If you are asking about the return path for outbound traffic from the container, see iptables above: the MASQUERADE rule will map the connection back through.
If you mean new inbound traffic: packets are not forwarded into a container by default. The standard way to achieve this is to set up a port mapping. Docker launches a daemon that listens on the host on port X and forwards to the container on port Y.
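For example, a typical port mapping and two ways to inspect it (the image, name, and ports are chosen purely for illustration):

$ docker run -d --name web -p 8080:80 nginx    # host port 8080 -> container port 80
$ docker port web                              # shows 80/tcp -> 0.0.0.0:8080
$ iptables -t nat -vnL DOCKER                  # the DNAT rule Docker created for the mapping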
I'm not sure why NAT wasn't used for inbound traffic as well. I've run into some issues trying to map large numbers of ports into containers, which led to mapping real-world interfaces completely into containers.