I have created a small project to test Docker clustering. Basically, the cluster.sh script launches three identical containers and uses pipework to configure a bridge (bridge1) on the host and add a NIC (eth1) to each container.
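The script boils down to something like the following sketch (hedged: the image name "mycluster" is a placeholder, and pipework's usual `pipework <bridge> <container> <ip>/<prefix>` invocation is assumed, which adds eth1 by default):

```shell
# Sketch of what cluster.sh does. Assumes docker and jpetazzo/pipework
# are installed; "mycluster" is a placeholder image name.
for i in 1 2 3; do
    cid=$(docker run -d --name "node$i" mycluster)
    # pipework creates bridge1 on the host if it doesn't exist yet,
    # then adds eth1 with the given address inside the container.
    sudo pipework bridge1 "$cid" 172.17.99.$i/24
done
```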
If I log into one of the containers, I can arping other containers:
# 172.17.99.1
root@d01eb56fce52:/# arping 172.17.99.2
ARPING 172.17.99.2
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=0 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=1 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=2 time=1.001 sec
42 bytes from aa:b3:98:92:0b:08 (172.17.99.2): index=3 time=1.001 sec
^C
--- 172.17.99.2 statistics ---
5 packets transmitted, 4 packets received, 20% unanswered (0 extra)
So it seems packets can go through bridge1.
But the problem is that I can't ping other containers, nor can I send any IP packets through with tools like telnet or netcat.
In contrast, the bridge docker0 and NIC eth0 work correctly in all containers.
Here's my route table:
# 172.17.99.1
root@d01eb56fce52:/# ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.17
172.17.99.0/24 dev eth1 proto kernel scope link src 172.17.99.1
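You can ask the kernel directly which interface and source address it would pick for a peer (a diagnostic sketch, run inside a container; the expected output is what this routing table should produce):

```shell
# Inside the container: confirm traffic to a peer is sent via eth1, not eth0.
ip route get 172.17.99.2
# expected something like: 172.17.99.2 dev eth1 src 172.17.99.1
```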
and the bridge config:
# host
$ brctl show
bridge name bridge id STP enabled interfaces
bridge1 8000.8a6b21e27ae6 no veth1pl25432
veth1pl25587
veth1pl25753
docker0 8000.56847afe9799 no veth7c87801
veth953a086
vethe575fe2
# host
$ brctl showmacs bridge1
port no mac addr is local? ageing timer
1 8a:6b:21:e2:7a:e6 yes 0.00
2 8a:a3:b8:90:f3:52 yes 0.00
3 f6:0c:c4:3d:f5:b2 yes 0.00
# host
$ ifconfig
bridge1 Link encap:Ethernet HWaddr 8a:6b:21:e2:7a:e6
inet6 addr: fe80::48e9:e3ff:fedb:a1b6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:163 errors:0 dropped:0 overruns:0 frame:0
TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:8844 (8.8 KB) TX bytes:12833 (12.8 KB)
# I'm showing only one veth here for simplicity
veth1pl25432 Link encap:Ethernet HWaddr 8a:6b:21:e2:7a:e6
inet6 addr: fe80::886b:21ff:fee2:7ae6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:155 errors:0 dropped:0 overruns:0 frame:0
TX packets:162 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12366 (12.3 KB) TX bytes:23180 (23.1 KB)
...
and the IP FORWARD chain:
# host
$ sudo iptables -x -v --line-numbers -L FORWARD
Chain FORWARD (policy ACCEPT 10675 packets, 640500 bytes)
num pkts bytes target prot opt in out source destination
1 15018 22400195 DOCKER all -- any docker0 anywhere anywhere
2 15007 22399271 ACCEPT all -- any docker0 anywhere anywhere ctstate RELATED,ESTABLISHED
3 8160 445331 ACCEPT all -- docker0 !docker0 anywhere anywhere
4 11 924 ACCEPT all -- docker0 docker0 anywhere anywhere
5 56 4704 ACCEPT all -- bridge1 bridge1 anywhere anywhere
Note the pkts count for rule 5 isn't 0, which means the ping packets have been routed correctly (the FORWARD chain is traversed after routing, right?), but somehow they never reached the destination.
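One way to confirm exactly which rules a ping hits is to zero the counters, generate a few probes, and re-read them (diagnostic sketch, run on the host as root; the container ID is the one from this setup):

```shell
# On the host: reset FORWARD counters, generate traffic, then inspect
# which rules incremented.
sudo iptables -Z FORWARD
docker exec d01eb56fce52 ping -c 3 172.17.99.2
sudo iptables -x -v --line-numbers -L FORWARD
```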
I'm out of ideas as to why docker0 and bridge1 behave differently. Any suggestions?
Update 1
Here's the tcpdump output on the target container when pinged from another container.
$ tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
22:11:17.754261 IP 192.168.1.65 > 172.17.99.1: ICMP echo request, id 26443, seq 1, length 6
Note that the source IP is 192.168.1.65, which is the host's eth0 address, so there seems to be some SNAT going on on the bridge.
Finally, printing out the nat IP table revealed the cause of the problem:
$ sudo iptables -L -t nat
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
...
Because my container's eth1 IP (172.17.99.1) lies inside 172.17.0.0/16, packets sent over bridge1 have their source IP rewritten by this MASQUERADE rule (bridged frames traverse iptables here because the kernel's bridge-nf-call-iptables setting is enabled on this host). This is why the responses from ping can't go back to the source.
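The overlap is easy to verify numerically: 172.17.99.1 masked with /16 equals 172.17.0.0, so the MASQUERADE rule's source match fires. A small self-contained check in plain shell arithmetic (no external tools):

```shell
# Show that eth1's address 172.17.99.1 falls inside 172.17.0.0/16,
# the source range matched by Docker's MASQUERADE rule.
ip_to_int() {
    old_ifs=$IFS; IFS=.
    set -- $1            # split a.b.c.d into $1 $2 $3 $4
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
mask=$(( 0xffffffff << (32 - 16) & 0xffffffff ))   # /16 netmask
net=$(ip_to_int 172.17.0.0)
addr=$(ip_to_int 172.17.99.1)
if [ $(( addr & mask )) -eq $(( net & mask )) ]; then
    result=inside
else
    result=outside
fi
echo "172.17.99.1 is $result 172.17.0.0/16"   # prints "inside"
```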
Conclusion
The solution is to put the container's eth1 on a network different from that of the default docker0, so that traffic over bridge1 no longer matches the 172.17.0.0/16 MASQUERADE rule.
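Concretely, relaunching with eth1 addresses outside 172.17.0.0/16 avoids the MASQUERADE match; 192.168.99.0/24 below is an arbitrary choice, and `$cid` stands for a container ID:

```shell
# Re-attach eth1 on a subnet that does not overlap docker0's 172.17.0.0/16.
sudo pipework bridge1 "$cid" 192.168.99.1/24

# Alternatively, keep the 172.17.99.0/24 addresses and exempt
# container-to-container traffic from NAT instead (ACCEPT in the nat
# POSTROUTING chain stops further rules from matching):
sudo iptables -t nat -I POSTROUTING -s 172.17.99.0/24 -d 172.17.99.0/24 -j ACCEPT
```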