Docker Macvlan network inside container is not reaching to its own host

I have set up a macvlan network between two Docker hosts as follows:

Host Setup: HOST_1 ens192: 172.18.0.21

Create macvlan bridge interface

docker network create -d macvlan \
--subnet=172.18.0.0/22 \
--gateway=172.18.0.1 \
--ip-range=172.18.1.0/28 \
-o macvlan_mode=bridge \
-o parent=ens192 macvlan

Create a macvlan interface on HOST_1

ip link add ens192.br link ens192 type macvlan mode bridge
ip addr add 172.18.1.0/28 dev ens192.br
ip link set dev ens192.br up

Host Setup: HOST_2 ens192: 172.18.0.23

Create macvlan bridge interface

docker network create -d macvlan \
--subnet=172.18.0.0/22 \
--gateway=172.18.0.1 \
--ip-range=172.18.1.16/28 \
-o macvlan_mode=bridge \
-o parent=ens192 macvlan

Create a macvlan interface on HOST_2

ip link add ens192.br link ens192 type macvlan mode bridge
ip addr add 172.18.1.16/28 dev ens192.br
ip link set dev ens192.br up
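The two --ip-range values carve disjoint /28 blocks out of the shared 172.18.0.0/22 subnet, so the two hosts can never hand out the same container address. A quick sketch (a hypothetical helper, not part of the setup) to confirm where each /28 starts and ends:

```shell
#!/bin/sh
# Print the first and last address of a /28 block starting at the given
# last octet, to sanity-check the two hosts' --ip-range values.
range_bounds() {
  start=$1   # last octet of the /28 network address
  echo "172.18.1.${start} - 172.18.1.$((start + 15))"
}
range_bounds 0    # HOST_1 containers: 172.18.1.0 - 172.18.1.15
range_bounds 16   # HOST_2 containers: 172.18.1.16 - 172.18.1.31
```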

Container Setup

Create a container on each host

HOST_1# docker run --net=macvlan -it --name macvlan_1 --rm alpine /bin/sh
HOST_2# docker run --net=macvlan -it --name macvlan_1 --rm alpine /bin/sh

CONTAINER_1 in HOST_1

24: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:ac:12:01:00 brd ff:ff:ff:ff:ff:ff
    inet 172.18.1.0/22 brd 172.18.3.255 scope global eth0
       valid_lft forever preferred_lft forever

CONTAINER_2 in HOST_2

21: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:ac:12:01:10 brd ff:ff:ff:ff:ff:ff
    inet 172.18.1.16/22 brd 172.18.3.255 scope global eth0
       valid_lft forever preferred_lft forever

Route table in CONTAINER_1 and CONTAINER_2

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.18.0.1      0.0.0.0         UG    0      0        0 eth0
172.18.0.0      0.0.0.0         255.255.252.0   U     0      0        0 eth0

Scenario

HOST_1 (172.18.0.21) <-> HOST_2 (172.18.0.23) = OK (Vice-versa)

HOST_1 (172.18.0.21) -> CONTAINER_1 (172.18.1.0) and CONTAINER_2 (172.18.1.16) = OK

HOST_2 (172.18.0.23) -> CONTAINER_1 (172.18.1.0) and CONTAINER_2 (172.18.1.16) = OK

CONTAINER_1 (172.18.1.0) -> HOST_2 (172.18.0.23) = OK

CONTAINER_2 (172.18.1.16) -> HOST_1 (172.18.0.21) = OK

CONTAINER_1 (172.18.1.0) <-> CONTAINER_2 (172.18.1.16) = OK (Vice-versa)

CONTAINER_1 (172.18.1.0) -> HOST_1 (172.18.0.21) = FAIL

CONTAINER_2 (172.18.1.16) -> HOST_2 (172.18.0.23) = FAIL

Question

I am very close to the solution I wanted to achieve, except for this one problem: how can I make a container connect to its own host? If there is a solution, I would also like to know how to configure it from an ESXi virtualization perspective, and whether bare metal differs.

asked Apr 01 '18 by jlim


2 Answers

The question is a bit old, but others might find this useful. There is a workaround described in the "Host access" section of Using Docker macvlan networks by Lars Kellogg-Stedman. I can confirm that it works.

Host access With a container attached to a macvlan network, you will find that while it can contact other systems on your local network without a problem, the container will not be able to connect to your host (and your host will not be able to connect to your container). This is a limitation of macvlan interfaces: without special support from a network switch, your host is unable to send packets to its own macvlan interfaces.

Fortunately, there is a workaround for this problem: you can create another macvlan interface on your host, and use that to communicate with containers on the macvlan network.

First, I’m going to reserve an address from our network range for use by the host interface by using the --aux-address option to docker network create. That makes our final command line look like:

docker network create -d macvlan -o parent=eno1 \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  --ip-range 192.168.1.192/27 \
  --aux-address 'host=192.168.1.223' \
  mynet

This will prevent Docker from assigning that address to a container.

Next, we create a new macvlan interface on the host. You can call it whatever you want, but I’m calling this one mynet-shim:

ip link add mynet-shim link eno1 type macvlan mode bridge

Now we need to configure the interface with the address we reserved and bring it up:

ip addr add 192.168.1.223/32 dev mynet-shim
ip link set mynet-shim up

The last thing we need to do is to tell our host to use that interface when communicating with the containers. This is relatively easy because we have restricted our containers to a particular CIDR subset of the local network; we just add a route to that range like this:

ip route add 192.168.1.192/27 dev mynet-shim

With that route in place, your host will automatically use the mynet-shim interface when communicating with containers on the mynet network.

Note that the interface and routing configuration presented here is not persistent; you will lose it if you reboot your host. How to make it persistent is distribution-dependent.
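As one illustration of making it persistent: on a distribution managed by systemd-networkd, the shim interface and route could be expressed as unit files along these lines. The file names are illustrative, and the parent interface's own .network file must additionally declare `MACVLAN=mynet-shim` to attach the shim to eno1:

```
# /etc/systemd/network/20-mynet-shim.netdev
[NetDev]
Name=mynet-shim
Kind=macvlan

[MACVLAN]
Mode=bridge

# /etc/systemd/network/20-mynet-shim.network
[Match]
Name=mynet-shim

[Network]
Address=192.168.1.223/32

[Route]
Destination=192.168.1.192/27
```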

answered Sep 30 '22 by ple91


This is documented behavior for macvlan and is by design. See the Docker macvlan documentation:

  • When using macvlan, you cannot ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host’s eth0, it will not work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.

  • A macvlan subinterface can be added to the Docker host, to allow traffic between the Docker host and containers. The IP address needs to be set on this subinterface and removed from the parent address.
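Applied to the setup in the question, that sub-interface approach on HOST_1 looks roughly like the sketch below. The shim name ens192.shim and the reserved address 172.18.1.14 are illustrative choices from within HOST_1's /28; HOST_2 would do the same with an address reserved from its own 172.18.1.16/28 range:

```shell
# Recreate the network, reserving one address of HOST_1's /28 for the host:
docker network create -d macvlan -o parent=ens192 \
  --subnet=172.18.0.0/22 \
  --gateway=172.18.0.1 \
  --ip-range=172.18.1.0/28 \
  --aux-address 'host=172.18.1.14' \
  macvlan

# Give the host a macvlan sibling interface holding that address, and route
# the container range through it instead of through the parent ens192:
ip link add ens192.shim link ens192 type macvlan mode bridge
ip addr add 172.18.1.14/32 dev ens192.shim
ip link set ens192.shim up
ip route add 172.18.1.0/28 dev ens192.shim
```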

answered Sep 30 '22 by ad22