Server Specs:
OS: Ubuntu 14.04
Docker: 1.10.2
docker-compose: 1.6.0
I just recently upgraded from Docker 1.9 to 1.10 and added docker-compose (though I'm not using Compose yet). The slowness issue didn't occur prior to the upgrade.
Docker is also configured with my DNS IPs and a proxy in '/etc/default/docker':
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --dns 138.XX.XX.X"
export http_proxy="http://proxy.myproxy.com:8888/"
(my IP is fully spelled out there; I'm just using X's for this question)
I have two containers (container_a, container_b), both running Node.js HTTP servers. Both containers run on a bridge network (--net=mynetwork) that I created via:
docker network create mynetwork
The two containers make HTTP calls to one another, using the container name as the "host" portion of the URL, like so:
container_b:3000/someurl
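For reference, a minimal sketch of how the containers are started and how the call is made (the image names here are assumptions, not my exact setup):

# Start both containers on the shared user-defined network (image names assumed)
docker run -d --name container_a --net=mynetwork my-node-image-a
docker run -d --name container_b --net=mynetwork my-node-image-b

# From a shell inside container_a, call container_b by name:
curl http://container_b:3000/someurl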
These calls between the two containers over the Docker bridge network are taking a very long time to complete (~5 seconds), when they typically run in under 100ms.
When I switch those containers from --net=mynetwork to --net=host, change my HTTP calls to use "localhost" as the host instead of the container name, and expose their ports via the -p flag, the calls run in the expected time of < 100ms.
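For comparison, the faster host-networking variant looks roughly like this (again a sketch with assumed image names; with --net=host the containers share the host's network stack, so the services are reachable on localhost directly):

docker run -d --name container_a --net=host my-node-image-a
docker run -d --name container_b --net=host my-node-image-b

# Calls now target localhost instead of a container name:
curl http://localhost:3000/someurl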
It appears that the Docker bridge network is causing my calls between containers to take a very long time. Any ideas where I can look to diagnose or correct this issue?
Containers can only communicate with each other if they share a network; that's one of the isolation features Docker provides. A container can belong to more than one network, and a network can contain multiple containers.
If you are running more than one container, you can let them communicate by attaching them to the same network. Docker creates virtual networks that let your containers talk to each other. Within a network, a container has an IP address and, optionally, a hostname.
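For example, a container can be attached to a user-defined network at run time or after the fact (the container and image names here are illustrative):

# Attach at start-up:
docker run -d --name web --net=mynetwork my-image

# Or connect an already-running container to an additional network:
docker network connect mynetwork web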
User-defined bridge networks are best when you need multiple containers to communicate on the same Docker host. Host networks are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
By default, a container is assigned an IP address for every Docker network it connects to. The address comes from the pool allocated to that network, so the Docker daemon effectively acts as a DHCP server for each container. Each network also has a default subnet mask and gateway.
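You can confirm the pool, gateway, and per-container addresses for a given network with docker network inspect:

docker network inspect mynetwork
# The output shows the network's Subnet and Gateway under IPAM.Config,
# and each attached container's IPv4Address under Containers.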
This issue was the result of a change to Docker's internal DNS that shipped in Docker 1.10.
More information can be found here: https://github.com/docker/docker/issues/20661
I enabled debug mode on the daemon and looked through the log as I made requests. I could see it try 8.8.8.8 first, then move on to 8.8.4.4, and finally reach the DNS IP I added for my host and resolve. My guess is that my corporate proxy causes those first two requests (to 8.8.x.x) to hang and eventually time out, so the slowness is the wait before the name resolves at the correct IP, the third one in the list.
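To reproduce this kind of diagnosis on Ubuntu 14.04 (which uses Upstart), you can enable daemon debug output with the -D flag and tail the daemon log while making requests; the log path below assumes the stock Upstart setup:

# In /etc/default/docker, add -D to enable debug logging:
DOCKER_OPTS="-D --dns 8.8.8.8 --dns 8.8.4.4 --dns 138.XX.XX.X"

sudo service docker restart
sudo tail -f /var/log/upstart/docker.log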
My solution was to change the DNS order in my /etc/default/docker file to put my internal IP first:
DOCKER_OPTS="--dns 138.XX.XX.X --dns 8.8.8.8 --dns 8.8.4.4"
This fixed our issue: container-name-based HTTP requests between containers now resolve against that host DNS IP first.
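After restarting the daemon, a quick sanity check is to time a name-based request from inside one of the containers (assuming curl is available in the image):

sudo service docker restart

# From a shell inside container_a:
time curl -s -o /dev/null http://container_b:3000/someurl
# With the internal DNS server listed first, this should complete
# in well under a second instead of ~5 seconds.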