Why can't I curl one docker container from another via the host

Tags:

docker

I really don't understand what's going on here. I simply want to perform an HTTP request from inside one Docker container to another Docker container, via the host, using the host's public IP, on a published port.

Here is my setup. I have my dev machine, and I have a Docker host machine with two containers. CONT_A listens on and publishes a web service on port 3000.

DEV-MACHINE

HOST (Public IP = 111.222.333.444)
  CONT_A (Publish 3000)
  CONT_B


On my dev machine (a completely different machine)

I can curl without any problems

curl http://111.222.333.444:3000 --> OK

When I SSH into the HOST

I can curl without any problems

curl http://111.222.333.444:3000 --> OK

When I execute inside CONT_B

Not possible, just timeout. Ping is fine though...

docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK

Why?

Ubuntu 16.04, Docker 1.12.3 (default network setup)

asked Dec 15 '16 by pqvst

People also ask

How do I connect two containers in different hosts?

For containers to communicate with each other, they need to be part of the same "network". Docker creates a virtual network called bridge by default and connects your containers to it. Within the network, containers are assigned an IP address, which they can use to address each other.

Can 2 containers communicate with each other?

If you are running more than one container, you can let your containers communicate with each other by attaching them to the same network. Docker creates virtual networks which let your containers talk to each other. In a network, a container has an IP address, and optionally a hostname.

Can two Docker containers use the same port?

Surprisingly or not, neither Docker nor Podman supports exposing multiple containers on the same host port right out of the box. Example: a docker-compose failure with "Service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash."
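To illustrate the point above: two containers can each listen on the same port internally as long as they are published on different host ports. A minimal sketch (the image name `myimage` is illustrative):

```shell
# Both containers listen on 3000 internally; the host-side ports must differ.
docker run -d --name app1 -p 3000:3000 myimage:latest
docker run -d --name app2 -p 3001:3000 myimage:latest
```

Trying `-p 3000:3000` for both would fail with a "port is already allocated" error on the second `docker run`.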

Why is it difficult for Docker containers to communicate with each other?

Containers can only communicate with each other if they share a network. Containers that don't share a network cannot communicate with one another. That's one of the isolation features provided by Docker. A container can belong to more than one network, and a network can have multiple containers inside.
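As a concrete sketch of the shared-network idea described above (container and image names are illustrative; a user-defined bridge network provides built-in DNS resolution by container name):

```shell
# Create a user-defined bridge network
docker network create mynet

# Attach both containers to it
docker run -d --name cont_a --network mynet myservice:latest
docker run -d --name cont_b --network mynet myclient:latest

# cont_b can now reach cont_a by container name, no published port needed
docker exec cont_b curl http://cont_a:3000
```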


4 Answers

I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using Docker swarm. You can find the full guide here, but in essence you do the following:

# Create the network (the overlay driver requires swarm mode: run `docker swarm init` first)
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
# Start container A
docker run -d --name=A --network=my-net producer:latest
# Start container B
docker run -d --name=B --network=my-net consumer:latest

# Magic has occurred
docker exec -it B /bin/bash
> curl A:3000   # MIND BLOWN!

Then inside container B you can just curl hostname A and it will resolve for you (even when you start scaling, etc.).

If you're not keen on using Docker swarm, you can still use legacy Docker links:

docker run -d --name B --link A:A consumer:latest

which would link any exposed (not published) ports in your A container.

And finally, if you start moving to production, forget about links and overlay networks altogether and use Kubernetes :-) It's a bit more difficult to set up initially, but it introduces a bunch of concepts and tools that make linking and scaling clusters of containers a lot easier. But that's just my personal opinion.

answered Oct 05 '22 by Rik Nauta


By running container B with the --network host argument, you can access container A using localhost; no public IP is needed.

> docker run -d --name containerB --network host yourimagename:version

After starting container B with the command above, you can curl container A from inside it:

> docker exec -it containerB /bin/bash
> curl http://localhost:3000

answered Oct 05 '22 by Arif Nazar Purwandaru


I had a similar problem. I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use Docker Compose. I wanted to use curl from cron to web from time to time to execute some PHP script in one of the applications. It should look as follows:

curl http://app1.example.com/some_maintance.php

But after some time I always got "host unreachable".

My first solution was to update /etc/hosts in the cron container and add:

1.2.3.4 app1.example.com

where 1.2.3.4 is the IP of the web container, and it worked. But this is a hack, and as far as I know such manual updates are discouraged; you should use extra_hosts in Docker Compose instead, which requires an explicit IP address rather than the container name.
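For reference, the extra_hosts approach mentioned above might look like this in a docker-compose.yml (service and image names are illustrative, and 1.2.3.4 stands in for the actual container IP):

```yaml
# docker-compose.yml fragment: pin a hostname to a fixed IP
# inside the cron container via extra_hosts.
services:
  cron:
    image: my-cron:latest
    extra_hosts:
      - "app1.example.com:1.2.3.4"
```

The downside, as noted, is that the IP is hard-coded, so it breaks if the target container's address changes.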

I tried the custom networks solution, which from what I have seen is the correct way to deal with this, but I never succeeded. If I ever learn how to do it I promise to update this answer.

Finally I used curl's ability to connect to the server by name while passing the domain in a Host header:

curl -H 'Host: app1.example.com' web/some_maintance.php

Not very beautiful, but it works.

(Here web is the name of my nginx container.)

answered Oct 05 '22 by marcinj


None of the current answers explain why the Docker containers behave as described in the question, or how to fix the problem without Docker networks.

Docker provides lightweight isolation of host resources for one or more containers.

The Docker network is isolated from the host network by default, and uses a bridge network (again, by default; overlay networks are also available) for inter-container communication.

(Bridge network diagram: https://docs.docker.com/engine/tutorials/bridge1.png)

Here is how to fix the problem without Docker networks.

From "How to connect to the Docker host from inside a Docker container?"

As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.

This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.

Starting with version 20.10, Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
It won't work out of the box on Linux, though, because you need to add the extra --add-host run flag:

--add-host=host.docker.internal:host-gateway

This is for development purposes and will not work in a production environment outside of Docker Desktop for Windows/Mac.

That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
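Putting the flag above together with the question's setup, a minimal sketch might look like this (the image name is illustrative):

```shell
# Run container B with host.docker.internal mapped to the host's gateway IP
docker run -d --name CONT_B \
  --add-host=host.docker.internal:host-gateway \
  myimage:latest

# From inside CONT_B, reach the service that CONT_A publishes on the host
docker exec -it CONT_B curl http://host.docker.internal:3000
```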

answered Oct 05 '22 by VonC