 

Multiple docker containers as web server on a single IP


I have multiple Docker containers on a single machine. Each container runs a process and a web server that provides an API for that process.

My question is: how can I access these APIs from my browser when the default port is 80? To access the web server inside a Docker container, I do the following:

sudo docker run -p 80:80 -t -i <yourname>/<imagename>

This way I can run the following from my computer's terminal:

curl http://hostIP:80/foobar

But how do I handle this with multiple containers, each running its own web server?

asked Jan 07 '15 by UpCat

People also ask

Can we run multiple Docker containers on a single host?

With Docker Compose, you can configure and start multiple containers with a single YAML file. This is really helpful if you are working on a stack that combines multiple technologies.
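For example, a setup like the one in the question could be expressed as a minimal docker-compose.yml sketch (the image names are the placeholders from the question; the service names are made up):

services:
  api1:
    image: <yourname>/<imagename>
    ports:
      - "8080:80"
  api2:
    image: <yourname1>/<imagename1>
    ports:
      - "8081:80"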

Does each Docker container get its own IP?

By default, a container is assigned an IP address for every Docker network it connects to. The address comes from the pool allocated to that network, so the Docker daemon effectively acts as a DHCP server for each container. Each network also has a default subnet mask and gateway.
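You can check a container's address yourself, e.g. (the container name is a placeholder):

docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <containername>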

Can I run multiple Docker containers on same port?

Surprisingly or not, neither Docker nor Podman supports exposing multiple containers on the same host port right out of the box. For example, docker-compose fails in this scenario with: "Service specifies a port on the host. If multiple containers for this service are created on a single host, the port will clash."
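A compose file that runs into this, assuming you scale the service to more than one container, would look roughly like:

services:
  api:
    image: <yourname>/<imagename>
    ports:
      - "8080:80"

# docker-compose up --scale api=2  -> the second container cannot bind host port 8080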

Is it possible to use two containers in a single service?

It's OK to have multiple processes, but to get the most benefit out of Docker, avoid making one container responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
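A sketch of both mechanisms (the network, volume, and container names are hypothetical):

docker network create app-net
docker volume create shared-data
docker run -d --name api1 --net app-net -v shared-data:/data <yourname>/<imagename>
docker run -d --name api2 --net app-net -v shared-data:/data <yourname1>/<imagename1>

Containers on app-net can reach each other by name (e.g. http://api1), and both see the same /data volume.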


2 Answers

You can either map each container to a different host port, e.g.

docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
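With that mapping, each API is reachable from your terminal on its own host port (hostIP as in the question):

curl http://hostIP:8080/foobar
curl http://hostIP:8081/foobar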

or put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.

Update:

The easiest way to set up a proxy would be to link it to the API containers, e.g. with an Apache config like:

RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
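Note that these rules rely on mod_rewrite and mod_proxy; on a Debian/Ubuntu-style Apache you would enable them with something like:

a2enmod rewrite proxy proxy_http
service apache2 restart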

You may then run your containers like this:

docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
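Both APIs are then reachable through the proxy's single port 80:

curl http://hostIP/api1/foobar
curl http://hostIP/api2/foobar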

This might be somewhat cumbersome though if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker at the moment). If this becomes a problem, you might look at approaches like Fig or auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The latter link also shows proxying with nginx.

Update II:

In more modern versions of Docker, you can use a user-defined network instead of the links shown above, which overcomes some of the inconveniences of the deprecated link mechanism.
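A rough equivalent of the linked setup above using a user-defined network (the network name is made up):

docker network create api-net
docker run -d --name api1 --net api-net <yourname>/<imagename>
docker run -d --name api2 --net api-net <yourname1>/<imagename1>
docker run -d --net api-net -p 80:80 <my_proxy_container>

On a user-defined network, Docker's embedded DNS resolves the container names api1 and api2, so the proxy config needs no links and the API containers can be restarted independently of the proxy.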

answered Sep 20 '22 by Mykola Gurov

Only a single process can be bound to a given host port at a time, so running multiple containers means each must be exposed on a different port number. Docker can do this automatically for you with the "-P" flag.

sudo docker run -P -t -i <yourname>/<imagename>

You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
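For example (the container ID and the assigned port are illustrative):

docker port <container_id>
80/tcp -> 0.0.0.0:32768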

answered Sep 20 '22 by Mark O'Connor