I have a Webapp running completely locally on my MacBook.
The Webapp has a Front End (Angular/Javascript) and a Back End (Python/Django) which implements a RESTful API.
I have Dockerized the Back End so that it is completely self-contained in a Docker Container and exposes port 8000. I map this port locally to 4026.
Now I need to Dockerize the Front End. But if I have these two docker containers running on my localhost, how can I get the FE to send HTTP requests to the BE? The FE container won't know anything that exists outside of it. Right?
This is how I run the FE:
$ http-server
Starting up http-server, serving ./
Available on:
  http://127.0.0.1:8080
  http://192.168.1.16:8080
Hit CTRL-C to stop the server
Please provide references explaining how I can achieve this.
If you are running more than one container, you can let your containers communicate with each other by attaching them to the same network. Docker creates virtual networks which let your containers talk to each other. In a network, a container has an IP address, and optionally a hostname.
Use --network="host" in your docker run command, then 127.0. 0.1 in your docker container will point to your docker host. Note: This mode only works on Docker for Linux, per the documentation.
Docker provides a host network which lets containers share your host's networking stack. This approach means localhost inside a container resolves to the physical host, instead of the container itself. Now your container can reference localhost or 127.0. 0.1 directly.
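For instance, a minimal sketch on a Linux host (the image name django-backend is hypothetical):

$ docker run --network="host" django-backend
$ curl http://127.0.0.1:8000/     # the container's port 8000 is reachable on the host's loopback, no -p mapping needed

Note that with host networking the -p/--publish flags are ignored, since the container does not get its own network namespace.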
Containers can only communicate with each other if they share a network; containers on disjoint networks cannot reach one another. That's one of the isolation features provided by Docker. A container can belong to more than one network, and a network can have multiple containers inside.
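As a sketch (the network and container names here are hypothetical), you can attach already-running containers to an additional user-defined network:

$ docker network create app-net
$ docker network connect app-net backend
$ docker network connect app-net frontend
$ docker network inspect app-net     # lists the containers now attached to the network

A container connected this way stays on its original network as well, which is how a container can belong to more than one network at once.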
The way to do this today is Docker Networking.
The short version is that you can run docker network ls to get a listing of your networks. By default, you should have one called bridge. You can either create a new network or use this one by passing --net=bridge when creating your container. From there, containers launched on the same network can communicate with each other over their exposed ports.
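For example, a sketch that launches both containers on the same user-defined network (the network and image names are hypothetical):

$ docker network create webapp-net
$ docker run -d --net=webapp-net --name backend django-backend
$ docker run -d --net=webapp-net --name frontend -p 8080:8080 angular-frontend

On a user-defined network, Docker's embedded DNS lets each container resolve the others by name, so the frontend container can reach the backend at http://backend:8000 while the host still reaches the frontend at http://localhost:8080.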
If you use Docker Compose, as has been mentioned, it will create a bridge network for you when you run docker-compose up, named after your project's folder with _default appended. Each service defined in the Compose file is launched on this network automatically.
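For example, a minimal docker-compose.yml sketch for this setup (the service names and build paths are assumptions; only the 8000/4026 and 8080 ports come from the question):

version: "3"
services:
  backend:
    build: ./backend          # hypothetical directory containing the Django Dockerfile
    ports:
      - "4026:8000"           # host 4026 -> container 8000, as in the question
  frontend:
    build: ./frontend         # hypothetical directory containing the Angular/http-server Dockerfile
    ports:
      - "8080:8080"

Running docker-compose up puts both services on the <project>_default network, where the frontend container can reach the backend by its service name, e.g. http://backend:8000.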
With all that said, I'm guessing your frontend is a webserver that just serves up the HTML/JS/CSS, and those pages access the backend service. If that's accurate, you don't really need container-to-container communication in this case anyway: both containers need to be exposed to the host, since the connections originate from the browser on the client system, not from inside the frontend container.
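Under that assumption, a sketch using only ports published to the host (the image names are hypothetical; 4026 is the host port you already map the backend to):

$ docker run -d -p 4026:8000 django-backend
$ docker run -d -p 8080:8080 angular-frontend

The browser loads the pages from http://localhost:8080, and the JavaScript they contain calls the API at http://localhost:4026, so neither request ever originates inside a container.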
There are multiple ways to do this, and the simplest is to use Docker Compose, which lets you define and run multiple services together.
If you are not using Docker Compose and are running individual containers, publish each service's port to the host and reach the services through the host, like so:
docker run -p 3306:3306 mysql
docker run -p 8088:80 nginx
Now you can communicate as:
http://hostip:3306
http://hostip:8088
where hostip is the IP address of your Docker host.