I have Docker version 1.10 with the embedded DNS service.
I have created two service containers in my docker-compose file. They can reach each other by hostname and by IP, but when I try to reach one of them from the host machine, it only works by IP, not by hostname.
So, is it possible to access a Docker container from the host machine by its hostname in Docker 1.10?
Update:
docker-compose.yml
```yaml
version: '2'
services:
  service_a:
    image: nginx
    container_name: docker_a
    ports:
      - 8080:80
  service_b:
    image: nginx
    container_name: docker_b
    ports:
      - 8081:80
```
Then I start it with: `docker-compose up --force-recreate`
When I run (from the host):

```
docker exec -i -t docker_a ping -c4 docker_b   # works
docker exec -i -t docker_b ping -c4 docker_a   # works
ping 172.19.0.2                                # works (172.19.0.2 is docker_b's IP)
ping docker_a                                  # fails
```

The output of `docker network inspect test_default` is:
```json
[
    {
        "Name": "test_default",
        "Id": "f6436ef4a2cd4c09ffdee82b0d0b47f96dd5aee3e1bde068376dd26f81e79712",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1/16"
                }
            ]
        },
        "Containers": {
            "a9f13f023761123115fcb2b454d3fd21666b8e1e0637f134026c44a7a84f1b0b": {
                "Name": "docker_a",
                "EndpointID": "a5c8e08feda96d0de8f7c6203f2707dd3f9f6c3a64666126055b16a3908fafed",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "c6532af99f691659b452c1cbf1693731a75cdfab9ea50428d9c99dd09c3e9a40": {
                "Name": "docker_b",
                "EndpointID": "28a1877a0fdbaeb8d33a290e5a5768edc737d069d23ef9bbcc1d64cfe5fbe312",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```
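As a side note, the name-to-IP mapping shown in this output can be pulled out programmatically. A minimal Python sketch (the JSON below is an abridged copy of the inspect output above, keeping only the fields used):

```python
import json

# Abridged `docker network inspect test_default` output (only the fields we use)
inspect_output = """
[{"Name": "test_default",
  "Containers": {
    "a9f13f02...": {"Name": "docker_a", "IPv4Address": "172.19.0.3/16"},
    "c6532af9...": {"Name": "docker_b", "IPv4Address": "172.19.0.2/16"}}}]
"""

def container_ips(inspect_json):
    """Map each container name to its IP address (CIDR suffix stripped)."""
    network = json.loads(inspect_json)[0]
    return {c["Name"]: c["IPv4Address"].split("/")[0]
            for c in network["Containers"].values()}

print(container_ips(inspect_output))
# {'docker_a': '172.19.0.3', 'docker_b': '172.19.0.2'}
```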
You need a DNS server to map container IPs to hostnames. If you want an out-of-the-box solution, one option is for example Kontena. It comes with network overlay technology from Weave, which is used to create virtual private LAN networks for each service, and every service can be reached by its service name.
A container's hostname defaults to the container's ID in Docker. You can override the hostname using `--hostname`. When connecting to an existing network using `docker network connect`, you can use the `--alias` flag to specify an additional network alias for the container on that network.
The Docker bridge driver automatically installs iptables rules on the host machine so that containers on different bridge networks cannot communicate directly with each other. Such communication is only possible if the containers share a bridge network, or if the iptables rules are adjusted to allow it.
As answered here, there is a software solution for this; copying the answer:
There is an open-source application that solves this issue; it's called DNS Proxy Server.
It's a DNS server that resolves container hostnames, and when it can't resolve a hostname it falls back to public nameservers.
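The resolution strategy is simple: answer from the table of known container hostnames first, and only fall back to an upstream resolver on a miss. A rough Python sketch of that logic (the table and function names here are illustrative, not DNS Proxy Server's actual internals):

```python
import socket

# Illustrative table of container hostnames -> IPs (not the real data source)
container_hosts = {"redis.dev.intranet": "172.21.0.3"}

def resolve(hostname):
    """Try the container table first; fall back to the normal resolver on a miss."""
    ip = container_hosts.get(hostname)
    if ip is not None:
        return ip
    # Miss: defer to the system resolver (stand-in for public nameservers)
    return socket.gethostbyname(hostname)

print(resolve("redis.dev.intranet"))  # 172.21.0.3
```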
```
$ docker run --hostname dns.mageddo --name dns-proxy-server -p 5380:5380 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /etc/resolv.conf:/etc/resolv.conf \
    defreitas/dns-proxy-server
```
It will set itself as your default DNS automatically (and revert to the original when it stops).
Then start your containers:

```
docker-compose up
```

docker-compose.yml:

```yaml
version: '2'
services:
  redis:
    container_name: redis
    image: redis:2.8
    hostname: redis.dev.intranet
    network_mode: bridge # so it can also resolve other containers' names, e.g. elasticsearch
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:2.2
    hostname: elasticsearch.dev.intranet
```
From the host:

```
$ nslookup redis.dev.intranet
Server:     172.17.0.2
Address:    172.17.0.2#53

Non-authoritative answer:
Name:   redis.dev.intranet
Address: 172.21.0.3
```
From another container:

```
$ docker exec -it redis ping elasticsearch.dev.intranet
PING elasticsearch.dev.intranet (172.21.0.2): 56 data bytes
```
It also resolves Internet hostnames:

```
$ nslookup google.com
Server:     172.17.0.2
Address:    172.17.0.2#53

Non-authoritative answer:
Name:   google.com
Address: 216.58.202.78
```
Here's what I do.
I wrote a Python script called `dnsthing`, which listens to the Docker events API for containers starting or stopping. It maintains a `hosts`-style file with the names and addresses of containers. Containers are named `<container_name>.<network>.docker`, so for example if I run this:

```
docker run --rm --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
```

I get this:

```
172.17.0.2 mysql.bridge.docker
```
I then run a `dnsmasq` process pointing at this hosts file. Specifically, I run a dnsmasq instance using the following configuration:

```
listen-address=172.31.255.253
bind-interfaces
addn-hosts=/run/dnsmasq/docker.hosts
local=/docker/
no-hosts
no-resolv
```
And I run the `dnsthing` script like this:

```
dnsthing -c "systemctl restart dnsmasq_docker" \
    -H /run/dnsmasq/docker.hosts --verbose
```
So:

- `dnsthing` updates `/run/dnsmasq/docker.hosts` as containers stop/start
- `dnsthing` runs `systemctl restart dnsmasq_docker`
- `dnsmasq_docker` runs `dnsmasq` using the above configuration, bound to a local bridge interface with the address 172.31.255.253

The "main" dnsmasq process on my system, maintained by NetworkManager, uses this configuration from `/etc/NetworkManager/dnsmasq.d/dockerdns`:

```
server=/docker/172.31.255.253
```

That tells dnsmasq to pass all requests for hosts in the `.docker` domain to the `docker_dnsmasq` service.
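The hosts-file generation half of this setup reduces to a small pure function. A simplified sketch (the tuple input stands in for the data a script like `dnsthing` would gather from the Docker events/inspect APIs):

```python
def hosts_lines(containers):
    """Render hosts(5)-style lines, naming each container <name>.<network>.docker.

    `containers` is a list of (name, network, ip) tuples -- a simplified
    stand-in for what the Docker events/inspect APIs actually report.
    """
    return "\n".join(f"{ip}\t{name}.{network}.docker"
                     for name, network, ip in containers)

print(hosts_lines([("mysql", "bridge", "172.17.0.2")]))
# 172.17.0.2	mysql.bridge.docker
```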
This obviously requires a bit of setup to put everything together, but after that it seems to Just Work:
```
$ ping -c1 mysql.bridge.docker
PING mysql.bridge.docker (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.087 ms

--- mysql.bridge.docker ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
```