I'm trying to work out how to use swarm mode in Docker properly. First I tried running containers on my two workers and one manager machine without specifying a custom network, so I'm using the default ingress overlay network. However, when I use the ingress network, for some reason I cannot resolve tasks.myservice.
So I tried configuring a custom network like this:
docker network create -d overlay elasticnet
So now, when I bash into one of the containers, I can successfully resolve tasks.myservice, but I can no longer reach the port I've defined with --publish in my service creation from outside the swarm (which I could when I used the ingress network).
Is there any way to either:
use the ingress network and still be able to resolve tasks.myservice (or any other DNS name that resolves to all of my service containers), or
use a custom network but have --publish expose the ports correctly so I can access them externally?
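For reference, this is roughly how I test resolution from inside one of the running containers; the container ID is just a placeholder, and nslookup may not be installed in every image:
# exec into a running task container and try to resolve the service's task records
docker exec -it <container-id> nslookup tasks.myservice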
EDIT
This is how I create my service.
Without a custom network:
docker service create --replicas 3 --label elasticsearch --endpoint-mode vip --name elastic -e ES_HOSTS="tasks.elastic" --publish 9200:9200 --mount type=bind,source=/tmp/es,destination=/usr/share/elasticsearch/config --update-delay 10s es:latest
With a custom network:
docker service create --replicas 3 --network elasticnet --label elasticsearch --endpoint-mode vip --name elastic -e ES_HOSTS="tasks.elastic" --publish 9200:9200 --mount type=bind,source=/tmp/es,destination=/usr/share/elasticsearch/config --update-delay 10s es:latest
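To compare the two setups, I inspect the service after creating it; --pretty prints a human-readable summary that includes the attached networks and published ports (exact output varies by Docker version):
# summary of the service spec, including Networks and Ports
docker service inspect elastic --pretty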
Look at the example below:
1. Create a user-defined overlay network:
sudo docker network create overlay1 --driver overlay
It then shows up in docker network ls as:
9g4ipjn513iy   overlay1   overlay   swarm
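You can also double-check the driver and scope directly; the expected output in the comment below is what a swarm-scoped overlay network should report:
# print only the driver and scope of the new network
sudo docker network inspect overlay1 --format '{{.Driver}} {{.Scope}}'
# expected output: overlay swarm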
2. Run a service with published ports and 3 replicas:
sudo docker service create --name nginx --replicas 3 --publish 80:80 --network overlay1 nginx
You don't have to specify --endpoint-mode if you're going to use VIP; it's the default.
sudo docker service ps nginx
ID                         NAME     IMAGE  NODE  DESIRED STATE  CURRENT STATE           ERROR
dbz8b4jjfp6xg3vqunt1x8shx  nginx.1  nginx  dg1   Running        Running 13 minutes ago
9d8zr6zka0sp99vadr8eqq2t2  nginx.2  nginx  dg3   Running        Running 13 minutes ago
cwbcegunuxz5ye9a8ghdrc4fg  nginx.3  nginx  dg3   Running        Running 12 minutes ago
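As a side note, if you specifically want DNS round-robin behaviour instead of a VIP, you could create the service with --endpoint-mode dnsrr. As far as I know, dnsrr cannot be combined with routing-mesh published ports, so this sketch (the service name is just an example) omits --publish:
# DNS round-robin service: the service name resolves to all task IPs instead of a single VIP
sudo docker service create --name nginx-dnsrr --replicas 3 --network overlay1 --endpoint-mode dnsrr nginx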
3. Verification. Testing the published port from one of the nodes:
administrator@dg1:~$ telnet localhost 80
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Testing the published port from an external host:
user@externalhost /home/balrog% telnet dg1 80
Trying 172.30.135.101...
Connected to 172.30.135.101.
Escape character is '^]'.
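If telnet isn't available, a quick HTTP check works just as well (assuming curl is installed on the host running the check):
# HEAD request against the published port; any swarm node will answer via the routing mesh
curl -I http://dg1:80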
Testing DNS lookup from inside one of the containers:
sudo docker exec -it 05d05f934c68 /bin/bash
root@05d05f934c68:/# ping nginx
PING nginx (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: icmp_seq=0 ttl=64 time=0.050 ms
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms
root@05d05f934c68:/# ping tasks.nginx
PING tasks.nginx (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: icmp_seq=0 ttl=64 time=0.037 ms
64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=0.149 ms
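ping only shows a single address, so to confirm that tasks.nginx really returns one A record per task you can do a plain DNS lookup from inside the container (nslookup/dig may need to be installed in the image first):
# tasks.<service> should resolve to one address per running replica, bypassing the VIP
root@05d05f934c68:/# nslookup tasks.nginx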
Elasticsearch-specific suggestion:
Elasticsearch has its own clustering that provides failover and load-balancing features. You can use shards and replicas per index across the Elasticsearch hosts that are part of the Elasticsearch cluster.
That being said, I suggest you create 3 services with 1 replica each, join them into an Elasticsearch cluster, and then create indexes with 3 shards and 3 replicas. You will get load balancing and failover within the Elasticsearch cluster itself.
To read more about shards, see the Elasticsearch documentation.
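To make that concrete, here is a rough sketch. The service names (es1/es2/es3), the elasticnet network, and the ES_HOSTS wiring are assumptions you would need to adapt to your image and its configuration:
# three single-replica services on the same overlay network, forming one Elasticsearch cluster
sudo docker service create --replicas 1 --network elasticnet --name es1 -e ES_HOSTS="es2,es3" --publish 9200:9200 es:latest
sudo docker service create --replicas 1 --network elasticnet --name es2 -e ES_HOSTS="es1,es3" es:latest
sudo docker service create --replicas 1 --network elasticnet --name es3 -e ES_HOSTS="es1,es2" es:latest
# then create an index with the shard/replica counts suggested above
curl -XPUT 'http://localhost:9200/myindex' -H 'Content-Type: application/json' -d '{"settings": {"number_of_shards": 3, "number_of_replicas": 3}}'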