I am quite new to Docker and Consul and am now trying to set up a local Consul cluster consisting of 3 dockerized nodes. I am using the progrium/consul Docker image and have worked through the whole tutorial and the examples described there.
The cluster works fine until it comes to restarting / rebooting.
Here is my `docker-compose.yml`:
```yaml
---
node1:
  command: "-server -bootstrap-expect 3 -ui-dir /ui -advertise 10.67.203.217"
  image: progrium/consul
  ports:
    - "10.67.203.217:8300:8300"
    - "10.67.203.217:8400:8400"
    - "10.67.203.217:8500:8500"
    - "10.67.203.217:8301:8301"
    - "10.67.203.217:8302:8302"
    - "10.67.203.217:8301:8301/udp"
    - "10.67.203.217:8302:8302/udp"
    - "172.17.42.1:53:53/udp"
  restart: always
node2:
  command: "-server -join 10.67.203.217"
  image: progrium/consul
  restart: always
node3:
  command: "-server -join 10.67.203.217"
  image: progrium/consul
  restart: always
registrator:
  command: "consul://10.67.203.217:8500"
  image: "progrium/registrator:latest"
  restart: always
```
After a restart I get messages like:

```
[ERR] raft: Failed to make RequestVote RPC to 172.17.0.103:8300: dial tcp 172.17.0.103:8300: no route to host
```
which is obviously because of the new IPs my nodes 2 and 3 get after the restart. So is it possible to prevent this? I read about linking and environment variables, but it seems those variables are also not updated after a reboot.
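To confirm that the addresses really change, the container IPs can be checked before and after a restart. A minimal sketch; the container name `consul_node2_1` is an assumption based on the default names docker-compose generates from the project directory:

```shell
# Print the internal IP of node2's container
# (name assumed to be the docker-compose default, e.g. consul_node2_1)
docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul_node2_1

# Restart the container and check again -- on the default bridge the
# address is assigned dynamically, so it may differ after the restart
docker restart consul_node2_1
docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul_node2_1
```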
I had the same problem until I read that there is an ARP table caching problem when you restart a containerized Consul node.
As far as I know, there are two workarounds.
The owner (Jeff Lindsay) told me that they are redesigning the entire container with this fix built in; no timelines, unfortunately.
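One workaround that has been discussed is clearing the stale ARP/neighbor entry on the host after a container restart, so peers stop dialing the cached hardware address of the old container. A rough sketch; the interface name `docker0` (the default bridge) and the IP `172.17.0.103` (taken from the raft error above) are assumptions for your setup:

```shell
# Run on the Docker host after restarting a consul container:
# flush all neighbor entries learned on the default bridge so the
# new container MAC is re-learned
ip neigh flush dev docker0

# or delete just the single stale entry for the old container IP
arp -d 172.17.0.103
```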
Source: https://github.com/progrium/docker-consul/issues/26