I have installed Redis in Docker using the commands below:
docker run -d -p 6379:6379 redis:3.0.1
docker run -d -p 6380:6379 redis:2.8.20
Now I need to access these Redis instances from another machine:
public static ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("IPOFDOCKERINSTALLEDMACHINE:6379");
My app is hosted on another machine, on a different server.
When I run the app, I get the exception below:
It was not possible to connect to the redis server(s); to create a disconnected multiplexer, disable AbortOnConnectFail. SocketFailure on PING
Is there anything that needs to be changed in Docker or in the Oracle virtual machine?
Connecting multiple containers across the network on different servers is a perfect use case for Docker swarm mode. You should create an overlay network, join the servers to the swarm, and attach the running containers to that network, as described here. Depending on your knowledge of the swarm ecosystem, you could try different solutions.
Starting with Docker 1.12, swarm mode is built into the Docker engine, so there is no separate swarm image to pull. If you want to manage the containers manually, you could run:
# initialize the swarm manager on your server
$ docker swarm init --advertise-addr $(hostname -I | awk '{print $1}')
# create a cross-server container network
$ docker network create --driver overlay redisnet
This command will output a join command to run on your other nodes; it lets a server join the swarm as a worker. If you also want to launch services from another server, run the following command instead, which will give you the manager join token:
$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-1ewyz5urm5ofu78vddmrixfaye5mx0cnuj0hwxdt7baywmppav-0p5n6b7hz170gb79uuvd2ipoy \
<IP_ADDRESS>:2377
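Once the join command has been run on each server, a quick sanity check (a sketch, run from the manager node) confirms the nodes and the overlay network are in place:

```shell
# On the manager: list all nodes in the swarm. Every server that ran
# the join command should appear with status "Ready".
docker node ls

# The overlay network created earlier should be listed here as well.
docker network ls --filter driver=overlay
```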
When your nodes have joined the swarm, you can launch your Redis services with replicas:
$ docker service create --network redisnet \
--name redis --replicas 1 redis:3.0.1
$ docker service create --network redisnet \
--name old_redis --replicas 1 redis:2.8.20
$ docker service create --network redisnet --name app <APP_IMAGE>
Now all your containers can reach each other using the service name as the hostname for that service (swarm mode provides internal DNS-based service discovery). If you only need to access your Redis services from your application, this should do it.
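As a quick sketch of that service discovery (assuming the redis service above is running on this node), you can exec into the Redis task's container and query the service by name over the overlay network:

```shell
# Find the container backing the redis service task on this node and
# run redis-cli inside it. Swarm's internal DNS resolves the hostname
# "redis" to the service's virtual IP; a PONG reply confirms it works.
docker exec $(docker ps -q -f name=redis) redis-cli -h redis ping
```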
You can also expose ports with the same -p option, but then you have to discover which server runs your service. You would also need to follow the other answers to check whether any ports are blocked on your VM.
Other solutions such as Kubernetes or Mesos also exist, but swarm mode is the built-in, official way to go.
Docker may be binding your exposed ports to localhost; that is why you are seeing these issues. To make Docker bind those ports to all interfaces explicitly, change the command you use to launch the containers:
docker run -d -p 0.0.0.0:6380:6379 redis:2.8.20
docker run -d -p 0.0.0.0:6379:6379 redis:3.0.1
The change is that you now specify both a host address and a port for the host side of the mapping (the container side stays a single port, as before).
I hope that helps you, mate! And please, be careful with this: it's not something you'd typically want to do. Make sure you are not leaving your Redis service unauthenticated if you use this for anything even close to production.
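To confirm which address a port is actually bound to (a quick check; ss is the modern replacement for netstat on Linux), you can inspect the listening sockets:

```shell
# List listening TCP sockets and filter for the Redis ports.
# "0.0.0.0:6379" (or ":::6379") means all interfaces and is reachable
# remotely; "127.0.0.1:6379" means localhost only.
ss -lnt | grep -E ':(6379|6380)' || echo "nothing listening on 6379/6380"
```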
Given your run commands, without any IP restriction:
$ docker run -d -p 6379:6379 redis:3.0.1
$ docker run -d -p 6380:6379 redis:2.8.20
And the netstat output in the comments, which shows the port bound to localhost:
$ netstat -na | grep 6379 && netstat -na | grep 6380
TCP 127.0.0.1:6379 0.0.0.0:0 LISTENING
There is a mismatch between the port you published to all interfaces and the port that is listening on localhost. There are a few possibilities:

1. Something else is listening on 127.0.0.1:6379. Run sudo netstat -lntp | grep 6379 to find the process using that port.

2. Most likely, the containers are not running. I say this because there wasn't anything on 6380. Check whether your containers are running with docker ps -a. The ps output will include any port bindings if the container is running. If the containers exist, even if exited, you can check their logs with docker logs container_id to see if there are any errors.

3. Docker is not running on your local host. If echo $DOCKER_HOST points to an IP or hostname, your client is sending the commands there. This would also apply if you are running your commands inside a VM and checking the results on your physical host.

4. Unlikely, but you could have changed the default IP of dockerd to 127.0.0.1 with the --ip flag. By default, published ports listen on all interfaces (0.0.0.0).

5. Your run command listed above may not be accurate. If you published the port with docker run -d -p 127.0.0.1:6379:6379 redis:3.0.1, that would explain the binding to 127.0.0.1. Removing the 127.0.0.1: prefix makes the port bind to all interfaces by default.
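The checks above can be sketched as one small script (a sketch, assuming a Linux host with ss available; it degrades gracefully when the docker CLI is absent):

```shell
#!/bin/sh
# Walk the checklist above for one published port.
PORT=6379

# 1) What is listening on the port, and on which address?
echo "--- sockets on port $PORT ---"
ss -lnt | grep ":$PORT" || echo "nothing is listening on $PORT"

# 2) Are the containers actually running? Exited containers show here too.
echo "--- containers ---"
if command -v docker >/dev/null 2>&1; then
  docker ps -a
else
  echo "docker CLI not found"
fi

# 3) Is the client talking to a remote daemon instead of the local one?
echo "--- DOCKER_HOST ---"
echo "${DOCKER_HOST:-not set (commands go to the local daemon)}"
```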
Based on your error:
It was not possible to connect to the redis server(s); to create a disconnected multiplexer, disable AbortOnConnectFail. SocketFailure on PING
I think your other machine cannot reach the machine running the Docker containers.

1. First, ping the Docker machine to confirm basic network connectivity:

ping IPOFDOCKERMACHINE

2. If step 1 is successful, use telnet to make sure you can reach the ports you need. Please refer to this article about the telnet command; it shows how to enable telnet on Windows.

telnet IPOFDOCKERMACHINE 6380
telnet IPOFDOCKERMACHINE 6379

If telnet fails, you don't have access to the ports you need, and you need to fix that before moving forward.
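If telnet is not available, a bash-only reachability check works the same way (a sketch; /dev/tcp is a bash feature, not a real file, and IPOFDOCKERMACHINE is the placeholder from above):

```shell
# Attempt a TCP connection to each port; bash's /dev/tcp pseudo-device
# succeeds only if the remote port accepts the connection.
for port in 6379 6380; do
  if timeout 3 bash -c "exec 3<>/dev/tcp/IPOFDOCKERMACHINE/$port" 2>/dev/null; then
    echo "port $port is reachable"
  else
    echo "port $port is NOT reachable"
  fi
done
```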