I have two worker nodes, worker1 and worker2, and one swarm manager. I'm running all the services on the worker nodes only. I need to run docker exec from the manager to access some of the containers created on the worker nodes, but I keep getting an error that the container is not recognized. I know I can run docker exec on either of the worker nodes and it works fine, but I don't want to have to find out which node a service is running on and then SSH to that node just to run a docker exec command. Is there a way to do this in swarm or not?
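For example (with a hypothetical service name, my-service), this fails on the manager because the task's container actually lives on a worker, so the manager's local daemon doesn't know it:

    # On the manager node; swarm task containers are named <service>.<slot>.<task-id>
    docker exec -it my-service.1.abc123 sh
    # Error response from daemon: No such container: my-service.1.abc123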
First, create an overlay network on a manager node using the docker network create command with the --driver overlay flag. After you create an overlay network in swarm mode, all manager nodes have access to the network. The swarm extends my-network to each node that runs a task of a service attached to it.
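A minimal sketch of the commands involved (the service name my-web and the nginx image are placeholders, not from the original answer):

    # On a manager node: create the overlay network
    docker network create --driver overlay my-network

    # Attach a service to it; the swarm extends my-network to every
    # node that runs one of this service's tasks
    docker service create --name my-web --replicas 2 --network my-network nginx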
Docker Swarm is not being deprecated and is still a viable method for Docker multi-host orchestration, but Docker Swarm Mode (which uses the SwarmKit libraries under the hood) is the recommended way to start a new Docker project that requires orchestration across multiple hosts.
Swarm mode does not currently have a way to run an exec on a running task. You need to find the container and run the exec directly on the host where it lives. You can configure the workers to listen on a TLS-protected port, which gives you remote access to their Docker daemons (see Docker's guide on protecting the daemon socket), and you can look up the node for each task in a service by checking the output of docker service ps $service_name.
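Putting those pieces together, here is a rough sketch, assuming you have SSH access to the workers, the node hostnames reported by docker service ps are resolvable, and my-service is a placeholder service name:

    # Find a node running a task of the service
    NODE=$(docker service ps --filter 'desired-state=running' \
        --format '{{.Node}}' my-service | head -n 1)

    # On that node, find the task's container name and exec into it
    CONTAINER=$(ssh "$NODE" docker ps --filter "name=my-service." \
        --format '{{.Names}}' | head -n 1)
    ssh -t "$NODE" docker exec -it "$CONTAINER" sh

    # Alternatively, if the workers expose a TLS-protected daemon port
    # (conventionally 2376) and your client certs are configured:
    docker --host "tcp://$NODE:2376" --tlsverify exec -it "$CONTAINER" sh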