How to share volumes across multiple hosts in docker engine swarm mode?

Can we share a common/single named volume across multiple hosts in Docker Engine swarm mode? What's the easiest way to do it?

vivekyad4v, asked Nov 01 '16

People also ask

Can volume be shared across multiple containers?

For multiple containers writing to the same volume, you must design the applications running in those containers to coordinate writes to the shared data store, or you risk data corruption.

Can docker volumes be shared?

You can manage volumes using Docker CLI commands or the Docker API. Volumes work on both Linux and Windows containers. Volumes can be more safely shared among multiple containers. Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.

Can two docker containers share the same volume?

Multiple containers can run with the same volume when they need access to shared data. Docker creates a local volume by default.
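To make the single-host case concrete, here is a minimal sketch (volume and file names are hypothetical) showing two containers reading and writing the same named volume:

```shell
# Create a named volume (Docker's default "local" driver).
docker volume create shared-data

# One container writes a file onto the volume.
docker run --rm -v shared-data:/data alpine \
  sh -c 'echo hello > /data/msg.txt'

# A second container mounts the same volume and sees the same file.
docker run --rm -v shared-data:/data alpine cat /data/msg.txt
```

This works because both containers run on the same host; the original question is precisely about extending this across multiple swarm nodes, which the `local` driver alone does not do.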


2 Answers

If you have an NFS server set up, you can use an NFS export as a volume in Docker Compose like this:

volumes:
    grafana:
      driver: local
      driver_opts:
        type: nfs
        o: addr=192.168.xxx.xx,rw
        device: ":/PathOnServer"
herm, answered Sep 24 '22

In the grand scheme of things

The other answers are definitely correct. If you feel like you're still missing something or are coming to the conclusion that things might never really improve in this space, then you might want to reconsider the use of the typical POSIX-like hierarchical filesystem abstraction. Not all applications really need it (I might go as far as to say that few do). Maybe yours doesn't either.

In defense of filesystems

Relying on shared or distributed filesystems is still very common in many circles, but usually these people know their remote/distributed filesystems very well and know how to set them up and leverage them properly (and they might be very good systems too, though often not ones with existing Docker volume drivers). Sometimes it's also partly because they're forced to (codebases that can't or shouldn't be rewritten to support other storage backends). In those cases, using, configuring or even writing arbitrary Docker volume drivers is only a secondary concern.

Alternatives

If you have the option however, then evaluate other persistence solutions for your applications. Many implementations won't use POSIX filesystem interfaces but network interfaces instead, which pose no particular infrastructure-level difficulties in clusters such as Docker Swarm.

Solutions managed by third-parties (e.g. cloud providers)

Should you succeed in removing all dependencies on filesystems for persistent and shared data (it's still fine for transient local state), then you might claim to have fully "stateless" applications. Of course there is almost always state persisted somewhere still, but the idea is that you don't handle it yourself. Many cloud providers (if that's where you're hosting things) offer fully managed solutions for handling persistent state, so you don't have to care about it at all. If you go this route, do consider managed services that use APIs compatible with implementations you can run locally for testing (for example, by running a Docker container based on an image for that implementation, provided by a third party or maintained by you).

DIY solutions

If you do want to manage persistent state yourself within a Docker Swarm cluster, then the filesystem abstraction is often inevitable (and you'd probably have more difficulties targeting block devices directly anyway). You'll want to play with node and service constraints to ensure the requirements of whatever you use to persist data are fulfilled. For certain things like a central DBMS server it could be easy ("always run the task on that specific node only"), for others it could be way more involved.
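The "always run the task on that specific node" case can be sketched with node labels and service constraints; the node name, label, and service below are hypothetical, and `postgres:15` stands in for whatever stateful image you run:

```shell
# Label the node that holds the persistent data.
docker node update --label-add storage=primary db-node

# Pin the stateful service to that node so its local volume
# is always present where the task is scheduled.
docker service create \
  --name postgres \
  --constraint 'node.labels.storage == primary' \
  --mount type=volume,source=pgdata,target=/var/lib/postgresql/data \
  postgres:15
```

The trade-off is obvious: if that node goes down, the service cannot be rescheduled elsewhere, which is exactly the kind of availability concern that makes managed solutions attractive.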

The task of setting up, scaling and monitoring such a setup is definitely not trivial, which is why many application developers are happy to let somebody else (e.g. cloud providers) do it. It's still a very cool space to explore, though given that you had to ask this question, it's likely not something you should focus on if you're on a deadline.

Conclusion

As always, use the right abstraction for the job, and pause to think about what your strengths are and where to spend your resources.

tne, answered Sep 23 '22