I've been looking for weeks now for a suitable solution for persistent distributed file storage for my Docker Swarm implementation to no avail.
I have three nodes in a swarm, with multiple instances of multiple applications running perfectly fine.
I'm at the point now where I need consistent storage across each of my applications, but I can't seem to find any straightforward solution to the problem.
I want an approach that requires as little configuration as possible.
I have a single cluster, with three nodes, and let's just say an instance of an application running on each.
What is the best option here:
1. Something that replicates files across each of my nodes
2. Something that replicates files across each of my containers
3. A separate file store that I connect my nodes to
4. A separate file store that I connect my containers to
Either way, I need some form of replication at a minimum for redundancy.
Appreciate if anyone can set me on the right course with some options!
Well, performance- and storage-space-wise, replicating is not the answer. You need a solution with shared storage, like what we use in server virtualization. There are many SAN storage brands out there, if you've got the cash.

Now, if you have a virtualization infrastructure in place like me (VMware vSphere in my case), you can use your existing SAN datastores to store volumes via the docker-volume-vsphere driver. You create Docker volumes on VMware datastores, and they are shared across your Docker swarm, so in case of failure Docker will start the containers (with their persistent data on the volume) on another node in the swarm.
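A minimal sketch of what that looks like with the vSphere Docker Volume Service (vDVS) plugin, assuming the plugin is installed on each ESXi host and Docker node; the volume name, size, mount path, and image here are hypothetical placeholders:

```shell
# Create a volume backed by a vSphere datastore (visible to the whole swarm)
docker volume create --driver=vsphere --name=appdata -o size=10gb

# Run a swarm service that mounts it; if the node dies, the service is
# rescheduled on another node and reattaches the same datastore-backed volume
docker service create --name myapp \
  --mount type=volume,source=appdata,target=/var/lib/app,volume-driver=vsphere \
  myapp:latest
```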
There are similar drivers for other vendors (virtualization/storage) too.
There are also cloud-based solutions: you can use Ceph, GlusterFS, Flocker, or other open-source distributed file systems for storing your containers' persistent data.
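For example, with GlusterFS exposed over its NFS-compatible interface (or plain NFS) you can point Docker's built-in `local` driver at the shared export, so every node mounts the same data. A hedged sketch of a Compose/stack file; the address, export path, and image are placeholders you would replace with your own:

```yaml
version: "3.7"
services:
  app:
    image: myapp:latest            # hypothetical image
    volumes:
      - shared:/var/lib/app
volumes:
  shared:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw"       # replace with your NFS/Gluster endpoint
      device: ":/export/shared"    # replace with your export path
```

Because the volume resolves to the same network export on every node, a rescheduled container sees the same files regardless of which node it lands on.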