Distributed File Systems: GridFS vs. GlusterFS vs. Ceph vs. HekaFS Benchmarks [closed]

I am currently searching for a good distributed file system.

It should:

  • be open-source
  • be horizontally scalable (replication and sharding)
  • have no single point of failure
  • have a relatively small footprint

Here are the four most promising candidates in my opinion:

  • GridFS (based on MongoDB)
  • GlusterFS
  • Ceph
  • HekaFS

The filesystem will be used mainly for media files (images and audio). File sizes range from very small to medium (1 KB - 10 MB), and the total number of files will be in the several millions.

Are there any benchmarks regarding performance, CPU load, memory consumption and scalability? What are your experiences using these or other distributed file systems?

asked Jul 02 '13 by Alp


4 Answers

I'm not sure your list is quite correct. It depends on what you mean by a file system.

If you mean a file system that is mountable in an operating system and usable by any application that reads and writes files using POSIX calls, then GridFS doesn't really qualify. It is just how MongoDB stores BSON-formatted objects: an object store rather than a file system.

There is a project to make GridFS mountable, but it is a little awkward because GridFS has no concept of hierarchical directories, although paths are allowed. Also, I'm not sure how well distributed writes would work through gridfs-fuse.
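
For concreteness, this is roughly what using GridFS through a driver looks like (PyMongo shown; the connection string, database name and file names are assumptions for illustration). Files go in and out through the driver API as chunked BSON documents rather than through POSIX calls:

    # A rough sketch of driving GridFS through PyMongo; the connection string,
    # database name and file names below are assumptions for illustration.
    from pymongo import MongoClient
    import gridfs

    client = MongoClient("mongodb://localhost:27017")  # assumed local MongoDB
    fs = gridfs.GridFS(client["media"])                # hypothetical database name

    # Store an image; "filename" is just metadata, there is no real directory tree.
    with open("cover.jpg", "rb") as f:
        file_id = fs.put(f, filename="albums/2013/cover.jpg")

    # Read it back through the driver API (by id, or by name via get_last_version).
    data = fs.get(file_id).read()
    print(len(data), "bytes read back from GridFS")

Anything that expects to open files with ordinary filesystem calls needs extra glue such as gridfs-fuse.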

GlusterFS and Ceph are comparable: both are distributed, replicated, mountable file systems. You can read a comparison between the two here (and a follow-up update of that comparison), although keep in mind that the benchmarks were done by someone who is a little biased. You can also watch this debate on the topic.
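
By contrast, once a GlusterFS volume (or CephFS) is mounted, existing applications use ordinary POSIX file I/O with no special client library. A minimal sketch, assuming a hypothetical mount point /mnt/gluster:

    # A minimal sketch: /mnt/gluster is an assumed mount point of a GlusterFS
    # volume (or CephFS); plain POSIX file I/O works once the volume is mounted.
    import os

    path = "/mnt/gluster/albums/2013/cover.jpg"

    os.makedirs(os.path.dirname(path), exist_ok=True)  # real hierarchical directories
    with open(path, "wb") as f:                        # ordinary open()/write()
        f.write(b"...image bytes...")

    with open(path, "rb") as f:
        print(len(f.read()), "bytes read back through the mounted file system")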

As for HekaFS, it is GlusterFS set up for cloud computing, adding encryption and multi-tenancy as well as an administrative UI.

answered by sockets-to-me


After working with Ceph for 11 months I came to the conclusion that it utterly sucks, so I suggest avoiding it. I tried XtreemFS, RozoFS and QuantcastFS but found them not good enough either.

I wholeheartedly recommend LizardFS, a fork of the now-proprietary MooseFS. LizardFS features data integrity, monitoring and superior performance with very few dependencies.


2019 update: the situation has changed; LizardFS is no longer actively maintained.
MooseFS is stronger than ever and free from most LizardFS bugs. MooseFS is well maintained and faster than LizardFS.

RozoFS has matured and may be worth a try.
GfarmFS has its niche, but today I would choose MooseFS for most applications.

answered by Onlyjob


OrangeFS, anyone?

I was looking for an HPC DFS and found this discussion: http://forums.gentoo.org/viewtopic-t-901744-start-0.html

Lots of good data and comparisons :)

After some discussion the OP decided on OrangeFS. Quoting:

"OrangeFS. It does not support quotas nor file locks (though all I/O operations are atomic, so consistency is kept without locks). But it works, and it works well and stably. Furthermore, it is not a general file-storage system but an HPC-dedicated one, targeted at parallel I/O including ROMIO support. All tests were done with stripe data distribution.

a) No quotas: to hell with quotas. I gave up on them anyway; even GlusterFS supports not the common uid/gid-based quotas but directory size limitations, which works more like LVM.

b) Multiple active metadata servers are supported and stable. Compared to dedicated metadata storage (a single node), this gives +50% performance on small files and no significant difference on large ones.

c) Excellent performance on large data chunks (dd bs=1M). It is limited by the sum of the local hard drive speeds (do not forget that each node participates as a data server as well) and the available network bandwidth. CPU consumption under such load is decent: about 50% of a single core on the client node and about 10% on each of the other data server nodes.

d) Fair performance on large sets of small files. For the test I untarred Linux kernel 3.1. It took 5 minutes over OrangeFS (with tuned parameters) versus almost 2 minutes over NFSv4 (tuned as well). CPU load is about 50% of a single core on the client (of course, it is actually distributed between cores) and a few percent on each node.

e) Support for the ROMIO MPI I/O API. This is a sweet treat for MPI-aware applications, which can use PVFS2/OrangeFS parallel input-output features directly.

f) No support for special files (sockets, FIFOs, block devices), so it can't safely be used as /home; I use NFSv4 for that, providing users a small quota-restricted home space. Though most distributed file systems don't support special files anyway."
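
Regarding point e), ROMIO parallel I/O lets each MPI rank read or write its own region of a shared file on the PVFS2/OrangeFS volume. Below is a hedged sketch using mpi4py; the mount point, file name and 1 MB block size are assumptions, not taken from the thread:

    # A hedged sketch of ROMIO-style parallel I/O with mpi4py; the OrangeFS
    # mount point, file name and 1 MB block size are assumptions.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    block = np.full(1024 * 1024, rank, dtype=np.uint8)     # 1 MB of data per rank
    fh = MPI.File.Open(comm, "/mnt/orangefs/stripe.dat",   # hypothetical mount point
                       MPI.MODE_WRONLY | MPI.MODE_CREATE)
    fh.Write_at_all(rank * block.nbytes, block)            # each rank writes its own offset
    fh.Close()

Launched with something like mpirun -n 4, each rank writes a disjoint block of the shared file, which matches the striped access pattern described above.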

answered by Raul Kist


I do not know about the other systems you posted, but I compared three PHP CMSes/frameworks on local storage vs. GlusterFS to see whether it does better in real-world tests than in raw benchmarks. Sadly, it does not.

http://blog.lavoie.sl/2013/12/glusterfs-performance-on-different-frameworks.html

answered by sebastien