
MongoDB vs CouchDB: write speeds for geographically remote clients

I would like all of my users to be able to read and write to the datastore very quickly. MongoDB seems to have blazing-fast reads, but writes could be very slow if the single master has to be located far away from the client. CouchDB seems to have slower reads, but what about its writes when the client is very far from the master? With CouchDB we can have multiple masters, meaning we can always have a write node close to the client. Could CouchDB actually be faster for writes than MongoDB when our user base is spread very far apart geographically?

I would love to use MongoDB for its blazing-fast speed, but some of my users, very far away from the only master, will have a horrible experience. For worldwide systems, wouldn't CouchDB be better? Isn't MongoDB completely ruled out when you have users all around the world? MongoDB, if you're listening: why don't you support a simple multi-master setup, where conflict resolution can be part of the update semantics? This seems to be the only thing standing between MongoDB and complete domination of the NoSQL market share. Everything else is very impressive.

SeekingNonblockingIo asked Nov 22 '10

1 Answer

Disclosure: I am a MongoDB fan and user; I have zero experience with CouchDB.

I have a heavy-duty app that is very read and write intensive; I'd say reads outnumber writes by a factor of around 30:1. The way Mongo is designed, reads are always going to be much faster than writes. The trick (in my experience) is to make your writes so efficient that you can dedicate a higher percentage of your system resources to them.

When building a product on top of Mongo, the key thing to remember is the _id field. This field is automatically generated and added to all of your JSON objects, and it will look something like 47cc67093475061e3d95369d. When you design your queries (finds), try to query on this field wherever possible. An ObjectId embeds a creation timestamp, a machine identifier, a process id, and a counter (not a disk location, as I first thought, though I should note it is indexed automatically), so finds and updates keyed on _id go straight through that unique index and will really speed things up. Consider this in the design of your system.
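To make the _id structure concrete, here is a small sketch that splits the example ObjectId above into its documented parts. The 12-byte layout used here is the original/legacy ObjectId spec (timestamp, machine, pid, counter; newer MongoDB versions replace the machine+pid bytes with a per-process random value), and `decode_object_id` is my own helper for illustration, not a MongoDB API:

```python
from datetime import datetime, timezone

def decode_object_id(oid_hex):
    """Split a 12-byte ObjectId hex string into its legacy-spec parts:
    4-byte big-endian timestamp, 3-byte machine id, 2-byte process id,
    and a 3-byte counter."""
    raw = bytes.fromhex(oid_hex)
    if len(raw) != 12:
        raise ValueError("ObjectId must be exactly 12 bytes")
    return {
        "timestamp": datetime.fromtimestamp(
            int.from_bytes(raw[0:4], "big"), tz=timezone.utc
        ),
        "machine": raw[4:7].hex(),
        "pid": int.from_bytes(raw[7:9], "big"),
        "counter": int.from_bytes(raw[9:12], "big"),
    }

parts = decode_object_id("47cc67093475061e3d95369d")
```

Decoding the example id from the answer shows a creation timestamp in early 2008, which is the kind of metadata the id carries; there is no physical disk address in it.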

Example:

Two of the collections in my database are "users" and "posts". A user can create multiple posts, and these two collections have to reference each other a lot in the implementation of my app.

In each post object I store the _id of the parent user; in each user object I store an array of the _ids of all the posts the user has authored.

Now on each user page I can generate the list of authored posts without a resource-stressful query, just direct lookups by _id. The bigger the Mongo cluster, the bigger the difference this is going to make.

If you're at all familiar with Oracle's physical-location rowids you may recognize the concept, only in Mongo it is (in my opinion) much more awesome and powerful.

I was scared last year when we decided to finally ditch MySQL for Mongo, but I can tell you the following about my experience:

- Data porting is always horrible, but it went as well as I could have imagined.
- Mongo is probably the best-documented NoSQL DB out there, and the open-source community is fantastic.
- When they say fast and scalable, they're not kidding; it flies.
- Schema design is very easy and, in my opinion, much more natural and orderly than key/value-type DBs.
- The whole system seems designed for minimal user complexity; adding nodes etc. is a breeze.

OK, seriously, I swear Mongo didn't pay me to write this (I wish), but apologies for the love fest.

Whatever your choice, best of luck.

Eamonn answered Oct 19 '22