 

Choosing MongoDb/CouchDb/RavenDb - performance and scalability advice [closed]

We are looking for a document database storage solution with failover clustering for a read/write-intensive application.

We expect an average of 40K concurrent writes per second to the database (peaks may go up to 70,000), and roughly the same number of reads.

We also need a mechanism for the database to notify us about newly written records (some kind of trigger at the database level).

What would be a good choice of document database, and how should we approach the related capacity planning?

Updated

More details on the expected load:

  • On average, we expect 40,000 (40K) inserts (new documents) per second across 3-4 databases/document collections.
  • Peaks may go up to 120,000 (120K) inserts per second.
  • Inserted documents should be readable right away, almost in real time.
  • Along with this, we expect around 5,000 updates or deletes per second.
  • We also expect 500-600 concurrent queries accessing the data. These queries and their execution plans are largely known, though they may need to be revised, say, once a week or so.
  • The system should support failover clustering on the storage side.
asked Mar 10 '11 by amazedsaint



3 Answers

if "20,000 concurrent writes" means inserts then I would go for CouchDB and use "_changes" api for triggers. But with 20.000 writes you would need a stable sharding aswell. Then you would better take a look at bigcouch

And if "20.000" concurrent writes consist "mostly" updates I would go for MongoDB for sure, since Its "update in place" is pretty awesome. But then you should handle triggers manually, but using another collection to update in place a general document can be a handy solution. Again be careful about sharding.

Finally, I think you cannot select a database on concurrency numbers alone; you need to plan the API (how you will retrieve the data) and then look at the options at hand.

answered Oct 23 '22 by frail


I would recommend MongoDB. My requirements weren't nearly as high as yours, but they were reasonably close. Assuming you'll be using C#, I recommend the official MongoDB C# driver and the InsertBatch method with SafeMode turned on. It will literally write data as fast as your file system can handle. A few caveats:

  1. MongoDB does not support triggers (at least the last time I checked).
  2. MongoDB initially holds data in RAM before syncing it to disk. If you need real-time durability, you may want to lower the fsync interval, but this comes with a significant performance hit.
  3. The C# driver is a little wonky. I don't know if it's just me, but I get odd errors whenever I try to run any long-running operations with it. The C++ driver is much better and actually faster than the C# driver (or any other driver, for that matter).
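The answer above refers to the C# driver's InsertBatch with SafeMode; a rough pymongo equivalent, offered only as an assumption of equivalence, is insert_many under an acknowledged write concern. Host, database, and collection names here are hypothetical.

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")
# w=1 waits for the primary to acknowledge each batch (similar in spirit to
# SafeMode); j=True additionally waits for the journal, trading speed for
# durability, as described in caveat 2.
coll = client["mydb"].get_collection(
    "documents", write_concern=WriteConcern(w=1, j=True)
)

batch = [{"seq": i, "payload": "..."} for i in range(10_000)]
result = coll.insert_many(batch, ordered=False)  # unordered is usually faster
print("inserted:", len(result.inserted_ids))
```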

That being said, I'd also recommend looking into RavenDB. It supports everything you're looking for, but for the life of me, I couldn't get it to perform anywhere close to Mongo.

The only other database that came close to MongoDB was Riak. Its default Bitcask backend is ridiculously fast as long as you have enough memory to store the keyspace, but as I recall, it doesn't support triggers.

answered Oct 23 '22 by Rahul Ravindran


Membase (and the soon-to-be-released Couchbase Server) will easily handle your needs and provides dynamic scalability (adding or removing nodes on the fly) and replication with failover. The memcached caching layer on top will easily handle 200K ops/sec, and you can scale out linearly with multiple nodes to support persisting the data to disk.
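Since Membase speaks the memcached protocol, existing memcached clients work against it. A minimal sketch, assuming the third-party "python-memcached" package and a node listening on the default port (host and key names are hypothetical):

```python
import memcache

# Connect to a Membase/Couchbase node via the memcached protocol.
mc = memcache.Client(["127.0.0.1:11211"])

# Writes land in the managed cache layer and are persisted and replicated
# asynchronously by the cluster.
mc.set("doc:42", '{"status": "new"}')
print(mc.get("doc:42"))
```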

We've got some recent benchmarks showing extremely low latency (which roughly equates to high throughput): http://10gigabitethernet.typepad.com/network_stack/2011/09/couchbase-goes-faster-with-openonload.html

I don't know how important it is for you to have a supported enterprise-class product with engineering and QA resources behind it, but that's available too.

Edit: Forgot to mention that there is already a built-in trigger interface, and we're extending it further to track when data hits disk (is persisted) or is replicated.

Perry

answered Oct 23 '22 by Perry krug