I recently started trying out some NoSQL prototypes for a customer. They have a real-time application that does lots of inserts but far fewer reads (they are currently using MySQL and would like to try some NoSQL solutions).
Over the weekend I tried Cassandra 2.0, MongoDB 2.4.9 and Redis, compared against a normal MySQL 5.5 DB. All of them run on my Windows laptop (Intel Core i3, 2.30 GHz, 8 GB RAM), so no high-end fancy machines.
The table structure is a simple one, shown below. Though it's the MySQL DESC output, Cassandra has the same structure, and in MongoDB it's stored as JSON/BSON but with the same fields and indexes. It has two indexes (oneway_id & twoway_id) in all three of those DBs.
Structure (for all four DBs)
+--------------+---------------------+
| Field | Type |
+--------------+---------------------+
| tmstamp | bigint(20) unsigned |
| field_1 | bigint(20) unsigned |
| field_2 | varchar(64) |
| field_3 | varchar(64) |
| field_4 | tinyint(3) unsigned |
| field_5 | bigint(20) unsigned |
| field_6 | varchar(25) |
| field_7 | varchar(15) |
| field_8 | varchar(15) |
| field_9 | varchar(15) |
+--------------+---------------------+
DB/Environment details
I created simple Java test cases, and these are the results I got (the numbers are not exactly the same on every run, but the latencies follow the same pattern):
[Benchmark results for 100,000, 500,000 and 1,000,000 records]
My question is: why does Cassandra take this long for inserts into such a small and simple table?
In Cassandra I tried both inline looped inserts and batch inserts. The funny thing is that the batch inserts took more time. The document I followed for batch inserts is:
http://www.datastax.com/dev/blog/client-side-improvements-in-cassandra-2-0
I don't want to use executeAsync, because it doesn't give me the exact insert time.
The batch insert I used is something like this (it takes far longer than the normal inserts):
PreparedStatement ps = session.prepare("INSERT INTO some_table (val_1, val_2, val_3, val_4) VALUES (?, ?, ?, ?)");
BatchStatement batch = new BatchStatement(); // defaults to a LOGGED batch
// for loop start
batch.add(ps.bind(uid, mid1, title1, body1));
// for loop end
session.execute(batch);
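One likely reason the batch was slower: with the 2.0 Java driver, new BatchStatement() defaults to a LOGGED batch, which writes every statement to a batchlog for atomicity before applying it, and a large batch funnels all of its work through a single coordinator node. A minimal sketch of the unlogged variant, assuming the same session and prepared statement as above:

BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED); // skip the batchlog write
// for loop start
batch.add(ps.bind(uid, mid1, title1, body1));
// for loop end
session.execute(batch);

Even unlogged, batches are intended for atomic updates within a partition, not as a bulk-load throughput tool, so they should be kept small.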
The inline loop insert I used is something like this:
// for loop start
String sqlInsert = "INSERT INTO some_table (val_1, val_2, val_3, val_4) VALUES ("
        + uid + ", " + mid1 + ", '" + title1 + "', '" + body1 + "')"; // string values need single quotes in CQL
session.execute(sqlInsert);
// for loop end
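(Note that building the CQL by string concatenation also forces the server to parse every statement from scratch; reusing the prepared statement from the batch example, i.e. session.execute(ps.bind(uid, mid1, title1, body1)) inside the loop, avoids that per-statement parsing overhead.)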
Now, why is Cassandra slower than MySQL, and more importantly, why is MongoDB so much faster than Cassandra? I seriously hope I am doing something wrong.
Is there a way to insert JSON/BSON objects directly into Cassandra the way MongoDB does? I guess that might make it faster? Can some experts please help me with this? If there are no answers, I'll conclude that MongoDB is better than Cassandra!
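For what it's worth: Cassandra 2.2 and later added a CQL JSON insert syntax, but it is not available in the 2.0 you are testing, and since it goes through the same write path it should not be expected to be faster. A sketch, with made-up values:

// Requires Cassandra 2.2+ (not the 2.0 used in this test); values are hypothetical
session.execute("INSERT INTO some_table JSON '{\"val_1\": 1, \"val_2\": 2, \"val_3\": \"a\", \"val_4\": \"b\"}'");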
Because Redis stores its data in memory, its transactional response times are much faster than Cassandra's, which persists data to disk with traditional read-write operations (though Cassandra is still much quicker than a conventional RDBMS).
Read performance is highly efficient in Cassandra, since key lookups take O(1) time; reads in MongoDB are not as fast by comparison. Cassandra has only cursory support for secondary indexes, i.e. secondary indexing is restricted.
Since Cassandra has multi-primary node support, its architecture lets it handle many simultaneous writes across more than one node. That makes it more write-performant than MongoDB, which is limited to one writable primary node per replica set; secondary servers can only be used for reads.
First, writing data is very fast in Cassandra, because its design does not require performing disk reads or seeks. The memtables and SSTables save Cassandra from having to perform these operations on writes, which slow down many databases. All writes in Cassandra are append-only.
Your code is using serial inserts: each insert must wait for the previous one to complete and return an acknowledgement before the next can begin. This is a bad way to benchmark any database that can handle multiple incoming connections. If you really don't want to use executeAsync (the correct approach, sketched below), you should write a multi-threaded stress program so that the inserts are not blocking on the client side and you are truly limited by the Cassandra node. Basically, what you are seeing is the speed at which your client program can run, rather than the capability of the database.
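Here is a minimal sketch of the executeAsync approach, assuming the same session and table as in the question (recordCount and the bound values are placeholders): fire the inserts concurrently, wait for all the acknowledgements, and time the run as a whole instead of timing each insert.

import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import java.util.ArrayList;
import java.util.List;

PreparedStatement ps = session.prepare("INSERT INTO some_table (val_1, val_2, val_3, val_4) VALUES (?, ?, ?, ?)");
List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
long start = System.nanoTime();
for (int i = 0; i < recordCount; i++) {
    futures.add(session.executeAsync(ps.bind(uid, mid1, title1, body1))); // returns immediately, insert runs in the background
}
for (ResultSetFuture f : futures) {
    f.getUninterruptibly(); // block until this insert is acknowledged
}
long elapsedMs = (System.nanoTime() - start) / 1000000L; // total wall-clock time for all inserts

In a real test you would also cap the number of in-flight requests (e.g. with a semaphore) so the client does not queue unbounded work.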
Blog post of interest:
http://www.datastax.com/dev/blog/how-not-to-benchmark-cassandra
There are only two principles to doing load generation right:
1. Feed Cassandra enough work
2. Generate the workload on separate machines

That's it! But it's frequently done wrong, from the extreme case of a single-threaded client running on the same laptop as Cassandra, to more subtle problems with the Python Global Interpreter Lock. It seems that, like binary search, it's surprisingly difficult to build a good load generator. If possible, avoid the temptation of rolling your own and use something battle-tested.
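One battle-tested option that ships with Cassandra itself is the cassandra-stress tool (under tools/bin in the 2.0 distribution); its options vary by version, so check the documentation for yours.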