MongoDB sharded cluster 25x slower than standalone node

I'm confused by the situation and have been trying to fix this for a couple of days now. I'm running 3 shards on top of three 3-member replica sets (rs0, rs1 and rs2). All is working so far: data is distributed over the 3 shards as well as replicated within the replica sets.
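
For what it's worth, the distribution can be verified from the mongos shell roughly like this (database and collection names are placeholders):

mongos> sh.status()                                              // chunk ranges per shard (rs0, rs1, rs2)
mongos> db.getSiblingDB("mydb").mycoll.getShardDistribution()    // per-shard document counts and sizes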

BUT: importing data into one of the replica sets works fine at a constant 40k docs/s, whereas enabling sharding slows the entire process down to just 1.5k docs/s.

I've populated the data via different methods:

  • generated some random data in the mongo shell (connected to my mongos)
  • JSON data import via mongoimport
  • MongoDB dump restore from another server via mongorestore

All of them result in just 1.5k docs/s, which is disappointing. The mongod's are physical Xeon boxes with 32 GB RAM each, the 3 config servers are virtual servers (40 GB HDD, 2 GB RAM, if that matters), and the mongos is running on my app server. By the way, the value of 1.5k inserts/s doesn't depend on the shard key: the behaviour is the same for a dedicated shard key (single-field as well as compound key) and for a hashed shard key on the _id field.
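
For reference, the hashed variant was set up roughly like this (database and collection names are placeholders):

mongos> sh.enableSharding("mydb")
mongos> sh.shardCollection("mydb.mycoll", { _id: "hashed" })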

I tried a lot, even reinstalled the entire cluster twice. The question is: what is the bottleneck in this setup?

  • config servers running on virtual servers? -> shouldn't be problematic due to the low resource consumption of config servers
  • the mongos? -> running multiple mongos instances on a dedicated box behind HAProxy might be an alternative, but I haven't tested that yet

asked Feb 04 '14 by ctp



2 Answers

Let's do the math first: how big are your documents? Keep in mind that they have to be transferred over the net multiple times depending on your write concern.
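
A quick way to get those numbers from the mongos shell (collection name is a placeholder):

mongos> Object.bsonsize(db.mycoll.findOne())    // BSON size of one sample document, in bytes
mongos> db.mycoll.stats().avgObjSize            // average document size across the collection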

Maybe you are experiencing this because of the indices which have to be built.

Please try this:

  1. Disable all indices except the one on _id (which can't be dropped anyway, iirc)
  2. Load your data
  3. Re-enable the indices
  4. Enable sharding and balancing if not done already (a rough sketch of the whole workflow follows below)
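
A minimal sketch of that workflow in the mongo shell, assuming a database mydb, a collection mycoll and a single secondary index on field1 (all names are placeholders):

// 1. note the existing index definitions, then drop all secondary indexes
db.mycoll.getIndexes()
db.mycoll.dropIndexes()                           // keeps the _id index

// 2. load the data (mongoimport, mongorestore, custom loader, ...)

// 3. rebuild the secondary indexes
db.mycoll.createIndex({ field1: 1 })

// 4. enable sharding and balancing
sh.enableSharding("mydb")
sh.shardCollection("mydb.mycoll", { field1: 1 })  // or { _id: "hashed" }
sh.startBalancer()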

This is the suggested way of importing data into a sharded cluster anyway and should speed up your import considerably. Some (cautious!) fiddling with storage.syncPeriodSecs and storage.journal.commitIntervalMs might help, too.
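
If you want to experiment with those without restarting, the corresponding runtime parameters can be set on each mongod directly (not through the mongos); the values below are examples only, not recommendations:

// connect to each mongod and run:
db.adminCommand({ setParameter: 1, syncdelay: 60 })              // counterpart of storage.syncPeriodSecs
db.adminCommand({ setParameter: 1, journalCommitInterval: 200 }) // counterpart of storage.journal.commitIntervalMs, in ms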

The delay can occur even when storing the data on the primary shard. Depending on the size of your indices, they may slow down bulk operations considerably. You might also want to have a look at the replication.secondaryIndexPrefetch config option.

Another thing might be that your oplog simply gets filled faster than the replication can take place. Problem here: once it is created, you can not increase its size. I am not sure whether it is safe to delete and recreate it in standalone mode and then rejoin the replica set, but I doubt it.

So the safe option would be to have the instance actually leave the replica set, reinstall it with a more appropriate oplog size, and add it to the replica set as if it were the first time. If you don't care about the data, simply shut the replica set down, adjust the oplog size in the config file, delete the data dir, then restart and reinitialize the replica set. Thinking about your problem twice, this sounds like the best bet to me, since the oplog isn't involved in standalone mode, iirc.
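
To check whether the oplog is in fact the limiting factor, look at its size and the replication window directly on a replica-set member (against the mongod, not the mongos):

rs0:PRIMARY> rs.printReplicationInfo()          // configured oplog size and the time span it covers
rs0:PRIMARY> rs.printSlaveReplicationInfo()     // how far the secondaries are behind the primary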

If you still have the same performance issues, my bet is on problems with disk or network IO.

You have a fairly standard setup in which your mongos instance runs on a different machine than your mongod (be it a standalone or the primary of a replica set). You might want to check a few things:

  1. Name resolution latency for resolving the names of your primary and secondary shards from the machine running your mongos instance. I cannot count the times installing nscd has improved performance for various operations.
  2. Network latency from your mongos instance to your primary shard (see the sketch after this list). Assuming you have a firewall between your app server and your cluster, you might want to talk to the respective administrator.
  3. In case you are using external authentication, try to measure how long it takes.
  4. When using some sort of tunneling (e.g. stunnel) or encryption like SSL/TLS, make sure you disable name resolution. Please keep in mind that encrypting and decrypting may take a relatively long time.
  5. Measure random disk IO on the mongod instances.
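
A crude way to get a feel for points 1 and 2 (name resolution plus network latency): open a mongo shell on the machine running the mongos, connect it directly to the primary of your primary shard (host and port are placeholders) and time a batch of pings:

// mongo rs0-member1.example.com:27018
var n = 1000, start = Date.now();
for (var i = 0; i < n; i++) { db.adminCommand({ ping: 1 }); }
print("avg round trip: " + (Date.now() - start) / n + " ms");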

answered Oct 13 '22 by Markus W Mahlberg


I was facing a similar performance issue. What helped to solve it was setting the mongod instance that was running on the same host as the mongos as the primary shard for the database,

using the following command:

mongos> use admin
mongos> db.runCommand( { movePrimary: "mydb", to: "shard0003" } )  

After making this change (without touching the load balancer or tweaking anything else), I was able to load a relatively large dataset (25 million rows) using a loader I had written, and the entire procedure took about 15 minutes instead of hours or days.
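
You can verify which shard holds the primary for the database before and after the move (database name as in the command above):

mongos> db.getSiblingDB("config").databases.find({ _id: "mydb" })   // the "primary" field names the shard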

answered Oct 13 '22 by user3892260