
MongoDB Cursor Timeouts while doing a lot of writes

Tags:

mongodb

We have a cluster of two replica sets, with three servers per set, and a single sharded collection. We also use quite a few more (8+) collections on a daily basis, though the majority of the data lives in the sharded collection, which holds close to 100 million records.

Recently we added a requirement to ingest 100x the data we had been getting previously, and we need to write all of it to MongoDB. A daemon has been put in place to perform the writes necessary to keep the database up to date. The script sustains over 200 writes a second, with the majority going to the separate, non-sharded collections.
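If the daemon issues one round trip per document, batching the writes can cut overhead considerably. A minimal sketch of that idea, runnable without a server (with pymongo, the `sink` callable would be something like `collection.insert_many` — that mapping, and the batch size, are assumptions, not details from the question):

```python
class WriteBuffer:
    """Buffer documents and flush them in batches to cut per-write overhead.

    `sink` stands in for the real destination; with pymongo you might pass
    collection.insert_many (an assumption about the setup, not a given).
    """

    def __init__(self, sink, batch_size=500):
        self.sink = sink
        self.batch_size = batch_size
        self.pending = []

    def write(self, doc):
        self.pending.append(doc)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.sink(self.pending)  # one round trip for the whole batch
            self.pending = []


# Usage with an in-memory sink so the sketch runs standalone:
batches = []
buf = WriteBuffer(batches.append, batch_size=3)
for i in range(7):
    buf.write({"_id": i})
buf.flush()
# batches now holds three batches of 3, 3, and 1 documents
```

Fewer, larger writes also shorten the windows in which a long-running read cursor has to compete for the write lock.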

With this quantity of writes, we have been unable to perform large reads for analytical purposes; we receive a combination of client-side cursor timeouts and server-side "Cursor Not Found" errors.

We have attempted limit/skip schemes on the reads, but the problem persists. What is the best course of action, given that we require both a large volume of writes and a few, but large, reads?
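One alternative to limit/skip worth noting: skip still has to walk past the skipped documents, so each page gets slower, and a cursor held open across pages can still time out. Range-based pagination on an indexed field (typically `_id`) keeps every page a short, fresh query. A sketch of the pattern, simulated against a plain list so it runs without a server (the pymongo form shown in the comment is the standard API, but the field choice and page size are assumptions):

```python
def fetch_page(docs, last_id, page_size):
    """Return the next page of docs with _id > last_id, sorted by _id."""
    return sorted((d for d in docs if d["_id"] > last_id),
                  key=lambda d: d["_id"])[:page_size]


def read_all(docs, page_size=100):
    """Drain the collection page by page; each page is its own short query."""
    last_id = -1  # sentinel below any real _id in this sketch
    out = []
    while True:
        page = fetch_page(docs, last_id, page_size)
        if not page:
            break
        out.extend(page)
        last_id = page[-1]["_id"]
        # With pymongo, this loop body would be roughly:
        #   coll.find({"_id": {"$gt": last_id}}).sort("_id").limit(page_size)
    return out
```

Depending on driver version, pymongo's `find()` also accepts a flag to disable the server-side idle timeout (`no_cursor_timeout=True` in current releases), but disabling the timeout treats the symptom; short range queries avoid holding a cursor open at all.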

Bryan asked May 09 '11 18:05



1 Answer

Typically, in a case like this you want to start by looking at the queries causing the timeouts. Then you want to look at the hardware to see what's being stressed.

  1. Are these queries correctly indexed?
  2. How big are the indexes? Do they fit in RAM?
  3. Can you provide some details on where the bottlenecks are?
  4. Are you bottlenecked on I/O?
  5. Are your processors running at full speed?

Also, is there anything unusual in the logs?

Basically, we need to ensure that you have:

  1. Correctly built the system to handle the queries
  2. Correctly provisioned the system to handle the data volumes

Gates VP answered Nov 15 '22 05:11
