Perhaps I have misconfigured MongoDB somehow, but even under heavy load I don't see it using more than one core. For example, top is currently showing:
Tasks: 145 total,   1 running, 144 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :100.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  41182768k total, 40987476k used,   195292k free,   109956k buffers
Swap:  2097144k total,  1740288k used,   356856k free, 28437928k cached

  PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
16297 mongod   15  0  521g  18g  18g S 99.8 47.2  2929:32 mongod
    1 root     15  0 10372  256  224 S  0.0  0.0   0:03.39 init
Is there something I can do to get Mongo to use the other cores more effectively? If it's relevant, I currently have a big M/R running which seems to have put a lot of read queries in "waiting" mode.
Use Multiple CPU Cores

MongoDB's WiredTiger storage engine architecture is capable of efficiently using multiple CPU cores. Typically, a single client connection is represented by its own thread.
The maximum size of an individual document in MongoDB is 16 MB, with a nesting depth of 100 levels. Edit: there is no maximum size for an individual MongoDB database.
The MongoDB server currently uses a thread per connection, plus a number of internal threads. You can list all threads (including idle and system threads) using db.currentOp(true) in the mongo shell. If you have 8 incoming requests, each of those will be handled by a separate connection thread.
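As a minimal sketch of what that looks like, the snippet below filters connection threads out of a db.currentOp(true)-shaped result. The inprog array here is mocked sample data, not output from a live server; on a real deployment you would call db.currentOp(true) in the shell instead.

```javascript
// Mocked shape of a db.currentOp(true) result; sample data only.
const currentOp = {
  inprog: [
    { desc: "conn12", active: true,  op: "query" },
    { desc: "conn13", active: false, op: "none"  },
    { desc: "rsSync", active: true,  op: "none"  },
  ],
};

// Each client connection appears as a "connN" thread; internal threads
// (replication sync, TTL monitor, etc.) carry other descriptions.
const connectionThreads = currentOp.inprog.filter(
  (t) => /^conn\d+$/.test(t.desc)
);

console.log(connectionThreads.length); // 2 with the sample above
```

With 8 concurrent client requests you would see 8 such "connN" entries, each backed by its own server-side thread.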
A single-core processor is a microprocessor with one core on its die; it can execute only one instruction stream (one hardware thread) at a time. A computer with a single-core CPU is generally slower than a comparable multi-core system on parallel workloads.
MongoDB can saturate all cores of a multi-core machine for read operations, but for write operations and map-reduce it can only utilize a single core per mongod process.
The single-core limitation for MapReduce is due to the JavaScript interpreter that MongoDB uses. This is expected to be fixed in a future release; in the interim, you can use Hadoop to execute the MapReduce and store the result set in your MongoDB database.
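To make the bottleneck concrete: MongoDB evaluates your map and reduce functions inside one JavaScript interpreter, so the whole job runs roughly like the serial plain-JS sketch below. This is a word-count illustration of the execution model, not MongoDB's actual implementation; the emit/group plumbing is simplified.

```javascript
// Sample documents standing in for a collection.
const docs = [
  { text: "mongo uses one core" },
  { text: "one core for map reduce" },
];

// emit(key, value) collects intermediate pairs, as in MongoDB's mapReduce.
const emitted = [];
const emit = (key, value) => emitted.push([key, value]);

// map: emit (word, 1) for every word in the document.
const map = (doc) => doc.text.split(" ").forEach((w) => emit(w, 1));

// reduce: sum the counts emitted for one key.
const reduce = (key, values) => values.reduce((a, b) => a + b, 0);

// The interpreter walks the documents one at a time, on a single thread.
docs.forEach(map);

// Group intermediate values by key, then reduce each group.
const grouped = {};
for (const [k, v] of emitted) (grouped[k] = grouped[k] || []).push(v);

const result = Object.fromEntries(
  Object.entries(grouped).map(([k, vs]) => [k, reduce(k, vs)])
);

console.log(result); // e.g. { one: 2, core: 2, mongo: 1, ... }
```

Because every map and reduce call passes through that one interpreter, adding cores does not speed the job up; Hadoop sidesteps this by running many such workers in parallel.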
Another option, which has seen mixed results, is to run a separate mongod process for every core on the instance. This will not increase performance for a single database unless the processes are configured to run in a sharded setup.
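If you do go the one-mongod-per-core route, the processes only cooperate once joined into a sharded cluster. A minimal sketch of the wiring, run from a mongo shell connected to a mongos router (the host:port addresses, database name, and collection name below are placeholders, not values from the question):

```javascript
// Register each per-core mongod instance as a shard
// (in production these would typically be replica sets).
sh.addShard("localhost:27018");
sh.addShard("localhost:27019");

// Enable sharding for the database, then shard the collection on a key.
sh.enableSharding("mydb");
sh.shardCollection("mydb.mycoll", { _id: "hashed" });
```

Without this, each mongod simply serves its own independent databases, and a single database still lives on (and is limited to) one process.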