I have been trying to benchmark my MongoDB servers lately, and I suspect they are somewhat overloaded. This is the result of serverStatus():
SECONDARY> db.serverStatus().mem
{
"bits" : 64,
"resident" : 26197,
"virtual" : 161106,
"supported" : true,
"mapped" : 79994,
"mappedWithJournal" : 159988
}
So if I understood correctly, MongoDB is using 26 GB of memory. If my server has 32 GB and it is only running MongoDB, would getting a new server and sharding my data be a good idea?
Because of the way MongoDB caching works, it will end up using whatever memory is available. Performance drops significantly once the resident portion bumps up against total memory, but how soon that hurts depends on your data access patterns. It is usually fine if not everything is in memory all the time, but you want enough room for your working set. Note that the mem values are reported in MB, so you have roughly 26 GB resident against about 78 GB of mapped data; whether that is a problem depends on how much of that data is actively touched. See Working Set Size and serverStatus().mem for general advice and details.
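As a rough check, here is a minimal mongo shell sketch (assuming an MMAPv1-era mongod that reports the mem fields shown above) that prints the same figures in GB along with the page fault counter, which is a coarse indicator of whether the working set still fits in RAM:

// Run in the mongo shell; serverStatus().mem values are in MB.
var mem = db.serverStatus().mem;
print("resident (GB): " + (mem.resident / 1024).toFixed(1));
print("mapped   (GB): " + (mem.mapped / 1024).toFixed(1));
// A steadily climbing page fault count while resident memory sits
// near total RAM suggests the working set no longer fits in memory.
print("page faults:   " + db.serverStatus().extra_info.page_faults);

If the page fault count keeps climbing under your normal load, adding RAM or sharding is worth considering; if it stays low, the current server may still be comfortable.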