I have run the following commands:
db.collection.stats()
db.stats()
{
"collections" : 17,
"objects" : 487747,
"avgObjSize" : 1924.9327048654322,
"dataSize" : 938880152,
"storageSize" : 1159314432,
"numExtents" : 82,
"indexes" : 32,
"indexSize" : 153812992,
"fileSize" : 8519680000,
"ok" : 1
}
From the net I found this statement:
Indexes should always be in memory (which is nothing but RAM).
The index size is 153812992 bytes
and the data size is 938880152 bytes.
Could you please tell me how much RAM my MongoDB server requires so that performance stays great?
As per the application design, nearly 100k insertions/updates may happen daily. One more question: will this index size grow each day?
In that case, how can I determine the best-fit RAM size for my application?
Please advise; thanks in advance.
MongoDB requires approximately 1 GB of RAM per 100,000 assets. If the system has to start swapping memory to disk, this will have a severely negative impact on performance and should be avoided.
MongoDB, in its default configuration, will use the larger of either 256 MB or ½ of (RAM − 1 GB) for its cache size.
Probably the quickest and easiest way to check the size of a MongoDB collection is to use the db.collection.dataSize() method. This method returns the size of the collection in bytes.
MongoDB will by default allocate 50% of (RAM − 1 GB), so in this example (a server with 128 GB of RAM) we have 63.5 GB of RAM for MongoDB. 63.5 GB minus 23.5 GB for the indexes leaves 40 GB remaining for documents.
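The arithmetic above can be sketched as follows. The 128 GB total is an assumption chosen to reproduce the 63.5 GB figure; the 256 MB floor comes from the documented default formula.

```python
# Sketch of MongoDB's default WiredTiger cache sizing:
# max(50% of (RAM - 1 GB), 256 MB).
# The 128 GB total RAM here is an assumption made to match the
# 63.5 GB example figure in the answer above.

def default_wiredtiger_cache_gb(total_ram_gb: float) -> float:
    """Return the default WiredTiger internal cache size in GB."""
    return max(0.5 * (total_ram_gb - 1), 0.256)

cache_gb = default_wiredtiger_cache_gb(128)        # 0.5 * (128 - 1) = 63.5
remaining_for_documents = cache_gb - 23.5          # after 23.5 GB of indexes
print(cache_gb, remaining_for_documents)           # 63.5 40.0
```

On very small machines the 256 MB floor wins, which is why the second branch of the `max` exists.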
There is a tool in later versions of MongoDB to help find out how big your working set is. It is still quite experimental, but it should work: http://docs.mongodb.org/manual/reference/command/serverStatus/#serverStatus.workingSet
The best way to use it is simply to write an automated test script that exercises your application while periodically printing serverStatus
and archiving the value of the workingSet document. You can graph it and come to a reasonable conclusion about what your RAM needs to be.
Some things have changed over the years in MongoDB. Today MongoDB says:
Changed in version 3.0: serverStatus no longer outputs workingSet.
TL;DR
If the MMAPv1 storage engine is used, the concern about the working set is still present:
https://docs.mongodb.com/manual/faq/diagnostics/#must-my-working-set-size-fit-ram
If the WiredTiger storage engine is used, there is no need to think about the working set:
https://docs.mongodb.com/manual/faq/diagnostics/#memory-diagnostics-for-the-wiredtiger-storage-engine
Memory Diagnostics for the WiredTiger Storage Engine
Must my working set size fit RAM?
No.
How do I calculate how much RAM I need for my application?
With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.
Changed in version 3.2: Starting in MongoDB 3.2, the WiredTiger internal cache, by default, will use the larger of either:
60% of RAM minus 1 GB, or 1 GB.
Update
Starting in 3.4, the WiredTiger internal cache, by default, will use the larger of either:
50% of (RAM − 1 GB), or 256 MB. For example, on a system with a total of 4 GB of RAM the WiredTiger cache will use 1.5 GB of RAM (0.5 × (4 GB − 1 GB) = 1.5 GB). Conversely, a system with a total of 1.25 GB of RAM will allocate 256 MB to the WiredTiger cache, because that is more than half of the total RAM minus one gigabyte (0.5 × (1.25 GB − 1 GB) = 128 MB < 256 MB).
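A minimal sketch of the 3.4+ default, checked against both worked examples from the documentation excerpt above (working in MB to keep the numbers exact):

```python
# Default WiredTiger internal cache since MongoDB 3.4:
# the larger of 50% of (RAM - 1 GB) or 256 MB.

def wiredtiger_cache_mb(total_ram_mb: float) -> float:
    """Default WiredTiger internal cache size in MB."""
    return max(0.5 * (total_ram_mb - 1024), 256)

# 4 GB system: 0.5 * (4096 - 1024) = 1536 MB, i.e. 1.5 GB.
assert wiredtiger_cache_mb(4 * 1024) == 1536

# 1.25 GB system: 0.5 * (1280 - 1024) = 128 MB < 256 MB, so 256 MB wins.
assert wiredtiger_cache_mb(1.25 * 1024) == 256
```

The cache size can also be set explicitly with the `storage.wiredTiger.engineConfig.cacheSizeGB` setting if the default is not appropriate.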
Your working set should stay in memory to achieve good performance. Otherwise many random disk I/Os will occur.
The working set for a MongoDB database is the portion of your data that clients access most often. You can estimate the size of the working set using
db.runCommand( { serverStatus: 1, workingSet: 1 } )
At the OS level, look at the number or rate of page faults and other MMS gauges to detect when you need more RAM.
If page faults are infrequent, your working set fits in RAM. If fault rates rise, you risk performance degradation.
One area to watch specifically in managing the size of your working set is index access patterns. If you are inserting into indexes at random locations (as would happen with id’s that are randomly generated by hashes), you will continually be updating the whole index. If instead you are able to create your id’s in approximately ascending order (for example, day concatenated with a random id), all the updates will occur at the right side of the b-tree and the working set size for index pages will be much smaller.
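The id pattern described above (a day prefix concatenated with a random suffix) can be sketched like this; the function name and format are illustrative, not a MongoDB API:

```python
# Sketch of the "approximately ascending" id pattern: a fixed-width day
# prefix makes ids from later days sort after ids from earlier days, so
# index inserts land on the right edge of the b-tree rather than at
# random locations. make_id and its format are hypothetical.

import datetime
import secrets

def make_id(day: datetime.date) -> str:
    """Fixed-width day prefix (sortable) + random hex suffix (uniqueness)."""
    return day.strftime("%Y%m%d") + "-" + secrets.token_hex(8)

ids = [make_id(datetime.date(2023, 1, d)) for d in (1, 2, 3)]

# Across days the ordering is guaranteed by the prefix, regardless of
# the random suffixes.
assert ids == sorted(ids)
```

Within a single day the suffixes are still random, but the hot portion of the index shrinks from the whole tree to roughly one day's worth of keys.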
Source: MongoDb FAQ