Will it get slower? Will find only work on data that fits into RAM? What will happen if MongoDB's indexes are larger than RAM?
MongoDB requires approximately 1 GB of RAM per 100,000 assets. If the system has to start swapping memory to disk, this will have a severely negative impact on performance and should be avoided.
MongoDB is not an in-memory database, although it can be configured to run that way. It does, however, make liberal use of the cache, meaning data records are kept in memory for fast retrieval, as opposed to on disk.
Indexes are stored in memory, are kept in sorted order, and prevent queries from having to scan every document in a collection when querying an indexed field.
You can inspect mem.mapped (in the serverStatus output) to check the amount of mapped memory that mongod is using. If this value is greater than the amount of system memory, some operations will incur page faults to read data from disk.
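The check is just a comparison of two numbers. A minimal sketch, assuming the values have already been read from db.serverStatus().mem on an MMAPv1 mongod (where mapped memory is reported in megabytes); the function name and the sample figures are hypothetical:

```python
def expects_page_faults(mapped_mb: int, system_ram_mb: int) -> bool:
    """Return True if mapped memory exceeds system RAM, meaning some
    reads will have to page data in from disk."""
    return mapped_mb > system_ram_mb

# Hypothetical numbers: 80 GB mapped on a 64 GB box.
print(expects_page_faults(80 * 1024, 64 * 1024))  # True: expect page faults
```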
EDIT: THIS ANSWER NO LONGER APPLIES (MongoDB has a new storage engine that does not operate this way), the answer is very old and the mmapv1 storage engine is deprecated.
About Mongo
MongoDB uses memory mapped files.
This means that the operating system essentially controls what is paged in and out of memory (to and from disk).
The Rules
If your indexes + working set exceed memory, the least recently used pages (sections of memory) will be flushed to disk. This leaves only the most recently used data, which still fits in memory, readily available.
Your operating system controls this.
While you will experience awful performance if your true working set and indexes do not fit into memory, in practice the working set (hot data) is usually much smaller than the total dataset.
If you don't violate this rule, you should have excellent performance most of the time even though your indexes + total data may exceed the total available memory.
How It Works
If a query is performed that needs data that is not in memory, it will be paged into memory (retrieved from disk) and there will be a performance hit.
Note: this is essentially the situation when the database is first started (cold).
Nothing is in memory to start with; page faults occur when data is required, and data is paged into memory as needed. When you run out of memory, the least recently used pages (chunks) are flushed from memory in favor of hotter (more recently accessed) data.
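The behavior above can be illustrated with a toy LRU cache standing in for the OS page cache. This is a simulation only (the capacity, page ids, and class name are illustrative, not anything MongoDB exposes), but it shows both effects: cold reads cause page faults, and the least recently used page is the one evicted:

```python
from collections import OrderedDict

class PageCache:
    """Toy LRU page cache: hot pages stay resident, cold reads fault."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()  # page id -> data, oldest first
        self.faults = 0

    def read(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)      # hot: already in memory
        else:
            self.faults += 1                     # cold: page fault
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)   # evict least recently used
            self.pages[page_id] = f"data-{page_id}"
        return self.pages[page_id]

cache = PageCache(capacity=2)
for p in ["idx", "a", "idx", "b", "idx", "a"]:
    cache.read(p)  # "idx" is touched constantly, so it is never evicted

print(cache.faults)  # 4: one cold fault each for idx/a/b, plus a re-fault for "a"
```

Note how "idx" (playing the role of an index page) survives every eviction because it is always recently used, which is exactly the point made below about indexes rarely being paged out.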
Also it is worth mentioning that because indexes are used constantly, and thus always recently used, they are virtually never paged out.
If your indexes are larger than available RAM then performance drops rapidly. The MongoDB site specifically advises you to "Make sure your indexes can fit in RAM".
If your queries seem sluggish, you should verify that your indexes are small enough to fit in RAM. For instance, if you're running on 4GB RAM and you have 3GB of indexes, then your indexes probably aren't fitting in RAM. You may need to add RAM and/or verify that all the indexes you've created are actually being used.
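In the mongo shell you can get a collection's index footprint with db.collection.totalIndexSize(); the arithmetic afterwards is trivial. A sketch with hypothetical sizes, including a headroom factor (the 0.6 fraction is an assumption, not a MongoDB recommendation) because the OS and working set need RAM too, which is why 3 GB of indexes on a 4 GB box is already a problem:

```python
# Hypothetical per-collection index sizes; in practice you would gather
# these with db.<collection>.totalIndexSize() in the mongo shell.
index_sizes_bytes = {
    "users": 1.2 * 1024**3,    # 1.2 GB
    "events": 2.1 * 1024**3,   # 2.1 GB
}
ram_bytes = 4 * 1024**3        # 4 GB box

total = sum(index_sizes_bytes.values())
# Leave headroom for the OS and the working set; comparing against
# 100% of RAM is too optimistic.
headroom = 0.6                 # assumed fraction of RAM budgeted for indexes
fits = total <= headroom * ram_bytes
print(f"total indexes: {total / 1024**3:.1f} GB, fit comfortably: {fits}")
```

Here 3.3 GB of indexes against a 2.4 GB budget fails the check, matching the 4 GB / 3 GB example above.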