I want MongoDB to hold query results in RAM for a longer period of time (say 30 minutes, if memory is available). Is that possible? Or is there any way I can make sure the data is pre-loaded into RAM before subsequent queries run against it?
In fact, I am wondering about the performance of simple query results in MongoDB. I have a dedicated server with 10GB of RAM, and my db.stats() output is as follows:
db.stats();
{
    "db" : "test",
    "collections" : 16,
    "objects" : 625690,
    "avgObjSize" : 68.90,
    "dataSize" : 43061996,
    "storageSize" : 1121402888,
    "numExtents" : 74,
    "indexes" : 25,
    "indexSize" : 28207200,
    "fileSize" : 469762048,
    "nsSizeMB" : 16,
    "ok" : 1
}
Now when I query a single document (as mentioned here) from a web service, it loads in 1.3 seconds. Subsequent calls of the same query respond in 400ms, and then after a few seconds it starts taking 1.3 seconds again. It looks like MongoDB has lost the previously queried document from memory, even though no other queries are asking for the data mapped into RAM.
Please explain this, and let me know of any way to make subsequent queries respond faster.
Your observed performance problem on an initial query is likely one of the following issues (in rough order of likelihood):
1) Your application / web service has some overhead to initialize on the first request (e.g. allocating memory, setting up connection pools, resolving DNS, ...).
2) Indexes or data you have requested are not yet in memory, so they need to be loaded.
3) The Query Optimizer may take a bit longer to run on the first request, as it compares the execution of candidate plans for your query pattern.
It would be very helpful to test the query via the mongo shell to isolate whether the overhead is related to MongoDB or to your web service (rather than timing both together, as you have done).
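For example, a minimal timing check in the mongo shell, assuming a hypothetical collection named mycollection and a query on a field called name:

// Time a single query from the shell, bypassing the web service entirely.
var start = new Date();
db.mycollection.findOne({ name: "example" });
print("query took " + (new Date() - start) + " ms");

If the shell responds quickly on every run while the web service does not, the extra latency is in your application layer rather than in MongoDB.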
Following are some notes related to MongoDB.
MongoDB doesn't have a "caching" time for documents in memory. It uses memory-mapped files for disk I/O and the documents in memory are based on your active queries (documents/indexes you've recently loaded) as well as the available memory. The operating system's virtual memory manager is in charge of caching, and typically will follow a Least-Recently Used (LRU) algorithm to decide which pages to swap out of memory.
The expected behaviour is that over time MongoDB will grow to use all free memory to store your active working data set.
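One way to observe this is via the serverStatus command in the mongo shell, which reports memory usage in megabytes:

// Report memory usage for this mongod: resident and virtual set sizes,
// and the total size of the memory-mapped data files (values in MB).
db.serverStatus().mem

As your working set is read, the resident figure should grow toward the size of your active data and indexes, up to the available RAM.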
Looking at your provided db.stats() numbers (and assuming that is your only database), it looks like your database size is currently about 1GB, so you should be able to keep everything within your 10GB of total RAM unless:

- there are other processes competing for memory
- you have restarted your mongod server and those documents/indexes haven't been requested yet

In MongoDB 2.2, there is a new touch command you can use to load indexes or documents into memory after a server restart. This should only be used on initial startup to "warm up" the server, as otherwise you could be unhelpfully forcing actual "active" data out of memory.
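A sketch of warming up a single collection this way (the collection name is a hypothetical placeholder):

// Load both the documents and the indexes of one collection into RAM
// after a restart (the touch command is available from MongoDB 2.2).
db.runCommand({ touch: "mycollection", data: true, index: true })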
On a Linux system, for example, you can use the top command and should see that:

- virtual bytes/VSIZE will tend to be the size of the entire database
- if the server doesn't have other processes running, resident bytes/RSIZE will approach the total memory of the machine (this includes file system cache contents)
- mongod should not use swap (since the files are memory-mapped)

You can use the mongostat tool to get a quick view of your mongod activity, or, more usefully, use a service like MMS to monitor metrics over time.
The MongoDB Query Optimizer compares plan execution for a query pattern every ~1,000 write operations, and then caches the "winning" query plan until the next time the optimizer runs, or until you explicitly call explain() on that query.
This should be a straightforward one to test: run your query in the mongo shell with .explain() and look at the millis timing as well as the number of index entries and documents scanned. The timing reported by explain() isn't the actual time the query will normally take to run, as it includes the cost of comparing the plans; typical execution will be much faster. You can also look for slow queries in your mongod log.
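For example, using the same hypothetical collection as above:

// Run the query with explain() to see the chosen plan and scan counts.
db.mycollection.find({ name: "example" }).explain()
// In the output, check:
//   "cursor"          -- which index (if any) was used
//   "n"               -- number of documents returned
//   "nscanned"        -- number of index entries scanned
//   "nscannedObjects" -- number of documents scanned
//   "millis"          -- time taken, including plan comparison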
By default MongoDB will log all queries slower than 100ms, so this provides a good starting point to look for queries to optimize. You can adjust the slow-query threshold with the --slowms config option, or by using the Database Profiler commands.
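For example, a sketch from the mongo shell, with a 50ms threshold chosen purely for illustration:

// Profile operations slower than 50ms (level 1 records slow operations only).
db.setProfilingLevel(1, 50)
// Later, inspect the most recent slow operations the profiler recorded.
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()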