Recently I got an alert from MongoDB Atlas:
Disk I/O % utilization on Data Partition has gone above 70 on nvme2n1
But I have no idea how to narrow this down to a problematic query, index, piece of code, or collection.
How can I analyze this to find the root cause of the problem?
Disk I/O % Utilization alerts indicate that the percentage of time during which I/O requests are being issued has reached a specified threshold. That threshold is set when the alert is created.
To provide durability in the event of a crash, MongoDB uses write-ahead logging to an on-disk journal: in-memory changes are written to the on-disk journal files first.
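Journal writes are one source of disk activity on the data-bearing node. As a minimal sketch (assuming the pymongo driver; the URI and collection name are hypothetical), a write can explicitly request acknowledgment only after the change has been flushed to the on-disk journal:

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

# Hypothetical connection string -- replace with your own Atlas URI.
client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net/")
db = client["mydb"]

# j=True: the server acknowledges the write only after the journal entry
# has been flushed to disk, so every such write implies journal I/O.
tracks = db.get_collection("tracks", write_concern=WriteConcern(j=True))
tracks.insert_one({"name": "example", "coordinates": []})
```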
Starting in MongoDB 3.4, the default WiredTiger internal cache size is the larger of: 50% of (RAM - 1 GB), or 256 MB.
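As a worked example of that formula (a sketch, not Atlas-specific): a node with 8 GB of RAM gets a default WiredTiger cache of 0.5 × (8 − 1) = 3.5 GB. Once the documents and indexes you touch frequently exceed that, reads start going to disk.

```python
def default_wiredtiger_cache_gb(ram_gb: float) -> float:
    """Default WiredTiger internal cache: max(50% of (RAM - 1 GB), 256 MB)."""
    return max(0.5 * (ram_gb - 1), 0.25)

for ram in (2, 4, 8, 16, 32):
    print(f"{ram} GB RAM -> {default_wiredtiger_cache_gb(ram):.2f} GB cache")
# 2 GB RAM -> 0.50 GB cache
# 4 GB RAM -> 1.50 GB cache
# 8 GB RAM -> 3.50 GB cache
# 16 GB RAM -> 7.50 GB cache
# 32 GB RAM -> 15.50 GB cache
```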
Not a full answer, but I've seen that many people run into a similar problem. In my case the root cause was this: we had a collection of huge documents, each containing a large array of data (in fact, a list of coordinates with some metadata), and we updated each document as many times as it had coordinates, once per new coordinate added, plus some additional operations.
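A minimal sketch of that update pattern (the collection and field names here are hypothetical, not my actual code): every new coordinate is pushed into one ever-growing array on a single document, so the same large document is rewritten again and again:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net/")  # hypothetical URI
tracks = client["mydb"]["tracks"]

def add_coordinate(track_id, lat, lon, meta):
    # Anti-pattern: each call updates a document whose "coordinates" array
    # keeps growing, so the update gets more expensive as the track grows.
    tracks.update_one(
        {"_id": track_id},
        {"$push": {"coordinates": {"lat": lat, "lon": lon, "meta": meta}}},
    )
```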
As far as I know, MongoDB cannot fetch just part of a document from storage; it reads the full document. When we fetched many different large documents, they no longer fit into the MongoDB in-memory cache, so every access went to disk, which led to this issue. So we simply split each big document into several smaller ones, and that fixed the problem. While we need frequent access to update/add this data, we keep it in separate documents; once the process is done, we gather all these documents back into one big document for "history check" purposes.
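A hedged sketch of that split-then-merge approach (collection names, field names, and the bucket size are hypothetical, not the exact code from my project): while the data is hot, coordinates go into small per-bucket documents; when processing is done, they are gathered back into one history document:

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net/")  # hypothetical URI
db = client["mydb"]
buckets = db["track_buckets"]

BUCKET_SIZE = 500  # keep each document small enough to stay cache-friendly

def add_coordinate(track_id, seq, lat, lon, meta):
    # Write into a small bucket document instead of one huge document.
    buckets.update_one(
        {"track_id": track_id, "bucket": seq // BUCKET_SIZE},
        {"$push": {"coordinates": {"lat": lat, "lon": lon, "meta": meta}}},
        upsert=True,
    )

def finalize_track(track_id):
    # After processing, gather the buckets back into one "history" document.
    coords = []
    for bucket in buckets.find({"track_id": track_id}).sort("bucket", ASCENDING):
        coords.extend(bucket["coordinates"])
    db["track_history"].replace_one(
        {"_id": track_id}, {"_id": track_id, "coordinates": coords}, upsert=True
    )
    buckets.delete_many({"track_id": track_id})
```

A compound index on track_id and bucket keeps the per-bucket updates and the final gathering query targeted instead of scanning the whole collection.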