If I delete the 3.1G journal file, sudo service mongodb restart will fail. However, this file is taking too much space. How can I solve this problem? How can I remove it?
bash$ du -sh /var/lib/mongodb/*
4.0K    _tmp
65M     auction_development.0
128M    auction_development.1
17M     auction_development.ns
3.1G    journal
4.0K    mongod.lock
MongoDB uses write-ahead logging to an on-disk journal to guarantee the durability of write operations. The WiredTiger storage engine does not strictly require journaling to guarantee a consistent state after a crash: without a journal, the database is simply restored to the last consistent checkpoint during recovery.
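Whether the journal matters for crash recovery therefore depends on the storage engine in use. As a quick check (assuming a default local mongod and the standard mongo shell), you can inspect the running engine like this:

# Print the storage engine of the locally running mongod
bash$ mongo --eval 'printjson(db.serverStatus().storageEngine)'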
In this process, a write operation arrives in mongod and is first applied to the private view, an in-memory representation of the data; the journal itself resides on disk. After a specified interval, called the journal commit interval (roughly every 100 milliseconds by default), MongoDB writes the accumulated changes from the private view to the on-disk journal files in the journal directory.
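The length of that group-commit window is configurable. A rough sketch, assuming an MMAPv1-era mongod where the --journalCommitInterval flag applies (newer releases expose the same setting as storage.journal.commitIntervalMs in the config file):

# Start mongod with journalling on and a 100 ms journal commit interval,
# using the data path from the question
bash$ mongod --dbpath /var/lib/mongodb --journal --journalCommitInterval 100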
To delete a record, or document as it is called in MongoDB, we use the deleteOne() method. The first parameter of the deleteOne() method is a query object defining which document to delete.
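For illustration only (the collection name and filter below are made up), a single-document delete from the mongo shell looks like this:

# Remove one document matching the filter from a hypothetical "items"
# collection in the auction_development database shown above
bash$ mongo auction_development --eval 'db.items.deleteOne({ status: "expired" })'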
TL;DR: You have two options. Use the --smallfiles startup option when starting MongoDB to limit the size of the journal files to 128MB, or turn off journalling with the --nojournal option. Using --nojournal in production is usually a bad idea, and it often makes sense to use the same write concerns in development as in production so you don't have different code in dev and prod.
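A minimal sketch of both options, assuming an MMAPv1-era mongod started by hand with the data path from the question (on the old Ubuntu packaging the equivalent settings can go into /etc/mongodb.conf before running sudo service mongodb restart):

# Option 1: cap each journal file at 128MB instead of 1GB
bash$ mongod --dbpath /var/lib/mongodb --smallfiles

# Option 2: disable journalling entirely (usually a bad idea in production)
bash$ mongod --dbpath /var/lib/mongodb --nojournal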
The long answer: No, deleting the journal file isn't safe. The idea of journalling is this:
A write comes in. Now, to make that write persistent (and the database durable), the write must somehow go to the disk.
Unfortunately, writes to disk take eons compared to writes to RAM, so the database faces a dilemma: not writing to disk is risky, because an unexpected shutdown would cause data loss, but writing to disk for every single write operation would degrade performance so badly that the database becomes unusable for practical purposes.
Now instead of writing to the data files themselves, and instead of doing it for every request, the database will simply append to a journal file where it stores all the operations that haven't been committed to the actual data files yet. This is a lot faster, because the file is already 'hot' since it's read and written to all the time, and it's only one file, not a bunch of files, and lastly, because it writes all pending operations in a batch every 100ms by default. Deleting this file in the middle of something wreaks havoc.
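When the durability of a particular write matters more than latency, the trade-off can also be made per operation with a journaled write concern. A sketch, with a made-up collection and document:

# Ask the server to acknowledge the insert only after it has been written
# to the journal (j: true); w: 1 waits for the primary's acknowledgement
bash$ mongo auction_development --eval 'db.items.insertOne({ name: "test" }, { writeConcern: { w: 1, j: true } })'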