What I want to achieve is to read (tail) the MongoDB oplog collection while new documents are being created (for example, on every insert into a collection).
This is my code simplified:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect(url, function (err, db) {
  if (err) { console.error("ERROR", err); return; }
  console.log("Connected correctly to server");

  db.collection('oplog.rs').find({
    ns: 'cabo_dev.documents',
    op: 'i',
    // ts: {
    //   $gte: $gte
    // }
  }, {
    tailable: true
  })
  .each(function (err, entry) {
    if (err) {
      console.error("Error fetching a document", err, entry);
      return;
    }
    console.log('--- entry', entry);
  });
});
I have commented out the $gte value to simplify things, but the idea is to read only the "new" log entries, not the old ones. I also have similar code using mongoose instead of the raw driver.
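For reference, here is a minimal sketch of how that ts filter could be built, assuming the Timestamp(low, high) constructor exported by the 2.x mongodb driver; the variable name $gte is just the one used in the query above:

var Timestamp = require('mongodb').Timestamp;

// Only match oplog entries written from "now" on:
// Timestamp(increment, seconds since the epoch).
var $gte = new Timestamp(0, Math.floor(Date.now() / 1000));

// ...and then in the query: { ns: 'cabo_dev.documents', op: 'i', ts: { $gte: $gte } }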
According to the documentation, the previous code should return all oplog documents that represent an insert into the cabo_dev.documents collection (cabo_dev is the name of the DB), plus all new subsequent inserts. However, when it finishes returning the 'old' documents and there are no more documents to return, the next iteration reports this error (the err argument of each):
{ [MongoError: No more documents in tailed cursor]
name: 'MongoError',
message: 'No more documents in tailed cursor',
tailable: true,
awaitData: true }
After that, it does not fetch any more inserts from the oplog. According to the tailable cursor documentation, one of the reasons for a cursor to become dead or invalid is:
Which is what I think is happening to me here. However, in that situation the each process never ends (which is what I would expect to happen when a cursor becomes dead or invalid, isn't it?). But I really want to keep fetching the subsequent insert entries from the oplog.
What am I doing wrong?
You have probably found the answer by now, but I am writing this answer so that anyone who stumbles across the same problem gets the workaround.
In my case the mongodb driver version is 2.0.33.
After you have established a connection with the mongodb server, do the following:
db.collection('yourCappedColl', function (err, coll) {
  var stream = coll.find({},
    {
      tailable: true,
      awaitdata: true,
      // Keep retrying instead of erroring out when the tailed cursor
      // temporarily has no more documents to return.
      numberOfRetries: Number.MAX_VALUE
    }).stream();

  stream.on('data', function (val) {
    console.log('Doc: %j', val);
  });
  stream.on('error', function (val) {
    console.log('Error: %j', val);
  });
  stream.on('end', function () {
    console.log('End of stream');
  });
});
That is: consume the cursor as a stream instead of iterating with each, and set numberOfRetries to Number.MAX_VALUE so the tailable cursor keeps waiting for new documents instead of failing with the error above.
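Applied to the oplog query from the question, a sketch could look like this (assuming the same 2.0.x driver and an already-connected db handle; note that oplog.rs actually lives in the local database):

db.collection('oplog.rs', function (err, coll) {
  var stream = coll.find(
    { ns: 'cabo_dev.documents', op: 'i' },
    {
      tailable: true,
      awaitdata: true,
      numberOfRetries: Number.MAX_VALUE
    }).stream();

  stream.on('data', function (entry) {
    console.log('--- entry', entry);
  });
  stream.on('error', function (err) {
    console.error('Error fetching a document', err);
  });
});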
For more details, refer to this JIRA item:
MongoError: No more documents in tailed cursor
The previous answer by @havish is a good one, but it assumes you can tail the oplog of the database in the first place (I couldn't). You will only be able to do that in MongoDB if your mongod process is a member of a replica set.
Before you ask: you don't need more than one member in the set, effectively making it a one-member replica set. This is how MongoDB creates the oplog collection that you can then stream.
Here's a simplified mongod command to run MongoDB in the background:
mongod --port 27017 --dbpath /data/db --logpath test.log --replSet test0 --fork
Then, fire up a mongo shell and run these to initialize your replica set:
$ mongo localhost:27017/test
> rs.initiate()
> rs.slaveOk()
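Optionally, you can check that the replica set came up and that the oplog collection now exists before tailing it (standard shell helpers; the filter is just an example):

> rs.status()                          // wait until the member reports PRIMARY
> use local
> db.oplog.rs.findOne({ op: 'i' })     // oplog.rs lives in the local database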
Then, pick your oplogger:
MongoClient streams (as in the answer above)
mongo-oplog
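For completeness, a minimal sketch using the mongo-oplog package, based on its README (2.x API); the URI and namespace just mirror the ones from the question:

var MongoOplog = require('mongo-oplog');

// Tail local.oplog.rs, filtered to operations on cabo_dev.documents.
var oplog = MongoOplog('mongodb://localhost:27017/local', { ns: 'cabo_dev.documents' });

oplog.tail();

oplog.on('insert', function (doc) {
  console.log('--- entry', doc);
});
oplog.on('error', function (err) {
  console.error('oplog error', err);
});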