 

mongo 3 duplicates on unique index - dropDups


In the documentation for MongoDB it says: "Changed in version 3.0: The dropDups option is no longer available."

Is there anything I can do (other than downgrading) if I actually want to create a unique index and destroy duplicate entries?

Please keep in mind that I receive about 300 inserts per second, so I can't just delete all duplicates and hope none come in by the time I'm done indexing.

Alonzorz asked May 12 '15


2 Answers

Yes, dropDups has been deprecated since version 2.7.5 (and removed in 3.0) because it was not possible to predict correctly which document would be deleted in the process.

Typically, you have two options:

  1. Use a new collection:

    • Create a new collection,
    • Create the unique index on this new collection,
    • Run a batch to copy all the documents from the old collection to the new one, making sure to ignore duplicate key errors during the process.
  2. Deal with duplicates in your existing collection manually:

    • Make sure your code won't insert any more duplicate documents,
    • Run a batch on your collection to delete the duplicates (making sure you keep the right one when they are not completely identical),
    • Then add the unique index.
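The copy step in option 1 can be illustrated with an in-memory sketch (plain JavaScript, not mongo shell; the documents array and the "value" field are hypothetical stand-ins for your collection and unique field). Copying documents while skipping keys that are already present keeps the first occurrence, just as a unique index with ignored duplicate key errors would:

```javascript
// In-memory sketch of option 1's batch copy: the "new collection" is a Map
// keyed by the unique field, so a second document with the same key is
// skipped, mimicking an ignored duplicate key error.
function copyIgnoringDuplicates(oldCollection, uniqueField) {
  var newCollection = new Map();
  for (var doc of oldCollection) {
    var key = doc[uniqueField];
    if (!newCollection.has(key)) {
      newCollection.set(key, doc); // first occurrence wins
    }
    // else: a duplicate key error would be raised here and ignored
  }
  return Array.from(newCollection.values());
}

var docs = [{ value: 1 }, { value: 2 }, { value: 1 }, { value: 3 }];
console.log(copyIgnoringDuplicates(docs, "value")); // three documents survive
```

Which occurrence "wins" is whichever the batch reads first, which is why you should sort or filter the batch if one duplicate is preferable to the others.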

For your particular case, I would recommend the first option, but with a trick:

  • Create a new collection with the unique index,
  • Update your code so that you insert documents into both collections,
  • Run a batch to copy all documents from the old collection to the new one (ignoring duplicate key errors),
  • Rename the new collection to match the old name,
  • Re-update your code so that you write only to the "old" collection again.
Maxime Beugnet answered Sep 20 '22


As highlighted by @Maxime-Beugnet, you can create a batch script to remove duplicates from a collection. I have included my approach below, which is relatively fast if the number of duplicates is small in comparison to the collection size. For demonstration purposes, this script will de-duplicate the collection created by the following script:

db.numbers.drop()
var counter = 0
while (counter <= 100000) {
  db.numbers.save({"value": counter})
  db.numbers.save({"value": counter})
  if (counter % 2 == 0) {
    db.numbers.save({"value": counter})
  }
  counter = counter + 1;
}

You can remove the duplicates in this collection by writing an aggregate query that returns all values that occur more than once.

var cur = db.numbers.aggregate([
  { $group: { _id: { value: "$value" }, uniqueIds: { $addToSet: "$_id" }, count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } }
]);
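To illustrate what this pipeline computes, here is a plain-JavaScript equivalent run on an in-memory array (the docs array is a hypothetical stand-in; real use goes through the mongo shell query above):

```javascript
// Plain-JS equivalent of the $group / $match pipeline: group documents by
// "value", collect their _ids, and keep only groups with more than one member.
function findDuplicateGroups(docs) {
  var groups = {};
  docs.forEach(function (doc) {
    var key = String(doc.value);
    if (!groups[key]) {
      groups[key] = { _id: { value: doc.value }, uniqueIds: [], count: 0 };
    }
    groups[key].uniqueIds.push(doc._id); // like $addToSet on "$_id" (_ids are unique)
    groups[key].count += 1;              // like count: { $sum: 1 }
  });
  return Object.values(groups).filter(function (g) {
    return g.count > 1;                  // like $match: { count: { $gt: 1 } }
  });
}

var docs = [
  { _id: "a", value: 0 },
  { _id: "b", value: 0 },
  { _id: "c", value: 1 }
];
console.log(findDuplicateGroups(docs)); // one group: value 0, uniqueIds ["a", "b"]
```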

Using the cursor, you can then iterate over the duplicate records and implement your own business logic to decide which of the duplicates to remove. In the example below, I simply keep the first occurrence:

while (cur.hasNext()) {
  var doc = cur.next();
  var index = 1;
  while (index < doc.uniqueIds.length) {
    db.numbers.remove({ _id: doc.uniqueIds[index] });
    index = index + 1;
  }
}
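The keep-first policy of this loop can be checked in plain JavaScript: for each duplicate group, every _id except the first is slated for removal (a hypothetical in-memory stand-in for the shell loop above, using group objects shaped like the aggregation output):

```javascript
// Given duplicate groups shaped like the aggregation output, collect every
// _id except the first in each group — these are the documents the shell
// loop would remove.
function idsToRemove(duplicateGroups) {
  var ids = [];
  duplicateGroups.forEach(function (group) {
    // start at index 1: the first occurrence in each group is kept
    for (var index = 1; index < group.uniqueIds.length; index++) {
      ids.push(group.uniqueIds[index]);
    }
  });
  return ids;
}

var groups = [
  { _id: { value: 0 }, uniqueIds: ["a", "b", "c"], count: 3 },
  { _id: { value: 5 }, uniqueIds: ["d", "e"], count: 2 }
];
console.log(idsToRemove(groups)); // ["b", "c", "e"]
```

Note that $addToSet does not guarantee the order of uniqueIds, so "first" here means first in the returned array, not necessarily first inserted; sort inside your business logic if that distinction matters.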

After removal of the duplicates, you can add a unique index:

db.numbers.createIndex({ "value": 1 }, { unique: true })
Alex answered Sep 16 '22