I am trying to perform several insertions on an existing MongoDB collection using the following code:
db.dados_meteo.aggregate( [
    { $match : { "POM" : "AguiardaBeira" } },
    { $project : {
        _id : { $concat: [
            "0001:",
            { $substr: [ "$DTM", 0, 4 ] },
            { $substr: [ "$DTM", 5, 2 ] },
            { $substr: [ "$DTM", 8, 2 ] },
            { $substr: [ "$DTM", 11, 2 ] },
            { $substr: [ "$DTM", 14, 2 ] },
            { $substr: [ "$DTM", 17, 2 ] }
        ] },
        "RNF" : 1, "WET" : 1, "HMD" : 1, "TMP" : 1
    } },
    { $out : "dados_meteo_reloaded" }
] )
But each time I change the $match parameters and run a new aggregation, MongoDB deletes the previous documents and inserts the new result.
Could you help me?
Starting in MongoDB 4.2, the new $merge aggregation stage (similar to $out) allows merging the result of an aggregation pipeline into a specified collection:
Given this input:
db.source.insert([
{ "_id": "id_1", "a": 34 },
{ "_id": "id_3", "a": 38 },
{ "_id": "id_4", "a": 54 }
])
db.target.insert([
{ "_id": "id_1", "a": 12 },
{ "_id": "id_2", "a": 54 }
])
the $merge aggregation stage can be used as such:
db.source.aggregate([
  // ...any aggregation stages; for this example, records are kept as is
{ $merge: { into: "target" } }
])
to produce:
// > db.target.find()
{ "_id" : "id_1", "a" : 34 }
{ "_id" : "id_2", "a" : 54 }
{ "_id" : "id_3", "a" : 38 }
{ "_id" : "id_4", "a" : 54 }
Note that the $merge stage comes with many options to specify how to merge inserted records conflicting with existing records.
In this case (with the default options), $merge:

- keeps the target collection's existing documents (the case of { "_id": "id_2", "a": 54 })
- inserts documents from the output of the aggregation pipeline into the target collection when they are not already present, based on the _id (the case of { "_id" : "id_3", "a" : 38 })
- replaces the target collection's records when the aggregation pipeline produces documents already existing in the target collection, based on the _id (the case of { "_id": "id_1", "a": 12 } replaced by { "_id" : "id_1", "a" : 34 })
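The default behavior described above (whenMatched: "replace", whenNotMatched: "insert") can be illustrated without a MongoDB server. The sketch below simulates the merge-by-_id logic in plain JavaScript on the sample documents from this answer; it is an illustration of the semantics, not the server implementation.

```javascript
// Plain-JavaScript sketch of $merge with its default options:
// a matching _id replaces the existing document, a new _id is inserted.
function mergeByIdDefaults(target, source) {
  // Index the target collection's documents by _id.
  const byId = new Map(target.map(doc => [doc._id, doc]));
  for (const doc of source) {
    // Replace on match, insert on no match.
    byId.set(doc._id, doc);
  }
  return [...byId.values()].sort((a, b) => a._id.localeCompare(b._id));
}

const source = [
  { _id: "id_1", a: 34 },
  { _id: "id_3", a: 38 },
  { _id: "id_4", a: 54 },
];
const target = [
  { _id: "id_1", a: 12 },
  { _id: "id_2", a: 54 },
];

console.log(mergeByIdDefaults(target, source));
// id_2 survives untouched, id_3 and id_4 are inserted, id_1 is replaced,
// matching the db.target.find() output shown above.
```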
The short answer is "you can't":
If the collection specified by the $out operation already exists, then upon completion of the aggregation, the $out stage atomically replaces the existing collection with the new results collection. The $out operation does not change any indexes that existed on the previous collection. If the aggregation fails, the $out operation makes no changes to the pre-existing collection.
As a workaround, you can copy the documents of the collection specified by $out to a "permanent" collection just after the aggregation, in one of several ways (none of which is ideal, though):
db.out.find().forEach(function(doc) {db.target.insert(doc)})
It's not the prettiest thing ever, but as another alternative syntax (from a post-processing archive/append operation)...
db.targetCollection.insertMany(db.runCommand(
{
    aggregate: "sourceCollection",
    pipeline:
    [
        { $skip: 0 },
        { $limit: 5 },
        {
            $project:
            {
                myObject: "$$ROOT",
                processedDate: { $add: [new ISODate(), 0] }
            }
        }
    ],
    // Required since MongoDB 3.6; the results come back in cursor.firstBatch
    cursor: {}
}).cursor.firstBatch)
I'm not sure how this stacks up against the forEach variant, but I find it more intuitive to read.