Is there any way to tar/gzip mongo dumps like you can with MySQL dumps?
For example, for mysqldumps, you can write a command as such:
mysqldump -u <username> --password=<password> --all-databases | gzip > all-databases.`date +%F`.gz
Is there an equivalent way to do the same for mongo dumps?
For mongo dumps I run this command:
mongodump --host localhost --out /backup
Is there a way to just pipe that to gzip? I tried, but that didn't work.
Any ideas?
When you use mongodump --db Database --gzip --archive=pathDatabase.gz you create a single archive file (no folder is created) for the specified database, compressed with gzip. The resulting file will be pathDatabase.gz in your current directory.
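To get a date-stamped backup analogous to the mysqldump command in the question, the archive filename can embed `date +%F`. A minimal sketch, assuming MongoDB 3.2+ tools; "mydb" is a placeholder database name:

```shell
# --gzip --archive (MongoDB 3.2+) writes one gzip-compressed archive file;
# $(date +%F) stamps it with the current date, e.g. mydb.2015-07-01.gz
mongodump --db mydb --gzip --archive="mydb.$(date +%F).gz"
```
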
Version 3.2 introduced the gzip and archive options:
mongodump --db <yourdb> --gzip --archive=/path/to/archive
Then you can restore with:
mongorestore --gzip --archive=/path/to/archive
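If you prefer to run the compression yourself, as in the mysqldump pipeline from the question, mongodump can (as I understand the 3.2 tools) write the archive to stdout when --archive is given with no filename, so it can be piped. A hedged sketch; "mydb" is a placeholder:

```shell
# Dump to stdout as an archive stream and compress externally with gzip
mongodump --db mydb --archive | gzip > "mydb.$(date +%F).archive.gz"

# Restore by decompressing back into mongorestore's stdin
gunzip -c "mydb.$(date +%F).archive.gz" | mongorestore --archive
```

This mirrors the mysqldump | gzip pattern exactly, at the cost of not using mongodump's built-in --gzip.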
Update (July 2015): TOOLS-675 is now marked as complete, which will allow dumping to an archive format in 3.2, and gzip will be one of the options in the 3.2 versions of the mongodump/mongorestore tools. I will update with the relevant docs once they are live for 3.2.
Original answer (3.0 and below):
You can do this with a single collection by outputting mongodump to stdout, then piping it to a compression program (gzip, bzip2), but you will only get data (no index information) and you cannot do it for a full database (multiple collections) for now. The relevant feature request for this functionality is SERVER-5190, for upvoting/watching purposes.
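Since the question asks specifically about gzip, the same stdout trick works with gzip in place of the bzip2 used in the run-through below. A sketch under the same assumptions (pre-3.2 tools, sample collection test.foo):

```shell
# Dump one collection's BSON to stdout (-o -) and compress it with gzip
mongodump -d test -c foo -o - | gzip > foo.bson.gz

# To restore: decompress, then point mongorestore at the bare .bson file
# (remember: no foo.metadata.json, so indexes are not restored this way)
gunzip foo.bson.gz
mongorestore -d test -c foo foo.bson
```
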
Here is a quick sample run-through of what is possible, using bzip2 in this example:
$ ./mongo
MongoDB shell version: 2.6.1
connecting to: test
> db.foo.find()
{ "_id" : ObjectId("53ad8a3eb74b5ae2ff0ec93a"), "a" : 1 }
{ "_id" : ObjectId("53ad8ba445be9c4f7bd018b4"), "a" : 2 }
{ "_id" : ObjectId("53ad8ba645be9c4f7bd018b5"), "a" : 3 }
{ "_id" : ObjectId("53ad8ba845be9c4f7bd018b6"), "a" : 4 }
{ "_id" : ObjectId("53ad8baa45be9c4f7bd018b7"), "a" : 5 }
>
bye
$ ./mongodump -d test -c foo -o - | bzip2 - > foo.bson.bz2
connected to: 127.0.0.1
$ bunzip2 foo.bson.bz2
$ ./bsondump foo.bson
{ "_id" : ObjectId( "53ad8a3eb74b5ae2ff0ec93a" ), "a" : 1 }
{ "_id" : ObjectId( "53ad8ba445be9c4f7bd018b4" ), "a" : 2 }
{ "_id" : ObjectId( "53ad8ba645be9c4f7bd018b5" ), "a" : 3 }
{ "_id" : ObjectId( "53ad8ba845be9c4f7bd018b6" ), "a" : 4 }
{ "_id" : ObjectId( "53ad8baa45be9c4f7bd018b7" ), "a" : 5 }
5 objects found
Compare that with a straight mongodump (you get the same foo.bson, but the extra foo.metadata.json describing the indexes is not included above):
$ ./mongodump -d test -c foo -o .
connected to: 127.0.0.1
2014-06-27T16:24:20.802+0100 DATABASE: test to ./test
2014-06-27T16:24:20.802+0100     test.foo to ./test/foo.bson
2014-06-27T16:24:20.802+0100          5 documents
2014-06-27T16:24:20.802+0100     Metadata for test.foo to ./test/foo.metadata.json
$ ./bsondump test/foo.bson
{ "_id" : ObjectId( "53ad8a3eb74b5ae2ff0ec93a" ), "a" : 1 }
{ "_id" : ObjectId( "53ad8ba445be9c4f7bd018b4" ), "a" : 2 }
{ "_id" : ObjectId( "53ad8ba645be9c4f7bd018b5" ), "a" : 3 }
{ "_id" : ObjectId( "53ad8ba845be9c4f7bd018b6" ), "a" : 4 }
{ "_id" : ObjectId( "53ad8baa45be9c4f7bd018b7" ), "a" : 5 }
5 objects found