I want to switch from MySQL to MongoDB, but large data losses (more than 1 hour of data) are not acceptable for me.
I need to have 3 backup plans:
Hourly backup plan. Data is flushed to disk every X minutes, and if something goes wrong with the server I want to be sure that after a reboot it still has all data up to at most an hour old. Can I configure this?
Daily backup plan. Data is synced to a backup disk every day, so even if the server explodes I can recover yesterday's data within a few hours. Should I use fsync, master-slave or something else? I would like minimal traffic, so ideally only changes are sent.
Weekly backup plan. Data is synced to a second backup disk, so if both the server and the first backup disk explode I still have at least last week's data. Here it is a question of reliability, so it's OK to send all data over the network.
How can I do it?
The mongodump command is MongoDB's backup utility and lets you create data backups of all sizes and varieties: the results of a query, a collection, or an entire database. If the oplog is included, mongodump also produces a point-in-time snapshot of your data.
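As a minimal sketch, the invocation below dumps everything and includes a partial oplog (this assumes the server is a replica set member, so an oplog exists; the output path is an assumption):

# Dump the whole deployment and capture oplog entries made during the dump,
# so the backup represents a single point in time:
mongodump --oplog --out /backups/hourly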
MongoBackup is an external tool that performs full and incremental backups. Backups are stored on the filesystem and compressed with the lz4 algorithm. Full backups are done by copying the dbPath at the file system level, and a partial oplog dump is used for incremental backups.
To create a backup of a database in MongoDB, you should use the mongodump command. It dumps all of your server's data into the dump directory. Many options are available to limit the amount of data dumped or to back up a remote server.
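A sketch of a few common invocations (the host name, credentials, database and collection names, and output paths are assumptions):

# Dump a single database from a remote server:
mongodump --host db.example.com --port 27017 --username backupUser --password 'secret' --db mydb --out /backups/daily

# Dump only one collection, or only the documents matching a query:
mongodump --db mydb --collection orders --out /backups/orders
mongodump --db mydb --collection orders --query '{"status": "open"}' --out /backups/open-orders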
The fsync command flushes the data to disk. The flush happens every 60 seconds by default, but the interval can be configured with the --syncdelay command line parameter.
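A sketch of how you might set the flush interval when starting mongod (the dbpath and the chosen value are assumptions; very small values add I/O overhead):

# Flush data files to disk every 10 seconds instead of the default 60:
mongod --dbpath /data/db --syncdelay 10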
The documentation on backups has some good pointers for daily and weekly backups. For the daily backup, a master-slave configuration seems like the best option, as it will only sync changes.
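A rough sketch of a legacy master-slave setup (host names and dbpaths are assumptions; on current MongoDB versions a replica set plays this role):

# On the main server, run as a master:
mongod --master --dbpath /data/master

# On the backup server, run as a slave replicating from the master:
mongod --slave --source db.example.com:27017 --dbpath /data/slave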
For the weekly backup you can also use a master-slave configuration, or replication. Another option is the mongodump utility, which will back up the entire database. It is capable of creating backups while the database is running, so you can run it on the main database or on one of the slaves. You can also lock the slave before backing it up.
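A sketch of locking a slave before dumping it, driven from the command line (the host name and output directory are assumptions):

# Flush pending writes and block further writes on the slave:
mongo --host slave.example.com --eval "db.fsyncLock()"

# Dump the locked slave:
mongodump --host slave.example.com --out /backups/weekly

# Release the lock so replication can catch up again:
mongo --host slave.example.com --eval "db.fsyncUnlock()"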
For example, the following script dumps a database with mongodump, archives the dump with tar, and uploads it to S3:

#!/bin/sh

# DB host (secondary preferred so as to avoid impacting primary performance)
HOST='SomeHost/mongodbtest-slave'
# DB name
DBNAME=***
# S3 bucket name
BUCKET=*-backup
# Linux user account
USER=ubuntu
# Current time
TIME=`/bin/date +%d-%m-%Y-%T`
# Password (quoted so the # is not treated as a comment)
PASSWORD='somePassword#!2*1'
# Username
USERNAME=someUsername
# Backup directory
DEST=/home/ubuntu/tmp
# Tar file of backup directory
TAR=$DEST/../$TIME.tar

# Create backup dir (-p to avoid warning if already exists)
/bin/mkdir -p $DEST
# Log
echo "Backing up $HOST/$DBNAME to s3://$BUCKET/ on $TIME";
# Dump from mongodb host into backup directory
/usr/bin/mongodump --host $HOST -d $DBNAME -u $USERNAME -p $PASSWORD -o $DEST
# Create tar of backup directory
/bin/tar cvf $TAR -C $DEST .
# Upload tar to s3
/usr/bin/aws s3 cp $TAR s3://$BUCKET/
# Remove tar file locally
/bin/rm -f $TAR
# Remove backup directory
/bin/rm -rf $DEST
# All done
echo "Backup available at https://s3.amazonaws.com/$BUCKET/$TIME.tar"
You can put the steps above into an executable shell script and run it at any interval using crontab.
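For instance, a crontab entry along these lines would run the script nightly (the script path and log path are assumptions; edit your crontab with crontab -e):

# Run the backup script every day at 02:00 and append its output to a log file:
0 2 * * * /home/ubuntu/mongo-backup.sh >> /home/ubuntu/mongo-backup.log 2>&1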