To dump entire databases, do not name any tables following db_name, or use the --databases or --all-databases option. To see a list of the options your version of mysqldump supports, issue the command mysqldump --help.
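For example, to dump every database on the server into a single file (the output file names here are just placeholders):

mysqldump --all-databases > all_databases.sql

or, for a single database without naming any tables:

mysqldump --databases db_name > db_name.sql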
It took a total of 1 minute 27 seconds to dump the entire database (the same data as used for mysqldump), and the tool also shows its progress, which is really helpful for knowing how much of the backup has completed.
http://www.maatkit.org/ has mk-parallel-dump and mk-parallel-restore tools.
If you’ve been wishing for multi-threaded mysqldump, wish no more. This tool dumps MySQL tables in parallel. It is a much smarter mysqldump that can either act as a wrapper for mysqldump (with sensible default behavior) or as a wrapper around SELECT INTO OUTFILE. It is designed for high-performance applications on very large data sizes, where speed matters a lot. It takes advantage of multiple CPUs and disks to dump your data much faster.
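A minimal sketch of how these tools might be invoked (the thread count and backup directory are assumptions; check mk-parallel-dump --help for the exact options your version supports):

# dump all tables with 4 worker threads into /backups
mk-parallel-dump --threads 4 --basedir /backups
# later, restore in parallel from the same directory
mk-parallel-restore --threads 4 /backups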
mysqldump also has various options that can help, such as deferring index builds while the dump is being imported and instead doing them en masse on completion.
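With MyISAM tables this shows up in the dump file itself as DISABLE KEYS / ENABLE KEYS statements wrapped around the bulk inserts (output roughly like the following; the table name and values are illustrative):

/*!40000 ALTER TABLE `my_table` DISABLE KEYS */;
INSERT INTO `my_table` VALUES (1,'a'),(2,'b'),(3,'c');
/*!40000 ALTER TABLE `my_table` ENABLE KEYS */;

This way the non-unique indexes are rebuilt in one pass after all rows are loaded, instead of being updated row by row.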
If you are importing to InnoDB, the single most effective thing you can do is to put

innodb_flush_log_at_trx_commit = 2

in your my.cnf temporarily while the import is running. You can put it back to 1 if you need ACID.
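If restarting the server for a my.cnf change is inconvenient, the variable is dynamic and can be flipped just for the duration of the import (a sketch; requires a privileged account):

SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- run the import here
SET GLOBAL innodb_flush_log_at_trx_commit = 1;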
I guess your question also depends on where the bottleneck is: if it is the network, the -C/--compress flag to mysqldump can help. Also, have a look at the --quick flag for mysqldump (and --disable-keys if you are using MyISAM).
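Putting those flags together, a dump invocation might look like this (db_name and the output file are placeholders):

mysqldump -C --quick --disable-keys db_name > dump.sql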
Using extended inserts in dumps should make imports faster (this is mysqldump's --extended-insert option, which is enabled by default as part of --opt).
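The difference in the dump output looks roughly like this (table and values are illustrative):

-- with --extended-insert: many rows per statement
INSERT INTO `t` VALUES (1,'a'),(2,'b'),(3,'c');
-- with --skip-extended-insert: one statement per row
INSERT INTO `t` VALUES (1,'a');
INSERT INTO `t` VALUES (2,'b');
INSERT INTO `t` VALUES (3,'c');

Fewer statements means less parsing and round-trip overhead on import.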
Turn off foreign key checks and turn on auto-commit.
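In a client session, that might look like the following sketch (dump.sql is a placeholder):

SET foreign_key_checks = 0;
SET autocommit = 1;
SOURCE dump.sql;
SET foreign_key_checks = 1;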
mysqlhotcopy might be an alternative for you too if you only have MyISAM tables.
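Basic usage copies the table files directly on the server host, which is much faster than a logical dump (the database name and target directory are placeholders):

mysqlhotcopy db_name /path/to/backup_dir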