 

How can I slow down a MySQL dump as to not affect current load on the server?

While doing a MySQL dump is easy enough, I have a live dedicated MySQL server on which I want to set up replication. To do this, I need dumps of the databases to import into my replication slave.

The issue is that when I do the dumps, MySQL goes at it full force and ties up resources from the sites that are connecting to it. I am wondering if there is a way to limit the dump queries to a low-priority state in which preference is given to live connections? The idea is that the load from external sites should not be affected by the effort MySQL puts into doing a full dump...

asked Apr 14 '11 by z33k3r


People also ask

How long does MySQL dump take?

It took a total of 1 minute 27 seconds to dump the entire database (the same data as used for mysqldump), and it also shows its progress, which is really helpful for knowing how much of the backup has completed.

What can you do with a MySQL dump?

To generate the backup using mysqldump, the user needs the 'Select' privilege to dump the tables, 'Show View' for views, and 'Trigger' for the triggers. If you are not using the --single-transaction option, then the 'Lock Tables' privilege must also be granted. The -p [password] option supplies the valid password of the MySQL user.
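As a rough sketch, those privileges could be granted to a dedicated backup account along these lines (the user, host, password, and database names below are placeholders, not anything from the question):

mysql -u root -p -e "CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'secret';"
# SELECT for tables, SHOW VIEW for views, TRIGGER for triggers;
# LOCK TABLES is only needed when --single-transaction is not used
mysql -u root -p -e "GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES ON backup_db.* TO 'backup_user'@'localhost';"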


2 Answers

If you are using InnoDB tables, use the --single-transaction and --quick options for mysqldump.
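A minimal sketch of what that can look like (USER and DATABASE are placeholders, as in the question):

# --single-transaction takes a consistent snapshot without locking InnoDB tables;
# --quick streams rows one at a time instead of buffering each table in memory
mysqldump -u USER -p --single-transaction --quick DATABASE > DATABASE.sql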

answered Sep 19 '22 by Drew Clayton


I have very large databases with tens of thousands of tables, some of which hold up to 5GB of data across tens of millions of rows. (I run a popular service)... I've always had headaches when backing up these databases. Using the default mysqldump options quickly spirals the server load out of control and locks up everything... affecting my users. Trying to stop the process can lead to crashed tables and lots of downtime while those tables are recovered.

I now use...

mysqldump -u USER -p --single-transaction --quick --lock-tables=false DATABASE | gzip > OUTPUT.gz 

The mysqldump reference at dev.mysql.com even says...

To dump large tables, you should combine the --single-transaction option with --quick.

The manual says nothing about that being dependent on the tables being InnoDB; mine are MyISAM, and this worked beautifully for me. Server load was almost completely unaffected, and my service ran like a Rolex during the entire process. If you have large databases and backing them up is affecting your end users... this IS the solution. ;)
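Since the goal in the question is to seed a replication slave, one way to load that gzipped dump on the slave might look like the following sketch; the host, user, and file names are placeholders, and replication-specific options (such as recording the binary log position) are not covered here:

# copy the compressed dump to the slave, then stream it into mysql
scp OUTPUT.gz slave-host:/tmp/OUTPUT.gz
gunzip < /tmp/OUTPUT.gz | mysql -u USER -p DATABASE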

answered Sep 21 '22 by CA3LE