
How to obtain a correct dump using mysqldump and single-transaction when DDL is used at the same time?

I'm new to MySQL and I'm figuring out the best way to perform an on-line hot logical backup using mysqldump. This page suggests this command line:

mysqldump --single-transaction --flush-logs --master-data=2
          --all-databases > backup_sunday_1_PM.sql

but... if you read the documentation carefully you find that:

While a --single-transaction dump is in process, to ensure a valid dump file (correct table contents and binary log position), no other connection should use the following statements: ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE. A consistent read is not isolated from those statements, so use of them on a table to be dumped can cause the SELECT performed by mysqldump to retrieve the table contents to obtain incorrect contents or fail.

So, is there any way to prevent this possible dump corruption scenario? I.e., a command that could block those statements temporarily.

PS: MySQL bug entry on this subject http://bugs.mysql.com/bug.php?id=27850

asked Jan 16 '09 by Valentino Miazzo



2 Answers

Open a mysql command window and issue this command:

mysql> FLUSH TABLES WITH READ LOCK;

This will lock all tables in all databases on this MySQL instance until you issue UNLOCK TABLES (or terminate the client connection that holds these read locks).

To confirm this, you can open another command window and try to do an ALTER, DROP, RENAME or TRUNCATE. These commands hang, waiting for the read lock to be released. Hit Ctrl-C to terminate the waiting.

But while the tables have a read lock, you can still perform a mysqldump backup.
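The sequence described above can be sketched as a script. This is only a sketch: the credentials are placeholders, and the `SELECT SLEEP(...)` trick for holding the locking connection open is an assumption not mentioned in the answer (the global read lock is released as soon as the client that acquired it disconnects).

```shell
#!/bin/sh
# Sketch: hold the global read lock in one client while dumping in another.
# Credentials are placeholders; adjust for your environment.

# Session 1 (background): acquire the lock, then keep the connection open
# with SLEEP, because the lock dies with the connection.
mysql -u root -p'secret' \
  -e "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(3600);" &
LOCK_PID=$!
sleep 5   # crude wait to let the lock be acquired first

# Session 2: ALTER/DROP/RENAME/TRUNCATE are now blocked; the dump can run.
mysqldump -u root -p'secret' --all-databases > backup.sql

# Ending the locking client releases the global read lock.
kill "$LOCK_PID"
```

The `sleep 5` is a simplification; a robust script would confirm the lock is held before starting the dump.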

The FLUSH TABLES WITH READ LOCK command may be the same as using the --lock-all-tables option of mysqldump. It's not totally clear, but this doc seems to support it:

Another use for UNLOCK TABLES is to release the global read lock acquired with FLUSH TABLES WITH READ LOCK.

Both FLUSH TABLES WITH READ LOCK and --lock-all-tables use the phrase "global read lock," so I think it's likely that these do the same thing. Therefore, you should be able to use that option to mysqldump and protect against concurrent ALTER, DROP, RENAME, and TRUNCATE.
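If the two are indeed equivalent, the single-command form would look like this (a sketch; note that this blocks all writes for the duration of the dump, so it is a consistent backup but not a hot one):

```shell
# Blocks concurrent DDL (and all other writes) while the dump runs.
mysqldump --lock-all-tables --all-databases > backup_locked.sql
```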


Re. your comment: The following is from Guilhem Bichot in the MySQL bug log that you linked to:

Hi. --lock-all-tables calls FLUSH TABLES WITH READ LOCK. Thus it is expected to block ALTER, DROP, RENAME, or TRUNCATE (unless there is a bug or I'm wrong). However, --lock-all-tables --single-transaction cannot work (mysqldump throws an error message): because lock-all-tables locks all tables of the server against writes for the duration of the backup, whereas single-transaction is intended to let writes happen during the backup (by using a consistent-read SELECT in a transaction), they are incompatible in nature.

From this, it sounds like you cannot both allow concurrent access during a backup and block ALTER, DROP, RENAME, and TRUNCATE at the same time.

answered Sep 23 '22 by Bill Karwin


I thought the same thing when reading that part of the documentation. However, I found more information:

4.5.4. mysqldump — A Database Backup Program http://dev.mysql.com/doc/en/mysqldump.html

For InnoDB tables, mysqldump provides a way of making an online backup:

shell> mysqldump --all-databases --single-transaction > all_databases.sql

This backup acquires a global read lock on all tables (using FLUSH TABLES WITH READ LOCK) at the beginning of the dump. As soon as this lock has been acquired, the binary log coordinates are read and the lock is released. If long updating statements are running when the FLUSH statement is issued, the MySQL server may get stalled until those statements finish. After that, the dump becomes lock free and does not disturb reads and writes on the tables. If the update statements that the MySQL server receives are short (in terms of execution time), the initial lock period should not be noticeable, even with many updates.

There is a conflict with the --opt and --single-transaction options:

--opt

This option is shorthand. It is the same as specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. It should give you a fast dump operation and produce a dump file that can be reloaded into a MySQL server quickly.

The --opt option is enabled by default. Use --skip-opt to disable it.

If I understand your question correctly, you want the actual data and the DDL (Data Definition Language) together, because if you only wanted the DDL you would use --no-data. More information about this can be found at:

http://dev.mysql.com/doc/workbench/en/wb-reverse-engineer-create-script.html

Use the --databases option with mysqldump if you wish to create the database as well as all its objects. If there is no CREATE DATABASE db_name statement in your script file, you must import the database objects into an existing schema or, if there is no schema, a new unnamed schema is created.
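For example (db1 and db2 are hypothetical schema names):

```shell
# Include CREATE DATABASE and DROP DATABASE statements in the dump,
# so restoring does not require a pre-created schema.
mysqldump --databases --add-drop-database db1 db2 > schemas.sql

# Restore: the databases are recreated from the dump itself.
mysql < schemas.sql
```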

As suggested by The Definitive Guide to MySQL 5 by Michael Kofler, I would suggest the following options:

--skip-opt
--single-transaction
--add-drop-table
--create-options
--quick
--extended-insert
--set-charset
--disable-keys

Additionally, not mentioned in that list is --order-by-primary. If you are using the --databases option, you should also use --add-drop-database, especially if combined with this answer. If you are backing up databases connected over different networks, you may need to use the --compress option.

So a mysqldump command (without using the --compress, --databases, or --add-drop-database options) would be:

mysqldump --skip-opt --order-by-primary --single-transaction --add-drop-table --create-options --quick --extended-insert --set-charset -h db_host -u username --password="myPassword" db_name | mysql --host=other_host db_name

I removed the reference to --disable-keys that was given in the book, as it is not effective with InnoDB as I understand it. The MySQL manual states:

For each table, surround the INSERT statements with /*!40000 ALTER TABLE tbl_name DISABLE KEYS */; and /*!40000 ALTER TABLE tbl_name ENABLE KEYS */; statements. This makes loading the dump file faster because the indexes are created after all rows are inserted. This option is effective only for nonunique indexes of MyISAM tables.

I also found this bug report http://bugs.mysql.com/bug.php?id=64309, which has comments at the bottom from Paul DuBois (who has also written a few MySQL books). I have no other reference on this specific issue beyond those comments.

Now, to create the "Ultimate Backup", I would suggest considering something along the lines of this shell script:

  1. https://github.com/red-ant/mysql-svn-backup/blob/master/mysql-svn.sh
answered Sep 24 '22 by 14 revs