I have a Java application that uses MySQL as the back end. Every night we take a backup of MySQL using mysqldump, and the application stops working for that time period (approximately 20 minutes).
The command used for taking the backup:
$MYSQLDUMP -h $HOST --user=$USER --password=$PASS $database > \
$BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql
gzip -f -9 $BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql
Is this normal, or am I doing something wrong that is causing the DB to stall during that time?
Thanks, K
See https://serverfault.com/questions/224711/backing-up-a-mysql-database-while-it-is-still-in-use/224716#224716
I suspect you are using MyISAM and the tables are being locked. I suggest you switch to InnoDB and use the --single-transaction flag. That will allow updates to continue and will also preserve a consistent state in the backup.
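A minimal sketch of the asker's nightly backup rewritten with --single-transaction (this assumes the tables are InnoDB; all variable values below are placeholders, not taken from the original script):

```shell
#!/bin/sh
# Placeholder values -- substitute your real settings (these are
# assumptions, not from the original post).
MYSQLDUMP=mysqldump
HOST=localhost
USER=backupuser
PASS=secret
database=appdb
BACKDIR=/var/backups
SERVER=db1
DATE=$(date +%F)

OUT="$BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql"

# --single-transaction takes a consistent snapshot inside one
# transaction, so InnoDB tables stay writable while the dump runs.
CMD="$MYSQLDUMP -h $HOST --user=$USER --password=$PASS --single-transaction $database"

# Shown with echo for illustration; remove the echo (and run the gzip
# step) once the placeholders point at a real server.
echo "$CMD > $OUT"
# $CMD > "$OUT" && gzip -f -9 "$OUT"
```

Note that --single-transaction only helps for InnoDB tables; MyISAM tables in the same dump will still be locked.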
mysqldump has to get a read lock on the tables and hold it for the duration of the backup in order to ensure a consistent backup. However, a read lock can stall subsequent reads if a write occurs in between (i.e. read -> write -> read): the first read lock blocks the write lock, which in turn blocks the second read lock.
This depends in part on your table type. If you are using MyISAM, locks apply at the table level, so the entire table is locked for the duration of the dump. I believe that locking in InnoDB works differently (row-level locks and multiversioning), so it will not lock the entire table.
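To see which engine your tables actually use before deciding, you can query information_schema. The credentials and database name below are placeholders:

```shell
#!/bin/sh
# Hypothetical database name -- adjust to your setup.
DB=appdb
SQL="SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = '$DB'"

# Shown with echo for illustration; drop the echo to run it for real.
echo mysql -h localhost -u backupuser -p "$DB" -e "$SQL"
# Any table listed as MyISAM can then be converted with, e.g.:
#   ALTER TABLE <tbl> ENGINE=InnoDB;
```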
If tables are stored in the InnoDB storage engine, mysqldump provides a way of making an online backup of these (see the command below).
shell> mysqldump --all-databases --single-transaction > all_databases.sql
This may help... it is specifically the --single-transaction option that matters here, not --all-databases (from the mysqldump manpage).
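Since the original script gzips the dump afterwards, the two steps can also be combined by piping mysqldump straight into gzip, which avoids the large intermediate .sql file. Paths and names below are placeholders:

```shell
#!/bin/sh
# Placeholder values (assumptions, not from the original post).
database=appdb
BACKDIR=/var/backups
SERVER=db1
DATE=$(date +%F)

OUT="$BACKDIR/$SERVER-mysqlbackup-$database-$DATE.sql.gz"

# Shown with echo for illustration; remove it to run the real backup.
echo "mysqldump --single-transaction $database | gzip -9 > $OUT"
```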