 

Backup a mysql database with a large number of tables (>100,000)

I know, really bad database design, but here we are. I have a forum platform (based on PunBB), and for each forum I've generated a new set of tables. Really bad idea.

Time has passed and now I have more than 100,000 tables (SHOW TABLES; SELECT FOUND_ROWS(); - 112965 rows in set (1.21 sec)). Performance is great though, as the tables do the job of indexes: when you make a direct reference to one table, it's ultrafast.

The issue is that now I am trying to back everything up and move it to another server. Of course, it takes forever. I've launched a mysqldump:

mysqldump --max_allowed_packet=500M --force --opt -u root -pXXXX a > fullbackup.sql 

And it's still processing after a little more than 12 hours! The backup is already 546 MB in size, and MySQL is still alive and working.
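
For comparison, here is a sketch of a variation that is often a bit faster (assuming the tables are InnoDB and that gzip is installed on the box; both are assumptions, not givens here). It streams the dump through gzip and avoids locking every table for the whole run:

# sketch only: --single-transaction assumes InnoDB; the output path is an example
mysqldump --single-transaction --quick --skip-lock-tables --force -u root -pXXXX a | gzip > fullbackup.sql.gz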

I've tried copying the MySQL files directly, but I ran into the issue that a lot of tables were corrupted.

Any ideas on how to speed this up?

asked Jun 09 '13 by Hartator

1 Answer

If you are using AWS RDS, take a snapshot.

If you are not, use some other snapshot-based tool. Percona has one: http://www.percona.com/software/percona-xtrabackup/. Using mysqldump to back up large databases is extremely slow.
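
As a rough sketch of the XtraBackup approach (the paths and credentials below are placeholders, and older XtraBackup releases expose the same functionality through the innobackupex wrapper instead), a physical backup is taken while the server keeps running and is then "prepared" before being moved to the new machine:

# sketch: physical backup of the running server; the target directory is a placeholder
xtrabackup --backup --user=root --password=XXXX --target-dir=/backups/full
# make the backed-up files consistent before copying them to the new server
xtrabackup --prepare --target-dir=/backups/full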

If your source database is already corrupt, that's an independent issue.

If you are copying the database files and the copy is corrupt, that is because you are doing a "hot copy": you can't copy a database while it's running without a special snapshot tool. Even file systems have such tools. You need a consistent set of files.
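
As a rough sketch of what such a snapshot can look like with LVM (the volume group, volume, and mount names are made up for illustration), the server is briefly locked, a snapshot of the data volume is created, and the lock is released before the files are copied off:

# in an open mysql session, and keep that session open until the snapshot exists:
#   FLUSH TABLES WITH READ LOCK;
# then, from a shell on the same host (volume names are placeholders):
lvcreate --snapshot --size 5G --name mysql_snap /dev/vg0/mysql_data
# back in the mysql session: UNLOCK TABLES;
# mount the snapshot, copy the files to the new server at leisure, then drop it:
mkdir -p /mnt/mysql_snap
mount /dev/vg0/mysql_snap /mnt/mysql_snap
# ... copy /mnt/mysql_snap to the new server here ...
umount /mnt/mysql_snap
lvremove /dev/vg0/mysql_snap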

answered Nov 05 '22 by Gidon Wise