Slow running MySql restore - 10 times slower than backup speed and still going

I'm moving a large (~80GB) database from its testbed into what will be its production environment. We're working on Windows servers. This is the first time we've worked with MySQL and we're still learning the expected behaviours.

We backed up the data with

mysqldump -u root -p --opt [database name] > [database name].sql

Which took about 3 hours and created a file 45GB in size. It copied over to its new home overnight and, next morning, I used MySQL Workbench to launch a restore. According to its log, it ran

mysql.exe --defaults-file="[a path]\tmpc8tz9l.cnf" --protocol=tcp --host=127.0.0.1 --user=[me] --port=3306 --default-character-set=utf8 --comments --database=[database name] < "H:\[database name].sql"

And it's working - if I connect to the instance I can see the database and some of its tables.

The trouble is, it seems to be taking forever. I presumed it would restore in the same 3-4 hour time frame it took to back up, maybe faster because it's restoring onto a more powerful server with SSD drives.

But it's now about 36 hours since the restore started and the DB is apparently 30GB in size. And it appears to be getting slower as it goes on.

I don't want to interrupt it now that it's started working, so I guess I just have to wait. But for future reference: is this treacle-slow restore speed normal? Is there anything we can do to improve matters next time we need to restore a big DB?

asked Feb 07 '18 by Bob Tway




1 Answer

Very large imports are notoriously hard to make fast. It sounds like your import is slowing down, processing fewer rows per second, as it progresses. That probably means MySQL is checking each new row for key conflicts with the rows already inserted.

A few things you can do:

Before starting, disable key checking.

   SET FOREIGN_KEY_CHECKS = 0;
   SET UNIQUE_CHECKS = 0;

After the import finishes, turn key checking back on.

  SET UNIQUE_CHECKS = 1;
  SET FOREIGN_KEY_CHECKS = 1;
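
Both of those are session variables, so they have to run in the same client session as the import itself. One way to arrange that, sketched for a Unix-style shell (the database and file names are placeholders; on Windows, type in cmd.exe or Get-Content in PowerShell can stand in for cat):

   (echo "SET FOREIGN_KEY_CHECKS = 0; SET UNIQUE_CHECKS = 0;"; \
    cat "[database name].sql"; \
    echo "SET UNIQUE_CHECKS = 1; SET FOREIGN_KEY_CHECKS = 1;") \
   | mysql -u root -p [database name]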

And, if you can wrap every few thousand lines of INSERT operations in

   START TRANSACTION;
   INSERT ...
   INSERT ...
   ...
   COMMIT;

you'll save a lot of disk churning.

Notice that this only matters for tables with many thousands of rows or more.

mysqldump can be made to create a dump that disables keys: https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_disable-keys

mysqldump --disable-keys
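
With that option, each table's block of INSERTs in the dump is bracketed roughly like this (schematic - the real file wraps these statements in versioned /*!40000 ... */ comments, and the deferred index build chiefly benefits MyISAM tables):

   ALTER TABLE [table name] DISABLE KEYS;
   INSERT INTO [table name] VALUES (...),(...),...;
   ALTER TABLE [table name] ENABLE KEYS;

so non-unique indexes are rebuilt once at the end instead of being updated row by row.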

Similarly,

mysqldump --extended-insert --no-autocommit

will make the dumped SQL file contain a variant of my suggestion about using transactions.
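
Roughly, each table's section of the dump then comes out like this (schematic), committing once per table rather than once per INSERT:

   SET autocommit=0;
   INSERT INTO [table name] VALUES (...),(...),...;
   INSERT INTO [table name] VALUES (...),(...),...;
   COMMIT;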

In your case, if you had used --opt --no-autocommit you would probably have gotten an optimal dump file. You already used --opt.
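
So next time, the whole suggestion boils down to one extra flag, since --opt is on by default and already implies --disable-keys and --extended-insert:

mysqldump -u root -p --opt --no-autocommit [database name] > [database name].sql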

answered Sep 21 '22 by O. Jones