 

Piping mysqldump to mysql

Sometimes I need to copy a MySQL database (db1) to another database (db2). I found this command to be concise and effective:

mysqldump --opt db1 | mysql db2

It was working fine, but now it breaks with the following error:

ERROR 1064 (42000) at line 1586: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'some_table_name'': MySQL server ' at line 1

The first thing that comes to mind is that the database is too big for piping like this (the uncompressed SQL dump is >1G, 1090526011 bytes at the moment, to be precise). When I do mysqldump > file and then mysql < file, it works fine with no errors. The table mentioned in the error message (some_table_name) is neither big nor special.
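For reference, the file-based sequence that works without errors looks like this (the file path is my own choice, not something prescribed):

```shell
# Dump to an intermediate file first, then import it into the target database
mysqldump --opt db1 > /tmp/db1.sql
mysql db2 < /tmp/db1.sql
```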

My second idea comes from the impression that the error message might be truncated, and that it actually says

"...MySQL server has gone away"

Quick research suggests it's possible that the maximum number of open files (for MySQL and/or the system) has been reached. So I've tried adding --skip-lock-tables to mysqldump and raising open-files-limit, but no luck: same error.

The obvious solution is to dump and then import separately (as that works fine), but piping seems better and cleaner to me (let me know if I'm wrong), plus I'm curious to find out what causes this problem. Did I hit some limit that affects command piping?

I've been doing this on a hosting server running MySQL 5.1.60 on Linux, and on my dev machine running MySQL 5.1.58 on Linux. The latter gives a slightly different error:

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table other_table_name at row: 7197


UPDATE: The problem was solved by doing a separate dump and import, without a pipe. Even though I feel it's not really an answer to my question, ssmusoke's suggestions were the most to the point, resulting in the accepted answer.

asked Mar 01 '12 by parserr

2 Answers

"MySQL server has gone away" is a symptom of a max packet error. http://dev.mysql.com/doc/refman/5.0/en/gone-away.html

Modify your command to specify a larger value for max_allowed_packet.

mysqldump --opt db1 | mysql --max_allowed_packet=32M db2

The default is 1M. It may take trial and error to get the right value. http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
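To see what a server is currently using, and to make the change persistent rather than per-invocation, something like the following should work (32M is just a starting point, as noted above):

```shell
# Check the current value on the server
mysql -e "SHOW VARIABLES LIKE 'max_allowed_packet'"

# To make it permanent, set it in my.cnf under the [mysqld] section
# and restart the server:
#   [mysqld]
#   max_allowed_packet=32M
```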

answered Sep 29 '22 by txyoji

The problem may be that the load on the servers gets too high when dumping and loading at the same time. Piping also means you lose some optimizations, such as extended inserts and the ability to disable foreign key checks, which you can take advantage of when you dump to a file and then import it.

I would recommend using mysqldump to generate a backup file, then loading it with mysql. That way the load on your server is reduced and, as you said, it always works. You can even wrap both steps in a bash script so you don't have to run the mysqldump and loading commands by hand.
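A minimal sketch of such a script (the database names come from the question; the dump path and error handling are my own additions):

```shell
#!/bin/bash
# Dump db1 to a file, then load it into db2 -- stop on the first failure
set -euo pipefail

DUMP_FILE=/tmp/db1.sql          # hypothetical path; change to suit

mysqldump --opt db1 > "$DUMP_FILE"
mysql db2 < "$DUMP_FILE"
rm -f "$DUMP_FILE"              # clean up the intermediate file
```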

answered Sep 29 '22 by Stephen Senkomago Musoke