 

Getting "Lost connection to MySQL server" when using mysqldump even with the max_allowed_packet parameter

I want to dump a specific table from my remote server's database, which works fine, but one of the tables has 9M rows and I get:

Lost connection to MySQL server during query when dumping table `table_name` at row: 2002359

After reading online I understood that I need to increase my max_allowed_packet, and that it's possible to add it to my command.

So I'm running the following command to dump my table:

mysqldump -uroot -h my.host -p'mypassword' --max_allowed_packet=512M db_name table_name | gzip  > dump_test.sql.gz

and for some reason, I still get:

Lost connection to MySQL server during query when dumping table `table_name` at row: 2602499

Am I doing something wrong?

It's weird, it's only 9M records... not that big.

asked Oct 31 '18 by JohnBigs

People also ask

How do I fix the lost connection to MySQL server during query?

Open the MySQL Workbench Preferences. Check if the SSH Timeout and DBMS Timeout value is set to only a few seconds. Try to increase the default value of the connection timeouts. Save the settings, close the MySQL Workbench and reopen the connection to see if you are able to connect to the database.
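If you prefer to inspect the server-side values directly instead of going through Workbench, something like this (reusing the host and credentials from the question above) shows the current timeout settings:

    # list every timeout-related server variable, e.g. wait_timeout, net_read_timeout
    mysql -h my.host -uroot -p -e "SHOW VARIABLES LIKE '%timeout%';"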

Why is mysqldump not working?

If mysqldump is not recognized by the cmd prompt, that means cmd cannot find where mysqldump.exe is located. You need to add the path of the directory containing the exe to the PATH variable under environment variables. After doing that, your command will start working in the cmd prompt.
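For example, on Windows you might check and extend PATH like this (the install directory below is only an assumption; adjust it to your actual MySQL version and location):

    :: check whether cmd can already find mysqldump
    where mysqldump
    :: add the MySQL bin directory to PATH for the current session (example path)
    set PATH=%PATH%;C:\Program Files\MySQL\MySQL Server 8.0\bin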

What permissions are needed for mysqldump?

mysqldump requires at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the --single-transaction option is not used. Certain options might require other privileges as noted in the option descriptions.
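As a sketch, granting just those privileges on the database from the question to a hypothetical backup account could look like this:

    CREATE USER 'backup_user'@'%' IDENTIFIED BY 'strong_password';
    GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES ON db_name.* TO 'backup_user'@'%';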

What is the difference between mysqldump and mysqlpump?

mysqlpump is the 4th fastest, followed closely by mydumper when using gzip. mysqldump is the classic old-school way to perform dumps and is the slowest of the four tools. On a server with more CPUs, the potential parallelism increases, giving even more of an advantage to the tools that can benefit from multiple threads.
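For comparison, a parallel mysqlpump invocation (available in MySQL 5.7 and later; the thread count here is just an example) might look like this:

    # dump one table using up to 4 parallel threads
    mysqlpump -uroot -h my.host -p --default-parallelism=4 db_name table_name | gzip > dump_test.sql.gz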


1 Answer

Try adding the --quick option to your mysqldump command; it works better with large tables. It streams rows from the result set to the output as they arrive, rather than buffering the whole table in memory and then writing it out.

mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz

You can also try adding the --compress option to your mysqldump command. That makes it use the more network-friendly compressed connection protocol to your MySQL server. Notice that you still need the gzip pipe; MySQL's compressed protocol doesn't cause the dump to come out of mysqldump compressed.
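Combining both ideas, the command might look like this (a sketch based on the command from the question; note that --compress is deprecated in favor of --compression-algorithms as of MySQL 8.0.18, but still works):

    # stream rows (--quick) over a compressed client/server connection (--compress)
    mysqldump -uroot -h my.host -p'mypassword' --quick --compress --max_allowed_packet=512M db_name table_name | \
    gzip > dump_test.sql.gz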

It's also possible the server is timing out its connection to the mysqldump client. You can try resetting the timeout durations. Connect to your server via some other means and issue these queries, then run your mysqldump job.

These set the timeouts to one calendar day.

    SET GLOBAL wait_timeout=86400;
    SET GLOBAL interactive_timeout=86400;
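If you can't keep an interactive session open, a one-liner through the mysql client works too (setting these globals requires the SUPER or SYSTEM_VARIABLES_ADMIN privilege):

    # raise both timeouts to 24 hours, then run the mysqldump job
    mysql -h my.host -uroot -p'mypassword' -e "SET GLOBAL wait_timeout=86400; SET GLOBAL interactive_timeout=86400;"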

Finally, if your server is far away from your machine (through routers and firewalls) something may be disrupting mysqldump's connection. Some inferior routers and firewalls have time limits on NAT (network address translation) sessions. They're supposed to keep those sessions alive while they are in use, but some don't. Or maybe you're hitting a time or size limit configured by your company for external connections.

Try logging into a machine closer to the server and running mysqldump on it. Then use some other means (sftp?) to copy your gz file to your own machine.
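For example (user@host-near-db is just a placeholder for a machine on the same network as the database server):

    # run the dump on a machine close to the database, writing the gz file there
    ssh user@host-near-db "mysqldump -uroot -h my.host -p'mypassword' --quick db_name table_name | gzip > /tmp/dump_test.sql.gz"
    # then pull the compressed file back to your own machine
    scp user@host-near-db:/tmp/dump_test.sql.gz .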

Or, you may have to segment the dump of this table. You can do something like this (not debugged).

mysqldump -uroot -h my.host -p'mypassword' \
          db_name table_name --skip-create-options --skip-add-drop-table \
          --where="id>=0 AND id < 1000000" | \
          gzip....

Then repeat that with these lines.

          --where="id>=1000000 AND id < 2000000" | \

          --where="id>=2000000 AND id < 3000000" | \
          ...

until you get all the rows. Pain in the neck, but it will work.
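If you want to script the repetition, a rough sketch along these lines (assuming an integer id column running from 0 up to roughly 9 million) would loop over the ranges:

    # rough sketch: dump ~9M rows in 1M-row slices, one .gz file per slice
    for start in $(seq 0 1000000 8000000); do
      end=$((start + 1000000))
      mysqldump -uroot -h my.host -p'mypassword' --quick \
        --skip-create-options --skip-add-drop-table \
        --where="id >= ${start} AND id < ${end}" \
        db_name table_name | gzip > "dump_part_${start}.sql.gz"
    done

Because the slices skip the CREATE/DROP statements, you can load them back in order one after another.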

answered Sep 17 '22 by O. Jones