 

How to take a backup of a database which has a table of size 105 GB?

Tags:

mysql

backup

There is a MySQL database using the InnoDB engine. When I take a backup of my database, which is about 150 GB, with the mysqldump command on a Linux server, I get the error message

mysqldump Lost connection.

One table is 105 GB, with about 420,000,000 rows and roughly 100,000 rows inserted per hour on average. I got that error while the backup was running on this table. How do I take the backup, and what is a foolproof process?

asked Mar 13 '18 by Pankaj Kumar

4 Answers

This is really similar to "Lost connection to MySQL server during query?" and https://dba.stackexchange.com/questions/97846/mysqldump-error-2013

Add these parameters to your mysqldump command line:

--max-allowed-packet=1G --net-read-timeout=7200 --net-write-timeout=7200 --net-buffer-length=32704
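Put together, the full invocation might look like the line below (dbName and the output file are placeholders; depending on your MySQL version you may need to set the two timeout values on the server side rather than on the mysqldump command line):

mysqldump -u root -p --max-allowed-packet=1G --net-buffer-length=32704 --net-read-timeout=7200 --net-write-timeout=7200 dbName > dbName_full.sql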
answered by A STEFANI

You can take the backup in parts, using a WHERE clause to split it into several files.

Look for an indexed column (or simply use the primary key) and run several mysqldump commands. Note that the result will not be a snapshot of the whole table at a single point in time.

mysqldump -u root -p  dbName tableName --where="id>=10000 AND id<20000" > dbName_tableName_10_20.sql
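To automate the chunking, a small shell loop along these lines could work. This is only a sketch: it assumes an integer primary key named id, roughly 420,000,000 rows, and a chunk size of 10,000,000; adjust the names and ranges to your schema.

#!/bin/bash
# Dump tableName from dbName in primary-key ranges of 10,000,000 rows each.
# Note: -p prompts for the password on every iteration; a ~/.my.cnf
# [client] section with the password avoids the repeated prompts.
CHUNK=10000000
MAX=420000000
for ((start=0; start<MAX; start+=CHUNK)); do
  end=$((start + CHUNK))
  mysqldump -u root -p dbName tableName --where="id>=${start} AND id<${end}" > "dbName_tableName_${start}_${end}.sql"
done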

Consider splitting (or partitioning) the table in the future for easier maintenance.

answered by Ami Heines


The "Lost Connection" problem is caused by too much data through the dumping channel - you need to increase max_allowed_packet in my.cnf.

I had this pointed out to me a long time ago, but now, rereading the docs, I'm not too sure it applies. You might try it anyway, or use it for further research:

Note that mysqldump builds multiple-row INSERT statements which might exceed the buffer length. You may then need to resort to --skip-extended-insert (which will greatly slow down the restore) when backing up.

The docs say:

net_buffer_length: The initial size of the buffer for client/server communication. When creating multiple-row INSERT statements (as with the --extended-insert or --opt option), mysqldump creates rows up to net_buffer_length bytes long. If you increase this variable, ensure that the MySQL server net_buffer_length system variable has a value at least this large. [...]

max_allowed_packet: The maximum size of one packet or any generated/intermediate string, or any parameter sent by the mysql_stmt_send_long_data() C API function. The default is 4MB.

The packet message buffer is initialized to net_buffer_length bytes, but can grow up to max_allowed_packet bytes when needed.
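If you want to verify what the server is currently using for these two variables, and to raise max_allowed_packet without a restart, something like the following should work (1073741824 bytes = 1G, matching the first answer; the global change only affects new connections):

mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'; SHOW VARIABLES LIKE 'net_buffer_length';"
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 1073741824;"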

However, backing up will lock the database (or the tables) and cause service problems. So you might be better off setting up replication first, so that a slave is kept in a consistent state by continuous updates from its master. When you need to back up, you lock the slave and set it to read only, then either run mysqldump there or just back up the binary data files (which the manual recommends for larger databases).
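As a rough sketch of that lock-and-dump step on the slave (replica-host, dbName and the credentials are placeholders, and the exact replication statements depend on your MySQL version):

# Pause the replication SQL thread and make the slave read only for the dump.
mysql -h replica-host -u root -p -e "STOP SLAVE SQL_THREAD; SET GLOBAL read_only = ON;"
# Dump from the slave; --single-transaction gives a consistent InnoDB snapshot.
mysqldump -h replica-host -u root -p --single-transaction --max-allowed-packet=1G dbName > dbName_full.sql
# Re-enable writes and resume replication.
mysql -h replica-host -u root -p -e "SET GLOBAL read_only = OFF; START SLAVE SQL_THREAD;"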

answered by LSerni


Please consider these two things:

First, configuration:

Configure max_allowed_packet to 1024M
Configure net_read_timeout to 7200
Configure net_write_timeout to 7200
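A minimal my.cnf fragment applying these values might look like the sketch below; whether each setting belongs under [mysqld] (the server) or [mysqldump] (the client tool) depends on where you want it to take effect, so treat this as a starting point rather than a definitive configuration.

[mysqld]
max_allowed_packet = 1024M
net_read_timeout = 7200
net_write_timeout = 7200

[mysqldump]
max_allowed_packet = 1024M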

Second, the dump command:

mysqldump -u root --skip-extended-insert -p <DB> > DB.sql
answered by Kernelv5