 

Improve speed of MySQL import

I have a large database of 22 GB. I take backups with the mysqldump command in gzip format.

When I extract the gz file it produces a .sql file of 16.2 GB.

When I try to import the database on my local server, it takes approximately 48 hours. Is there a way to increase the speed of the import process?

Also, I would like to know whether any hardware changes need to be made to improve the performance.

Current System Config

Processor: 4th Gen i5
RAM: 8 GB

Update

My my.cnf is as follows:

#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port        = 3306
socket      = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice        = 0

[mysqld]
#
# * Basic Settings
#
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address        = 127.0.0.1
#
# * Fine Tuning
#
key_buffer      = 16M
max_allowed_packet  = 512M
thread_stack        = 192K
thread_cache_size       = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover         = BACKUP
#max_connections        = 100
#table_cache            = 64
#thread_concurrency     = 10
#
# * Query Cache Configuration
#
query_cache_limit   = 4M
query_cache_size        = 512M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id      = 1
#log_bin            = /var/log/mysql/mysql-bin.log
expire_logs_days    = 10
max_binlog_size         = 100M
#binlog_do_db       = include_database_name
#binlog_ignore_db   = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

[mysqldump]
quick
quote-names
max_allowed_packet  = 512M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completition

[isamchk]
key_buffer      = 512M

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

The import has been running for 3 days and so far 9.9 GB has been imported. The database has both MyISAM and InnoDB tables. What can I do to improve the import performance?

I have tried exporting each table separately in gz format with mysqldump and importing each table through a PHP script that executes the following code:

$dir = "./";
$files = scandir($dir, 1);   // list files in descending order
array_pop($files);           // drop the "." entry
array_pop($files);           // drop the ".." entry
$tablecount = 0;
foreach ($files as $file) {
    $tablecount++;
    echo $tablecount."     ";
    echo $file."\n";
    // decompress each per-table dump and pipe it into the cms database
    $command = "gunzip < ".$file." | mysql -u root -pubuntu cms";
    echo exec($command);
}
dharanbro asked Apr 15 '15 at 07:04



People also ask

How do I import a large SQL file into SQLyog?

In SQLyog there is an option under Tools -> Execute SQL Script. If there are errors it shows a pop-up; click the "Open Error File" button and it will show you the error in the SQL file. Alternatively, you can open your SQL file in the Query window and run it directly in SQLyog.

How can I speed up a MySQL restore from a dump file?

Summary (for best dump restore performance): set innodb_buffer_pool_size to half of RAM, set innodb_log_file_size to 1G, and set innodb_flush_log_at_trx_commit to 0. Disabling innodb_doublewrite is recommended only for fastest performance; it should stay enabled in production.
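To make that concrete, here is a minimal my.cnf sketch applying those settings to the 8 GB machine described in the question. The values are assumptions to be tuned for your workload, and on older MySQL versions changing innodb_log_file_size requires a clean shutdown and removal of the old ib_logfile* files before restarting.

[mysqld]
# roughly half of the 8 GB of RAM on this machine
innodb_buffer_pool_size = 4G
# a larger redo log means fewer flushes during bulk inserts
innodb_log_file_size = 1G
# flush the log roughly once per second instead of on every commit
# (acceptable while restoring a dump, not recommended for production)
innodb_flush_log_at_trx_commit = 0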


2 Answers

A lot of parameters are missing that would be needed to fully understand the cause of the problem, such as:

  1. MySQL version
  2. Disk type and speed
  3. Free memory on the server before you start the MySQL server
  4. iostat output before and at the time of the mysqldump
  5. The parameters you used to create the dump file in the first place

and many more.

So I'll guess that your problem is in the disks: I manage 150 MySQL instances, one of them with 3 TB of data, and usually the disk is the problem.

Now to the solution:

First of all - your MySQL is not configured for best performance.

You can read about the most important settings to configure at Percona blog post: http://www.percona.com/blog/2014/01/28/10-mysql-settings-to-tune-after-installation/

Especially check the parameters:

innodb_buffer_pool_size
innodb_flush_log_at_trx_commit
innodb_flush_method
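To see what the server is currently running with, you can query the live instance; this is plain MySQL syntax, assuming only the root login already used in the question:

mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_buffer_pool_size', 'innodb_flush_log_at_trx_commit', 'innodb_flush_method');"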

If your problem is the disk, then reading the dump file from the same drive makes the problem worse.

And if your MySQL server starts to swap because it does not have enough RAM available, your problem becomes even bigger.

You need to run diagnostics on your machine before and at the time of the restore procedure to figure that out.
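For example, a quick first look is possible with standard Linux tools (procps and sysstat, nothing MySQL-specific); run them before and during the import and compare:

free -m          # is the buffer pool pushing the machine into swap?
vmstat 5         # si/so columns show swapping, wa shows CPU time spent waiting on I/O
iostat -xm 5     # per-device utilisation and await times while the import runs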

Furthermore, I can suggest another technique for the rebuild task that works faster than mysqldump.

It is Percona Xtrabackup - http://www.percona.com/doc/percona-xtrabackup/2.2/

You will need to create the backup with it and restore from it, or rebuild from a running server directly with the streaming option.
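As a rough sketch of the streaming variant (the host name and target path are placeholders, and the exact options depend on your XtraBackup version, so check the documentation first):

# on the source server: stream the backup straight to another host over SSH
innobackupex --user=root --password=PASS --stream=tar ./ | ssh user@desthost "cat - > /data/backups/backup.tar"
# the resulting archive must be extracted with tar's -i option, e.g. tar -xif backup.tar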

Also, starting from MySQL 5.5, InnoDB generally performs faster than MyISAM. Consider converting all your tables to it.
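A minimal sketch of doing that conversion in bulk is to generate the ALTER statements from information_schema; the database name cms is taken from the PHP script in the question and is otherwise an assumption:

# generate one ALTER TABLE per remaining MyISAM table, then review and apply
mysql -u root -p -N -e "SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' ENGINE=InnoDB;') FROM information_schema.tables WHERE engine = 'MyISAM' AND table_schema = 'cms';" > convert_to_innodb.sql
mysql -u root -p cms < convert_to_innodb.sql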

Tata answered Oct 09 '22 at 18:10



Doing a dump and restore in the manner described will mean MySQL has to completely rebuild indexes as the data is imported. It also has to parse the data each time.

It would be much more efficient if you could copy the data files in a format MySQL already understands. A good way of doing this is to use innobackupex from Percona

(open source and distributed as part of XtraBackup, available to download from here).

This will take a snapshot of MyISAM tables, and for InnoDB tables it will copy the underlying files, then replay the transaction log against them to ensure a consistent state. It can do this from a live server with no downtime (I have no idea whether that is a requirement of yours).

I suggest you read the documentation, but to take a backup in its simplest form use:

$ innobackupex --user=DBUSER --password=DBUSERPASS /path/to/BACKUP-DIR/
$ innobackupex --apply-log /path/to/BACKUP-DIR/

If the data is on the same machine, then innobackupex even has a simple restore command:

$ innobackupex --copy-back /path/to/BACKUP-DIR 

There are many more options and different ways of actually doing the backup, so I would really encourage you to have a good read of the documentation before you begin.

For a speed reference, our slow test server, which does about 600 IOPS, can restore a 500 GB backup in about 4 hours using this method.

Lastly: you asked what could be done to speed up importing. It mostly depends on what the bottleneck is. Typically, import operations are I/O bound (you can test this by checking for I/O waits), and the way to speed that up is with faster disk throughput: either faster disks themselves, or more of them working in unison.

AndySavage answered Oct 09 '22 at 18:10
