 

How do I efficiently change a MySQL table structure on a table with millions of entries?

I have a MySQL database that is up to about 17 GB in size and has 38 million entries. At the moment, I need to both increase the size of one column (varchar 40 to varchar 80) and add more columns.

Many of the fields are indexed, including the one that I need to change. It is part of a unique pair that is necessary for the applications to work. When I attempted to just make the change yesterday, the query ran for almost four hours without finishing, at which point I decided to cut the outage short and bring the service back up.

What is the most efficient way to make changes to something of this size?

Many of these entries are also old. If there is a good way to shard off old entries while still keeping them available, that might help with this problem by making the table a much more manageable size.

asked Oct 19 '12 by Cris Favero

1 Answer

You have some choices.

In any case, take a backup before you do any of this.

One possibility is to take your service offline and alter the table in place, as you have already tried. If you do that, disable key checks and foreign-key constraints first:

ALTER TABLE bigtable DISABLE KEYS;
SET FOREIGN_KEY_CHECKS=0;
ALTER TABLE (whatever);
ALTER TABLE (whatever else);
...
SET FOREIGN_KEY_CHECKS=1;
ALTER TABLE bigtable ENABLE KEYS;

This allows the ALTER TABLE operations to run faster; the indexes are regenerated all at once when you run ENABLE KEYS.
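
As a rough sketch only (name, extra_info, and created_at are placeholder names; substitute the column you are actually widening and the columns you are actually adding), the in-place changes might look like this:

ALTER TABLE bigtable DISABLE KEYS;
SET FOREIGN_KEY_CHECKS=0;

-- widen the indexed column from VARCHAR(40) to VARCHAR(80);
-- carry over its existing attributes (NOT NULL, default, etc.)
ALTER TABLE bigtable MODIFY name VARCHAR(80) NOT NULL;

-- add the extra columns you need (hypothetical names and types)
ALTER TABLE bigtable
    ADD COLUMN extra_info VARCHAR(255) NULL,
    ADD COLUMN created_at DATETIME NULL;

SET FOREIGN_KEY_CHECKS=1;
ALTER TABLE bigtable ENABLE KEYS;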

Another possibility is to create a new table with the new schema you want, disable the keys on the new table, and then, as @Bader suggested, insert the contents of the old table into it.

After the new table is built, re-enable the keys on it, rename the old table to something like "old_bigtable", and then rename the new table to "bigtable". A sketch of that approach is shown below.
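
A sketch of that approach, again using the hypothetical column names from above (id, name, other_col stand in for your real columns):

CREATE TABLE new_bigtable LIKE bigtable;          -- copy the old structure, including indexes

-- apply the new schema to the empty copy
ALTER TABLE new_bigtable MODIFY name VARCHAR(80) NOT NULL;
ALTER TABLE new_bigtable ADD COLUMN extra_info VARCHAR(255) NULL;
ALTER TABLE new_bigtable DISABLE KEYS;

-- copy the rows; list the old columns explicitly, the new columns stay NULL
INSERT INTO new_bigtable (id, name, other_col)
    SELECT id, name, other_col FROM bigtable;

ALTER TABLE new_bigtable ENABLE KEYS;

-- swap the tables in one atomic rename
RENAME TABLE bigtable TO old_bigtable, new_bigtable TO bigtable;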

It's possible that you can keep your service online while you're populating the new table, but rows written to the old table after the copy starts won't make it into the new one, so that may work poorly.

A third possibility is to dump your giant table to a flat file and then load it into a new table with the new layout. That is pretty much like the second possibility, except that you get a table backup for free. You can make this go pretty fast with SELECT ... INTO OUTFILE and LOAD DATA INFILE. You'll need access to your server machine's file system to do this, because the file is written and read on the server.
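
A rough sketch of the dump-and-reload route, using an assumed server-side path (/tmp/bigtable.tsv) and the same hypothetical columns as before:

-- dump the existing rows to a tab-separated file on the server
SELECT id, name, other_col
  INTO OUTFILE '/tmp/bigtable.tsv'
  FROM bigtable;

-- build the new table with the new layout, keys disabled during the load
CREATE TABLE new_bigtable LIKE bigtable;
ALTER TABLE new_bigtable MODIFY name VARCHAR(80) NOT NULL;
ALTER TABLE new_bigtable DISABLE KEYS;

LOAD DATA INFILE '/tmp/bigtable.tsv'
  INTO TABLE new_bigtable (id, name, other_col);

ALTER TABLE new_bigtable ENABLE KEYS;
RENAME TABLE bigtable TO old_bigtable, new_bigtable TO bigtable;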

In all cases, disable and then re-enable the constraints and keys to make things go fast.

answered Nov 01 '22 by O. Jones