
Making structural changes to very large tables in an online environment

So here's what I'm facing.

The Problem

  • A large table, with ~230,000,000 rows.
  • We want to change the clustered index and primary key of this table to a simple bigint identity field. There is one other empty field being added to the table, for future use. (A sketch of what the target table might look like follows this list.)
  • The existing table has a composite key. For the sake of argument, let's say it's two bigints. The first part may have anywhere from 1 to 10,000 'children' in the second part of the key.
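
For concreteness, here is a rough sketch of what the target table might look like. All names here are hypothetical (NewTable, KeyPart1/KeyPart2 for the old composite key, Payload as a stand-in for the real data columns, FutureUse for the new empty field):

    -- Hypothetical target table: new bigint identity clustered primary key,
    -- old composite key kept as plain columns, plus the new empty field.
    CREATE TABLE dbo.NewTable
    (
        Id        bigint IDENTITY(1,1) NOT NULL,
        KeyPart1  bigint NOT NULL,      -- first part of the old composite key
        KeyPart2  bigint NOT NULL,      -- second part of the old composite key
        Payload   varchar(100) NULL,    -- stand-in for the table's real data columns
        FutureUse bigint NULL,          -- new field being added for later use
        CONSTRAINT PK_NewTable PRIMARY KEY CLUSTERED (Id)
    );

    -- If rows will still be looked up by the old key, a supporting unique index
    -- also makes the copy/trigger existence checks below cheap.
    CREATE UNIQUE NONCLUSTERED INDEX IX_NewTable_OldKey
        ON dbo.NewTable (KeyPart1, KeyPart2);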

Requirements

  • Minimal downtime, ideally no longer than the time it takes to run sp_rename.
  • Existing rows may change while we're copying data. The updates must be reflected in the new table.

Ideas

  1. Put a trigger on the existing table to update the corresponding row in the new table if it already exists there.
  2. Iterate through the original table, copying data into the new table ~10,000 rows at a time, perhaps batching on ranges of ~2,000 values of the first part of the old key.
  3. When the copy is complete, rename the old table to "ExistingTableOld" and the new one from "NewTable" to "ExistingTable". This should allow stored procs to continue to run without intervention. (Rough sketches of these three steps are below.)
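
Here are rough, hedged sketches of the three ideas, reusing the hypothetical names from the table definition above (none of these are the real object names).

Idea 1, a trigger that pushes changes into the new table when the row has already been copied (inserts into and deletes from the old table would need the same treatment):

    CREATE TRIGGER trg_ExistingTable_SyncToNew
    ON dbo.ExistingTable
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Only rows that already exist in the new table are touched; rows not
        -- yet copied will pick up the new values when their batch runs.
        UPDATE nt
        SET    nt.Payload = i.Payload
        FROM   dbo.NewTable AS nt
        JOIN   inserted     AS i
               ON  nt.KeyPart1 = i.KeyPart1
               AND nt.KeyPart2 = i.KeyPart2;
    END;

Idea 2, the batched copy, looping over ranges of the first key part (~2,000 values per batch); the NOT EXISTS guard makes the loop safe to stop and restart:

    DECLARE @FromKey    bigint,
            @MaxKey     bigint,
            @BatchWidth bigint = 2000;  -- ~2,000 values of the first key part per batch

    SELECT @FromKey = MIN(KeyPart1), @MaxKey = MAX(KeyPart1)
    FROM   dbo.ExistingTable;

    WHILE @FromKey <= @MaxKey
    BEGIN
        INSERT INTO dbo.NewTable (KeyPart1, KeyPart2, Payload)
        SELECT s.KeyPart1, s.KeyPart2, s.Payload
        FROM   dbo.ExistingTable AS s
        WHERE  s.KeyPart1 >= @FromKey
          AND  s.KeyPart1 <  @FromKey + @BatchWidth
          AND  NOT EXISTS (SELECT 1
                           FROM   dbo.NewTable AS nt
                           WHERE  nt.KeyPart1 = s.KeyPart1
                             AND  nt.KeyPart2 = s.KeyPart2);

        SET @FromKey += @BatchWidth;
    END;

Idea 3, the cut-over, done in one short transaction so callers only ever see one "ExistingTable":

    BEGIN TRANSACTION;
        EXEC sp_rename 'dbo.ExistingTable', 'ExistingTableOld';
        EXEC sp_rename 'dbo.NewTable',      'ExistingTable';
        -- The sync trigger moves with the renamed old table and can be
        -- dropped once ExistingTableOld is retired.
    COMMIT TRANSACTION;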

Are there any glaring omissions in the plan, or best practices I'm ignoring?

asked Nov 05 '22 by DigDoug


1 Answer

Difficult problem. Your plan sounds good, but I'm not sure you really need to batch the copy, as long as you run it at the READ UNCOMMITTED transaction isolation level so the read side doesn't generate shared locks on the source table.
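
For illustration, with the same hypothetical names as in the question, the unbatched copy would look something like this. READ UNCOMMITTED (equivalent to WITH (NOLOCK) on the source) only removes shared locks on the read side; the insert into the new table still locks that table as usual:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    INSERT INTO dbo.NewTable (KeyPart1, KeyPart2, Payload)
    SELECT s.KeyPart1, s.KeyPart2, s.Payload
    FROM   dbo.ExistingTable AS s;  -- no shared locks taken on the source table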

answered Nov 09 '22 by Tim Rogers