 

Best way to update table schema for huge tables (SQL Server)

I have a few huge tables on a production SQL Server 2005 database that need a schema update. This is mostly the addition of columns with default values, plus some column type changes that require a simple transformation. The whole thing can be done with a simple SELECT INTO, where the target is a table with the new schema.
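For illustration, a minimal sketch of that kind of SELECT INTO might look like the following; the table and column names (dbo.BigTable, dbo.BigTable_New, OldVarcharCol and so on) are hypothetical, and the real transforms would go in the SELECT list:

    SELECT
        Id,
        CAST(OldVarcharCol AS int) AS OldVarcharCol,  -- simple type change
        SomeOtherCol,
        CAST(0 AS bit)             AS NewFlagCol      -- new column with its default value
    INTO dbo.BigTable_New
    FROM dbo.BigTable;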

Our tests so far show that even this simple operation, done entirely inside the server (not fetching or pushing any data), could take hours, if not days, on a table with many millions of rows.

Is there a better update strategy for such tables?

Edit 1: We are still experimenting, with no definitive conclusion yet. One of my transformations to the new table involves merging every five rows into one, and some code has to run on every transformation. The best performance we could get so far would still take at least a few days to convert a 30M-row table.
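For what it's worth, a set-based way to express that five-rows-to-one merge (rather than per-row code) might look like the sketch below; the table, the ordering column Id, and the aggregates are hypothetical stand-ins for the real merge rule:

    ;WITH numbered AS (
        SELECT *,
               (ROW_NUMBER() OVER (ORDER BY Id) - 1) / 5 AS grp  -- bucket consecutive rows in groups of five
        FROM dbo.BigTable
    )
    SELECT MIN(Id)        AS Id,
           SUM(Amount)    AS Amount,      -- replace with the real merge logic
           MAX(UpdatedAt) AS UpdatedAt
    INTO dbo.BigTable_Merged
    FROM numbered
    GROUP BY grp;

The integer division buckets every five consecutive rows into one group, which the GROUP BY then collapses into a single output row.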

Will using SQLCLR in this case (doing the transformation with code running inside the server) give me a major speed boost?

asked by Ron Harlev


1 Answer

We had a similar problem, and I've found that the fastest way to do it is to export the data to delimited files (in chunks, depending on row size; in our case, each file had 500,000 rows), applying any transforms during the export, then drop and recreate the table with the new schema, and finally do a bcp import from the files.
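To make that concrete, here is a hedged sketch of that flow; the database, table, column, path, and server names are all hypothetical, and the bcp command lines are shown as comments because they run from a command prompt rather than inside the T-SQL batch:

    -- 1) Export in chunks, applying the transforms in the query itself
    --    (one file per slice of the key range, roughly 500,000 rows each).
    --    Run from a command prompt, all on one line:
    --
    --    bcp "SELECT Id, CAST(OldVarcharCol AS int), 0 FROM MyDb.dbo.BigTable WHERE Id BETWEEN 1 AND 500000" queryout c:\export\chunk_001.dat -c -t"|" -S MyServer -T

    -- 2) Drop and recreate the table with the new schema
    --    (keys and indexes are often added after the load instead):
    DROP TABLE dbo.BigTable;

    CREATE TABLE dbo.BigTable (
        Id            int NOT NULL PRIMARY KEY,
        OldVarcharCol int NOT NULL,            -- type changed from varchar
        NewFlagCol    bit NOT NULL DEFAULT 0   -- new column with a default
    );

    -- 3) Import each file back with bcp; a batch size and the TABLOCK hint
    --    help keep the load fast. Again from a command prompt:
    --
    --    bcp MyDb.dbo.BigTable in c:\export\chunk_001.dat -c -t"|" -S MyServer -T -b 50000 -h "TABLOCK"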

A 30-million-row table took a couple of hours using that method, whereas an ALTER TABLE took over 30 hours.

answered by rjrapson