Migrating `int` to `bigint` in PostgreSQL without any downtime?

I have a database that is going to experience the integer exhaustion problem that Basecamp famously faced back in November 2018. I have several months to figure out what to do.

Is there a no-downtime-required, proactive solution to migrating this column type? If so what is it? If not, is it just a matter of eating the downtime and migrating the column when I can?

Is this article sufficient, assuming I have several days/weeks to perform the migration now before I'm forced to do it when I run out of ids?

asked Feb 20 '19 by jefflunt

People also ask

Does Postgres have Bigint?

PostgreSQL provides an integer type named BIGINT. It requires 8 bytes of storage and can store integers in the range -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807.

How do I change the datatype of a column in PostgreSQL?

First, specify the name of the table to which the column belongs in the ALTER TABLE clause. Second, give the name of the column whose data type will be changed in the ALTER COLUMN clause. Third, provide the new data type for the column after the TYPE keyword.
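
For example, a minimal sketch on a hypothetical accounts table:

    ALTER TABLE accounts ALTER COLUMN id TYPE bigint;

Note that changing int to bigint this way rewrites the whole table while holding an ACCESS EXCLUSIVE lock, which is exactly the downtime the question is trying to avoid.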


2 Answers

Here is a solution for pre-v10 databases (v10 and later can use logical replication instead), provided all transactions are short:

  • Add a bigint column to the table.
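
    A minimal sketch, assuming a hypothetical table accounts whose integer key column is id (the same names are used in the sketches below). Since the new column is nullable and has no default, this is only a quick catalog change:

    ALTER TABLE accounts ADD COLUMN id_new bigint;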

  • Create a BEFORE trigger that sets the new column whenever a row is added or updated.
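
    A sketch of such a trigger, keeping the hypothetical names from above (EXECUTE PROCEDURE because this targets pre-v10):

    CREATE FUNCTION set_id_new() RETURNS trigger
       LANGUAGE plpgsql AS
    $$BEGIN
       -- keep the bigint copy in sync with the old integer column
       NEW.id_new := NEW.id;
       RETURN NEW;
    END;$$;

    CREATE TRIGGER set_id_new
       BEFORE INSERT OR UPDATE ON accounts
       FOR EACH ROW EXECUTE PROCEDURE set_id_new();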

  • Run a series of updates that set the new column from the old one where it IS NULL. Keep those batches short so you don't hold locks for long and don't cause deadlocks. Make sure these transactions run with session_replication_role = replica so they don't fire triggers.
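
    One possible shape for those batches (the key ranges are hypothetical; run each UPDATE in its own short transaction):

    SET session_replication_role = replica;

    UPDATE accounts
       SET id_new = id
       WHERE id BETWEEN 1 AND 100000
         AND id_new IS NULL;
    -- repeat with the next key range until no rows are left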

  • Once all rows are updated, create a unique index CONCURRENTLY on the new column.
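
    For example:

    CREATE UNIQUE INDEX CONCURRENTLY accounts_id_new_idx ON accounts (id_new);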

  • Add a unique constraint USING the index you just created. That will be fast.
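
    For example (this takes a short ACCESS EXCLUSIVE lock, but no table scan, because the index already exists):

    ALTER TABLE accounts
       ADD CONSTRAINT accounts_id_new_key UNIQUE USING INDEX accounts_id_new_idx;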

  • Perform the switch:

    BEGIN;
    ALTER TABLE ... DROP oldcol;
    ALTER TABLE ... RENAME newcol TO oldcol;
    COMMIT;
    

    That will be fast.

The new column has no NOT NULL constraint set. Adding one cannot be done without a long, invasive lock. But you can add a CHECK (... IS NOT NULL) constraint and create it NOT VALID. That is good enough, and you can later validate it without disruption.
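
A sketch with the hypothetical names from above (after the switch, so the column is called id again):

    ALTER TABLE accounts
       ADD CONSTRAINT accounts_id_not_null CHECK (id IS NOT NULL) NOT VALID;

    -- later; this does not block concurrent reads or writes:
    ALTER TABLE accounts VALIDATE CONSTRAINT accounts_id_not_null;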

If there are foreign key constraints, things get a little more complicated. You have to drop these and create NOT VALID foreign keys to the new column.
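
For example, with a hypothetical child table orders referencing accounts:

    ALTER TABLE orders DROP CONSTRAINT orders_account_id_fkey;

    ALTER TABLE orders
       ADD CONSTRAINT orders_account_id_fkey
       FOREIGN KEY (account_id) REFERENCES accounts (id) NOT VALID;

    -- validate later, without blocking normal operation:
    ALTER TABLE orders VALIDATE CONSTRAINT orders_account_id_fkey;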

answered Nov 15 '22 by Laurenz Albe


Create a copy of the old table, but with the ID column changed to bigint. Next, create a trigger on the old table that inserts new data into both tables. Finally, copy the data from the old table into the new one (it is a good idea to distinguish pre-trigger data from post-trigger data, for example by id if it is sequential). Once you are done, switch the tables and delete the old one.
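
A rough sketch of this approach (all names and the backfill cutoff are hypothetical, and details such as dropping the mirror trigger are elided):

    CREATE TABLE accounts_new (LIKE accounts INCLUDING ALL);
    ALTER TABLE accounts_new ALTER COLUMN id TYPE bigint;

    -- mirror newly inserted rows into the copy
    CREATE FUNCTION mirror_insert() RETURNS trigger
       LANGUAGE plpgsql AS
    $$BEGIN
       INSERT INTO accounts_new SELECT NEW.*;
       RETURN NEW;
    END;$$;

    CREATE TRIGGER mirror_insert AFTER INSERT ON accounts
       FOR EACH ROW EXECUTE PROCEDURE mirror_insert();

    -- backfill the pre-trigger rows, then switch
    INSERT INTO accounts_new SELECT * FROM accounts WHERE id < 100000000;

    BEGIN;
    ALTER TABLE accounts RENAME TO accounts_old;
    ALTER TABLE accounts_new RENAME TO accounts;
    COMMIT;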

This obviously requires twice as much space (and time for the copy), but it works without any downtime.

answered Nov 15 '22 by freakish