I understand that the COPY command imports lots of data very efficiently. But copying data from one table to another with the INSERT command is slow. Is there a more efficient way to copy data from one table to the other? Or should I use the UNLOAD command to unload the table into S3, then COPY it back from there?
The COPY command is an extension of SQL supported by Redshift. Therefore, the COPY command needs to be issued from an SQL client. You mention that you have configured SQL Workbench. Once you connect to the Redshift cluster, run the command from within that connection.
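For example, once connected through SQL Workbench, a COPY command might look like the sketch below. The bucket name, file path, and IAM role ARN are placeholders you would replace with your own values.

```sql
-- Hypothetical example: load a CSV file from S3 into my_table.
-- 's3://my-bucket/...' and the IAM role ARN are placeholders.
COPY my_table
FROM 's3://my-bucket/data/my_table.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV;
```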
You can do:

insert into new_table (select * from old_table);
But for bigger tables you should always UNLOAD from the old table and then COPY into the new one.

The COPY command loads data in parallel, so it is fast. UNLOAD also exports data in parallel. So UNLOAD followed by COPY is a good option for copying data from one table to another.
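The UNLOAD-then-COPY approach described above might look like this sketch. The bucket, prefix, and IAM role are placeholders, and it assumes new_table already exists with a matching column layout.

```sql
-- Hypothetical sketch: unload old_table to S3, then COPY into new_table.
-- The bucket/prefix and IAM role ARN are placeholders.
UNLOAD ('SELECT * FROM old_table')
TO 's3://my-bucket/staging/old_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';

-- new_table must already be created with the same columns as old_table.
COPY new_table
FROM 's3://my-bucket/staging/old_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';
```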
When you run the COPY command, it automatically applies encoding (compression) to your data. When you do INSERT INTO (SELECT * FROM ...), it will not apply compression/encoding; you need to explicitly specify encoding types when you create the new table.
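To illustrate the point about explicit encodings, here is a sketch of creating a table with per-column ENCODE clauses before populating it with INSERT INTO ... SELECT. The table name, columns, and chosen encodings are illustrative, not a recommendation for any particular schema.

```sql
-- Hypothetical example: declare column encodings explicitly so that
-- INSERT INTO ... SELECT does not leave the new table uncompressed.
CREATE TABLE new_table (
    id      BIGINT       ENCODE az64,
    name    VARCHAR(100) ENCODE lzo,
    created TIMESTAMP    ENCODE az64
);

INSERT INTO new_table SELECT * FROM old_table;
```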
If you want to copy the records from source_table to target_table, the query is:

insert into target_table select * from source_table;